sentences (sequence) | labels (sequence)
---|---
[
"In this paper we apply self-knowledge distillation to text summarization which we argue can alleviate problems with maximum-likelihood training on single reference and noisy datasets.",
"Instead of relying on one-hot annotation labels, our student summarization model is trained with guidance from a teacher which generates smoothed labels to help regularize training.",
"Furthermore, to better model uncertainty during training, we introduce multiple noise signals for both teacher and student models.",
"We demonstrate experimentally on three benchmarks that our framework boosts the performance of both pretrained and non-pretrained summarizers achieving state-of-the-art results.",
"1 1 Introduction Automatic summarization has enjoyed renewed interest in recent years, thanks to the popularity of neural network models and their ability to learn continuous representations without recourse to preprocessing tools or linguistic annotations.",
"The availability of large-scale datasets (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018; Narayan et al., 2018) containing hundreds of thousands of document-summary pairs has driven the development of neural architectures for summarization.",
"Several approaches have been proposed, in the vast majority sequence-to-sequence models which are trained in an end-to-end fashion with a maximum likelihood estimation loss (See et al., 2017; Celiky-ilmaz et al., 2018; Paulus et al., 2018; Gehrmann et al., 2018).",
"Despite promising results, there are specific characteristics of the summarization task which render it ill-suited to standard sequence-to-sequence training.",
"For instance, maximum-likelihood training on single reference datasets might not be optimal for summarization which is subject to a great 1 Our code is available at https://github.com/ nlpyang/NoisySumm .",
"deal of human variation (Harman and Over, 2004; Nenkova, 2006).",
"In the context of extractive summarization, different people select different sentences to include in a summary (Rath et al., 1961), and when writing abstracts, disagreement exists both in terms of writing style and the specific content deemed important for the summary (Harman and Over, 2004).",
"Although summarization models would naturally benefit from multiple target references, it is unrealistic to expect that multi-reference datasets can be created at scale for neural network training.",
"In fact, most popular benchmarks are collated opportunistically, based on summaries which only loosely correspond to the source input.",
"For example, Narayan et al. (2018) create a dataset by pairing the first sentence of a news article with the rest of the document under the assumption that the introductory sentence expresses the gist of the article.",
"Grusky et al. (2018) pair articles with metadata available in HTML pages under the assumption that HTML tags (e.g., description ) denote summary-like content.",
"In other work (Liu et al., 2018; Perez-Beltrachini et al., 2019), multi-document summarization datasets are created by viewing lead sections in Wikipedia articles as summaries of documents cited therein.",
"The inherent noise in the data collection process further hampers training with models often being prone to hallucination (Song et al., 2018; Maynez et al., 2020), and struggling to identify which content units are salient (Tan et al., 2017).",
"In this paper, we propose to alleviate these problems by turning to knowledge distillation (Bucilu et al., 2006; Ba and Caruana, 2014; Hinton et al., 2015; Kim and Rush, 2016).",
"Knowledge distillation transfers knowledge from a larger teacher network to a smaller student model by training the student to imitate the teacher's outputs (in addition to learning from the training data set).",
"In born-again networks, (Furlanello et al., 2018) the teacher and student have the same neural architecture and model size, and yet surprisingly the student is able to surpass the teacher's accuracy.",
"Intuitively, such self-knowledge distillation is effective because the teacher's output distribution provides a richer training signal capturing additional information about training examples.",
"In the context of summarization, the teacher can benefit student training in two ways.",
"It provides a softened distribution over reference summaries thereby enriching the single reference setting.",
"Moreover, the teacher's distribution is (to a certain extent) de-noised enabling the student to circumvent inaccuracies in the training data.",
"We further capitalize on the idea that both the teacher and the student should be robust to noise and introduce several noise injection techniques which together with knowledge distillation improve model generalization and performance.",
"We present experiments on several summarization benchmarks (Narayan et al., 2018; Perez-Beltrachini et al., 2019; Hermann et al., 2015) covering singleand multi-document summarization settings as well as different types of summaries (e.g., verbose or more telegraphic).",
"Across datasets, the proposed framework boosts the performance of pretrained and non-pretrained abstractive summarizers, achieving new state-of-the-art results.",
"Neural approaches to abstractive summarization conceptualize the task as a sequence-to-sequence problem, where the encoder maps the sequence of tokens in the source document x = [ x 1 , ..., x n ] to a sequence of continuous representations z = [ z 1 , ..., z n ] , and the decoder autoregressively generates the target summary y = ( y 1 , ..., y m ) token-by-token, hence modeling the conditional probability p ( y 1 , ..., y m | x 1 , ..., x n ) .",
"Rush et al. (2015) and Nallapati et al. (2016) were among the first to apply the neural encoder-decoder architecture to text summarization.",
"See et al. (2017) enhance this model with a pointer-generator network which allows to copy words from the source text, and a coverage mechanism which keeps track of words that have been summarized.",
"Other work develops abstractive models trained end-to-end with reinforcement learning based on multiple encoders and hierarchical attention (Celikyilmaz et al., 2018) or a coverage mechanism where the decoder attends over previously generated words (Paulus et al., 2018).",
"Gehrmann et al. (2018) follow a bottom-up approach where a content selector first determines which phrases in a source document should be part of the summary, and a copy mechanism is applied only to preselected phrases during decoding.",
"Although the majority of summarization systems are composed of LSTM units, Narayan et al. (2018) and (Perez-Beltrachini et al., 2019) propose abstractive models based on convolutional neural networks.",
"Pretrained language models have recently emerged as a key technology for achieving impressive gains in abstractive summarization (Liu and Lapata, 2019; Lewis et al., 2020; Song et al., 2019).",
"These models first pretrain a language model with self-supervised objectives on large corpora and then fine-tune it on summarization datasets.",
"Liu and Lapata (2019) combine a pretrained encoder based on BERT (Devlin et al., 2019) with a randomly initialized decoder, demonstrating substantial gains on summarization performance.",
"Song et al. (2019) pretrain an encoder-decoder framework to reconstruct (masked) fragments within a sentence and then fine-tune it on summarization datasets.",
"In the same vein, Lewis et al. (2020) present BART, an encoder-decoder Transformer (Vaswani et al., 2017), pretrained by reconstructing a text corrupted with several arbitrary noising functions.",
"Bao et al. (2020) design UNILM v2, a Transformer-based neural network pretrained as a pseudo-masked language model.",
"Qi et al. (2020) introduce their own novel self-supervised task based on future n -gram prediction.",
"Knowledge Distillation refers to a class of methods for training a new smaller student network by learning from a teacher network (in addition to learning from the training data).",
"It is generally assumed that the teacher has been previously trained, and the parameters for the student are estimated by matching the student's predictions to the teacher.",
"Let T and S denote teacher and student models, respectively.",
"Let f T and f S be functions of the teacher and student.",
"The models are typically neural networks and function f can be in principle de-fined using the output of any network layer (e.g., a hidden or softmax layer).",
"Knowledge distillation methods are commonly expressed as minimizing an objective function over training set X : LKD = (cid:88) x i X l ( f T ( x i ) , f S ( x i )) (1) where l () is a loss function that penalizes the difference between the teacher and the student.",
"Specific instantiations of this general framework include minimizing the teacher/student difference based on output logits, intermediate hidden representations, attention maps, and derivatives of the loss to the input (Ba and Caruana, 2014; Romero et al., 2014; Zagoruyko and Komodakis, 2017; Czarnecki et al., 2017).",
"Other work integrates an ensemble of teachers in order to improve the student (Urban et al., 2016), trains a succession of students (Furlanello et al., 2018), introduces a teacher assistant for better knowledge transfer (Mirzadeh et al., 2019), and regularizes multi-task agents (Parisotto et al., 2015; Teh et al., 2017) in reinforcement learning.",
"Compared to direct training, knowledge distillation provides a more stable training process which leads to better performing student models (Hinton et al., 2015; Phuong and Lampert, 2019).",
"Recent work (Furlanello et al., 2018; Hahn and Choi, 2019) also sheds light on leveraging knowledge distillation for training a high-performing student model with the same size as the teacher (see the discussion in the next section).",
"Knowledge distillation has been also shown to improve results for various NLP tasks.",
"Tan et al. (2019) use it to transfer knowledge from BERT to smaller models, helping them approach or exceed the quality of much larger pretrained neural networks.",
"Aside from distilling large models into smaller ones (Kim and Rush, 2016; Mou et al., 2016) or ensembles of models into single models (Kuncoro et al., 2016; Liu et al., 2019), knowledge distillation has been further used in multi-task learning, e.g., to teach a multi-task student from single-task teachers (Clark et al., 2019).",
"Self-knowledge distillation refers to the special case where the teacher and student have identical neural network architectures.",
"Surprisingly, perhaps, it has been consistently observed (Furlanello et al., 2018; Yang et al., 2019; Ahn et al., 2019; Liu et al., 2020) that students trained with self-knowledge distillation outperform their teachers by significant margins in several computer vision and language modeling tasks.",
"Recent efforts have also focused on understanding why this happens, e.g., by observing that knowledge transferred by the teacher is localized mainly in higher layers and does not affect early (feature extraction) layers much (Got-mare et al., 2019), by interpreting the teacher's knowledge as importance weighting (Furlanello et al., 2018), by showing that early-stopping is crucial (Dong et al., 2019), and by studying how self-distillation modifies regularization (Mobahi et al., 2020).",
"For text summarization, we argue that self-knowledge distillation can potentially alleviate problems in conventional maximum likelihood training.",
"Summarization models are typically trained on single reference document-summary pairs, however considering a single summary as the only correct reference during maximum likelihood training can harm model generalization (El-bayad et al., 2018) and is counter-intuitive.",
"There can be multiple valid summaries for a source input (Harman and Over, 2004; Nenkova, 2006) and even the single reference summaries available are not entirely goldstandard due to the inherent noise in the automatic construction of large-scale summarization datasets (Kryscinski et al., 2019).",
"With self-knowledge distillation, teacher outputs provide softened distributions of the reference summaries, which can be viewed as an enrichment of the single reference setting and a reweighting of gold summaries to prevent the student from becoming over-confident in its predictions.",
"The standard objective for an abstractive summarization model is negative log likelihood: LNLL = T (cid:88) t =1 log ( p ( y t | y t 1 1 , x )) (2) where x indicates the source document, y t 1 indicates the t -th token in the target summary and y t 1 1 are the first t 1 tokens in the target summary.",
"We further assume that the teacher is a fully trained neural model, the student has the same architecture with the teacher, and access to the learned teacher's output distribution p T ( y t | y 1: t 1 , x )) : LKD = T (cid:88) t =1 KL ( p T ( y t | y t 1 1 , x ) , p S ( y t | y t 1 1 , x )) (3) where p T ( y t | y t 1 1 , x ) and p S ( y t | y t 1 1 , x ) are model outputs from the teacher and student, respectively.",
"It is common practice to compensate for no direct access to the training data (see Equation (3)) by interpolating between the two losses in Equations (3) and (2).",
"So, the final objective for training the student becomes: LFINAL = (1 ) LNLL + LKD (4) where is a mixture parameter combining the one-hot distribution and the teacher distribution.",
"We further want our summarization systems to be robust to natural noise found in existing datasets.",
"Injecting noise onto training samples has been proven useful for improving model generalization (Xie et al., 2019).",
"We extend this idea for knowledge distillation, and propose a novel framework for introducing noise to both distillation signals and training data.",
"We design different noise mechanisms for the teacher and student, and select the best noise configuration experimentally.",
"Noisy Teacher To inject noise into the distillation signals, we incorporate a teacher dropout mechanism (Bul et al., 2016), where dropout is kept active while generating teacher predictions for training the student.",
"In this manner, the teacher generates variable supervision labels for the student with some degree of uncertainty, alleviating the problem of overfitting to the teacher predictions.",
"Meanwhile, it can also be considered as approximating an average ensemble from many neural networks (Bul et al., 2016).",
"Noisy Student To inject noise into the training data, we propose various mechanisms to perturb the source input.",
"Random perturbation is effective in enforcing local smoothness for training text generation models under the assumption that semantically similar inputs can be mapped to the same or similar targets.",
"A related approach has been shown to improve the performance of machine translation models in self-training settings (He et al., 2019).",
"For text summarization, where the input is usually a long document, we design the following perturbation policies:",
"2. Word Replacement : for each word x i in the source document, we calculate a candidate replacement list by selecting k words most similar to x i from the vocabulary.",
"The similarity is calculated as the cosine distance between the embedding of x i and embeddings of all other words in the vocabulary.",
"Then, a source word is replaced with a word randomly selected from its candidate replacement list with probability p r .",
"3. Sentence Drop : a sentence in the source document is removed with probability p s .",
"4. Gaussian Noise : a Gaussian noise vector e is multiplied with the embeddings x of input words: x x e , e N ( I, 2 I ) .",
"These perturbation policies can be applied simultaneously or successively as a pipeline.",
"We experimentally found the best combination for our task to be the sequential application of word drop, followed by word replacement, and sentence drop.",
"Although Gaussian noise has been effective in natural language understanding tasks (Zhang and Yang, 2018), we found it not to be helfpul in our summarization experiments.",
"The knowledge distillation loss with a student trained on noisy data becomes: LKD = T (cid:88) t =1 KL ( p T ( y t | y t 1 1 , x ) , p S ( y t | y t 1 1 , x )) (6) where x indicates perturbed source input.",
"In this section, we describe the summarization datasets used in our experiments and discuss various implementation details.",
"We evaluated our model on two single-document summarization datasets, namely the CNN/DailyMail news highlights (Hermann et al., 2015) and XSum (Narayan et al., 2018), and one multi-document summarization dataset, i.e., WikiCatSum (Perez-Beltrachini et al., 2019).",
"These datasets represent different summary styles ranging from highlights to very brief-one sentence summaries.",
"The summaries also vary with respect to the type of rewriting operations they exemplify (e.g., CNN/DailyMail showcases more cut and paste operations while XSum is genuinely abstractive).",
"Finally, two of these CNN/DailyMail XSum Without Pretraining R1 R2 RL R1 R2 RL LEAD 40.42 17.62 36.67 16.30 1.60 11.95 PTRNET 39.53 17.28 36.38 28.10 8.02 21.72 TransformerAbs 40.21 17.76 37.09 31.04 10.48 24.54 + SKD 40.64 18.10 37.43 32.22 11.45 25.56 + SKD + Noisy T 40.79 18.24 37.57 32.32 11.56 25.72 + SKD + Noisy T + Noisy S 40.86 18.27 37.66 32.76 11.88 26.07 BASE-size Pretrained Models R1 R2 RL R1 R2 RL MASSBASE (123M) 42.12 19.50 39.01 39.75 17.24 31.95 BERTSUMABS (156M) 41.72 19.39 38.76 38.76 16.33 31.15 UNI LMv2 BASE (110M) 43.45 20.71 40.49 43.69 20.71 35.73 + SKD (110M) 43.44 20.68 40.51 43.76 21.04 36.04 + SKD + Noisy T (110M) 43.59 21.01 40.66 44.11 21.30 36.32 + SKD + Noisy T + Noisy S (110M) 43.77 20.98 40.82 44.14 21.34 36.35 LARGE-size Pretrained Models R1 R2 RL R1 R2 RL UNILMLARGE (340M) 43.08 20.43 40.34 BARTLARGE (400M) 44.16 21.28 40.90 45.14 22.27 37.25 T5 11B (11B) 42.05 20.34 39.40 Table 1: ROUGE F1 results on CNN/DailyMail and XSUM test sets (R1 and R2 are shorthands for unigram and bigram overlap; RL is the longest common subsequence).",
"datasets (XSum and WikiCatSum) were created automatically following various assumptions about the correspondence of purported summaries to the source input.",
"CNN/DailyMail contains news articles and associated highlights, i.e., a few bullet points written by journalists which give a brief overview of the article.",
"We used the standard splits of Hermann et al. (2015) for training, validation, and testing (90,266/1,220/1,093 CNN documents and 196,961/12,148/10,397 DailyMail documents).",
"We did not anonymize entities.",
"Sentences were split with the Stanford CoreNLP toolkit (Manning et al., 2014) and the dataset was pre-processed following See et al. (2017).",
"Input documents were truncated to 512 tokens.",
"XSum contains 226,711 news articles accompanied with a one-sentence summary, answering the question What is this article about?.",
"We used the splits of Narayan et al. (2018) for training, validation, and testing (204,045/11,332/11,334) and followed the pre-processing introduced in their work.",
"Input documents were also truncated to 512 tokens.",
"WikiCatSum is a multi-document summarization dataset derived from WikiSum (Liu et al., 2018).",
"The target summary is the lead section of a Wikipedia article, and the source input are webpages related to this article.",
"WikiCatSum (Perez-Beltrachini et al., 2019) represents three domains from the original Wikisum dataset under the assumption that these vary in terms of the topics the summaries discuss and their linguistic characteristics.",
"Aside from the summaries, the dataset contains the input webpages whose length is truncated to the first 800 tokens.",
"WikiCatSum contains 62,545 samples for the Company domain, 59,973 samples for the Film domain, and 60,816 samples for the Animal domain.",
"For all datasets, we evaluated our self-knowledge distillation framework in two settings.",
"In the first setting, our models are non-pretrained while in the second setting we take advantage of pretrained language models which have demonstrated impressive improvements in summarization (Lewis et al., 2020; Liu and Lapata, 2019; Bao et al., 2020).",
"Specifically, we adopt UNILM v2 (Bao et al., 2020) as the pretrained model.",
"UNILM v2 is a Transformer-based neural network (Vaswani et al., 2017) with 12 Transformer layers and 12 attention Company Film Animal All Without Pretraining R1 R2 RL R1 R2 RL R1 R2 RL R1 R2 RL CV-S2S 24.5 9.4 19.9 34.6 19.8 30.7 42.2 28.4 38.5 33.8 19.2 29.7 CV-S2D 27.6 10.5 21.3 37.7 20.8 32.0 42.3 27.3 37.1 35.9 19.5 30.1 TF-S2S 26.0 9.5 20.4 36.5 18.8 31.0 44.0 28.8 40.0 35.5 19.0 30.5 + SKD 26.8 9.9 20.9 37.2 19.3 31.8 44.3 29.0 40.3 36.1 19.4 31.0 + SKD + Noisy T 27.2 10.3 21.0 37.7 20.6 32.0 44.6 29.1 40.4 36.5 20.0 31.1 + SKD + Noisy T + Noisy S 27.4 10.4 21.3 37.9 21.0 32.2 44.6 29.0 40.4 36.6 20.1 31.3 With Pretraining R1 R2 RL R1 R2 RL R1 R2 RL R1 R2 RL UNI LMv2 BASE 33.32 14.36 25.39 42.51 25.92 36.54 45.45 31.69 40.91 40.4 24.0 34.3 + SKD 33.20 14.66 25.53 42.39 25.90 36.53 45.59 31.87 41.12 40.4 24.1 34.4 + SKD + Noisy T 33.42 14.87 25.80 42.60 26.02 36.65 45.75 32.19 41.30 40.6 24.4 34.6 + SKD + Noisy T + Noisy S 33.50 14.95 25.85 42.71 26.09 36.77 45.86 32.23 41.40 40.7 24.4 34.7 Table 2: ROUGE F1 results on WikiCatSum test sets (R1 and R2 are shorthands for unigram and bigram overlap; RL is the longest common subsequence).",
"heads.",
"It is pretrained as a pseudo-masked language model on a large corpus (label smoothing is applied with smoothing factor 0 . 1 ).",
"We fine-tuned our teacher models following the procedure outlined in Bao et al. (2020).",
"In the non-pretrained setting, we adopt a Transformer encoder-decoder model with 6 layers, 768 hidden size and 2,048 feed-forward filter size.",
"Label smoothing was also used with smoothing factor 0 .",
"1 .",
"All teacher models in this setting were trained from randomly initialized parameters following Liu and Lapata (2019).",
"In all knowledge distillation experiments, student models have the same neural network architecture with their teachers and are trained with the same hyperparameters as the teacher models.",
"The best teacher and student model are selected by evaluating perplexity on the development set.",
"For noisy distillation models, word drop probability p d was set to 0 .",
"1 .",
"The candidate length k for word replacement was 10 and word replacement probability p r was 0 .",
"1 .",
"Sentence drop probability p s was 0 .",
"05 .",
"During decoding we used beam search (size 5 ), and tuned for the length penalty (Wu et al., 2016) between 0 .",
"6 and 1 on the validation set; we decode until an end-of-sequence token is emitted.",
"Repeated trigrams are blocked (Paulus et al., 2018).",
"We evaluated summarization quality automatically using ROUGE (Lin, 2004).",
"We report unigram and bigram overlap (ROUGE-1 and ROUGE-2) as a means of assessing informativeness and the longest common subsequence (ROUGE-L) as a means of assessing fluency.",
"Examples of system output are shown in Table",
"5. Table 1 summarizes our results on the CNN/DailyMail and XSum (single document) datasets.",
"The first block includes the results of non-pretrained models.",
"We present the LEAD baseline (which simply selects the first three sentences in a document for CNN/DailyMail and the first sentence for XSum).",
"We also report the results of See et",
"al.'s (2017) pointer generator network (PTRNET ), and an abstractive system from Liu and Lapata (2019) based on Transformers (TransformerAbs; see Section 4.2 for details).",
"The latter forms the backbone of our self-knowledge distillation models (SKD).",
"We present a variant without noise ( + SKD), a variant with noise in the teacher training signal ( + Noisy T), and a third variant where the student is additionally trained on noisy data ( + Noisy S).",
"The second and third blocks in Table 1 include the results of pretrained models.",
"To make comparisons fairer, we separate LARGE(second block) from BASE-size (third block) pretrained models based on parameter size (shown within parenthe-ses).",
"With regard to LARGE-size models, we report the results of three very strong summarization systems finetuned with UNILMLARGE (Bao et al., 2020), BARTLARGE (Lewis et al., 2020), and T5 11B (Raffel et al., 2019).",
"Our BASE-size models include BERTSUMBASE (Liu and Lapata, 2019), a Models CNN/DailyMail XSum TRANSFORMERABS 20.8 32.7 + Noisy SKD 21.4 33.6 UNI LMv2 BASE 23.7 38.7 + Noisy SKD 24.8 39.9 Table 3: Factual correctness on CNN/DailyMail and XSum test set.",
"summarizer based on a BASE-size BERT encoder and a randomly initialized decoder, MASSBASE (Song et al., 2019) and UNILMBASE which are both finetuned with BASE-size pretrained models.",
"As can be seen in Table 1, SKD improves over teacher models in both pretrained (BASE-size) and non-pretrained settings.",
"We also observe that injection of noise brings further improvements with noise in the training signal ( + Noisy T) seeming more effective compared to noisy data augmentation ( + Noisy S).",
"Overall, we obtain competitive results with SKD and BASE-size pretrained models and even manage to outperform UNILMLARGE and T5 11B on the CNN/DailyMail dataset.",
"Table 2 presents experimental results on the WikiCatSum dataset.",
"The first block in the table includes results for non-pretrained models.",
"CV-S2S and CV-S2D (Perez-Beltrachini et al., 2019) are convolutional encoder-decoder models.",
"The former is a standard convolutional decoder, while the latter adopts a hierarchical convolutional decoder which first generates target sentence vectors, and then generates target words based on sentence vectors.",
"TF-S2S is a standard Transformer encoder-decoder model trained on WikiCatSum (Perez-Beltrachini et al., 2019).",
"TF-S2S is the model used in our SKD system and its noisy version ( + Noisy T, + Noisy S).",
"The second block includes the results of a system using the BASE-size pretrained model UNILMBASE on its own and with SKD.",
"Results are reported per domain (Company, Film, and Animal) and across domains (All).",
"Under pretrained and non-pretrained settings, we observe that SKD boosts the performance of the teacher model (UNILMBASE and TF-S2S, respectively) and that the injection of noise is bene-ficial.",
"Improvements in performance vary across domains, with Film showing the least gains.",
"Column All in Table 2 shows average ROUGE across domains.",
"Although SKD and noise injection improve results, we observe that non-pretrained models benefit more.",
"Besides ROUGE, we also use FactCC (Kryscinski et al., 2019) to evaluate the factual correctness of the generated summaries.",
"FactCC is a BERT-based classifier trained to identify conflicts between a source document and a generated summary.",
"Given a document-sentence pair as input, it assigns a positive label if factual information mentioned in a summary sentence is consistent with the document, otherwise it assigns a negative label.",
"We view the percentage of positive labels assigned by FactCC to all generated summaries as a factual correctness score for a summarization system.",
"We performed experiments with the publicly released version of FactCC.",
"2 Our results on the CNN/DailyMail and XSum datasets are presented in Table",
"3. Here, we only focus on single-document summarization, as there is no version of FactCC trained on multi-document datasets.",
"As can be seen, the application of SKD (trained with noisy signals and on noisy data) improves factual consistency for non-pretrained and pretrained models on both datasets.",
"All + Noisy SKD students are significantly ( p < 0 . 05 ) more factually correct compared to their teachers (TransformerAbs and UNI LMv2 BASE ), using a paired student t -test.",
"In addition to automatic evaluation, we also assessed system output by eliciting human judgments.",
"We compared the quality of the summaries produced by a teacher model (UNI LMv2 BASE ) 2 https://github.com/salesforce/factCC CNN/Daily Mail GOLDLZ Granderson: millennials say they'll marry if and when they want.",
"on the CNN/DailyMail, XSum, and WikiCatSum datasets.",
"against its distilled student ( + Noisy SKD).",
"For CNN/DailyMail and XSum, human participants were presented with the output of two systems (and the original document) and asked to decide which one was better according to the following criteria: Succinctness (Does the summary avoid repetition?), Informativeness (Does the summary capture the document's most important information?), and Fluency (Is the summary fluent and grammatical?).",
"Evaluation was conducted on the Amazon Mechanical Turk crowdsourcing platform.",
"We used the same test documents (20 in total) from Liu and Lapata (2019) for both CNN/DailyMail and XSum.",
"We elicited five responses per HIT.",
"Systems were rated along each dimension, and assigned a score corresponding to the proportion of times a system was selected as better against another.",
"XSum datasets participants perceive the student ( + Noisy SKD) as significantly ( p < 0 . 05 ) more succinct and informative compared to the teacher (UNI LMv2 BASE ).",
"However, on Fluency, the student tends to be worse.",
"Upon inspection we found student summaries to be rather telegraphic, and hypothesize that crowdworkers tend to penalize them in terms of fluency, even though they are grammatical.",
"Human evaluation was performed slightly different for WikiCatSum.",
"Recall that this is a multi-document dataset, where input documents are discontinuous webpage fragments.",
"To allow participants to perform the experiment in a timely fashion, we used the gold summary as a proxy for the content of the input.",
"Crowdworkers were presented with the output of two systems (again UNI LMv2 BASE and + Noisy SKD) and asked to decide which one was better according to the information contained in the gold summary.",
"Evaluation was conducted on AMT, we randomly selected 20 samples from the test set and elicited three responses per HIT.",
"For each domain, we report the proportion of times a system was chosen as better.",
"Human evaluation results are shown in Table 4 (lower part).",
"AMT Crowdworkers prefer the summaries produced by the student for the Animal and Film domains, but not for Company; we found that the distilled model tends to generate too many entities in one sentence which render the summaries too dense for this domain.",
"In this paper we advocated the use of self-knowledge distillation for abstractive summarization, as a means to alleviate problems associated with maximum-likelihood training for this task.",
"We also introduced several noise functions (in the training signal and training data) which help regularize training and further boost performance.",
"Experiments on three benchmark datasets demonstrate that our framework can improve both non-pretrained and pretrained summarizers.",
"In the future we would like to investigate more thoroughly which aspects of pretrained models improve and how self-knowledge distillation can be enhanced with more sophisticated noise functions.",
"Acknowledgments We gratefully acknowledge the support of the European Research Council (La-pata, award number 681760, Translating Multiple Modalities into Text)."
] | [
"method",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"other",
"objective",
"method",
"method",
"other",
"other",
"objective",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain"
] |
[
"Prerequisite relations among concepts are crucial for educational applications, such as curriculum planning and intelligent tutoring.",
"In this paper, we propose a novel concept prerequisite relation learning approach, named CPRL, which combines both concept representation learned from a heterogeneous graph and concept pairwise features.",
"Furthermore, we extend CPRL under weakly supervised settings to make our method more practical, including learning prerequisite relations from learning object dependencies and generating training data with data programming.",
"Our experiments on four datasets show that the proposed approach achieves the state-of-the-art results comparing with existing methods.",
"With the increasing availability of learning resources and the requirement of self-regulated learning, there is a rising need to organize knowledge in a reasonable order.",
"Concept prerequisite relations are essentially considered as the dependency among concepts, and they are crucial for people to learn, organize, apply and generate knowledge (Margolis and Laurence, 1999).",
"For example, if someone wants to learn the knowledge about Conditional Random Fields , the knowledge about Hidden Markov Model should be learned first.",
"Consequently, the concept Hidden Markov Model is a prerequisite concept of the concept Conditional Random Fields .",
"Nowadays, prerequisite relations among concepts have played a crucial role in educational applications, such as curriculum planning (Liu et al., 2016) and intelligent tutoring (Wang and Liu, 2016; Chen et al., 2018).",
"Recently, several attempts have been made to extract prerequisite relations among concepts from textbooks (Wang et al., 2016; Liang et al., 2018), MOOCs (Massive Open Online Courses) (Pan corresponding author et al., 2017), courses (Liang et al., 2015a; Liu et al., 2016; Liang et al., 2017; Li et al., 2019a; Roy et al., 2019) and scientific papers (Gordon et al., 2016).",
"They either proposed a local statistical information, such as reference distance (Liang et al., 2015a) and cross-entropy (Gordon et al., 2016) to measure the prerequisite relations between concepts, or proposed handcrafted features to learn a prerequisite relation classifier (Pan et al., 2017).",
"Liang et al. (2017) proposed CPR-Recover to recover concept prerequisite relations from course dependencies.",
"More recently, Li et al. (2019a) applied variational graph autoencoders to learn concept prerequisite relations from courses.",
"While Roy et al. (2019) developed a supervised learning approach called PREREQ.",
"However, there are still several challenges to learn the prerequisite relations among concepts.",
"Firstly, there are multiple and complex relations among concepts and learning resources, but they were not fully utilized before.",
"Secondly, labeling training data is enormously expensive and time consuming, especially when domain expertise is required for concept prerequisite relation judgement.",
"In order to address these challenges, we propose a novel concept prerequisite relation learning approach, named CPRL, which firstly learns concept representation via a relational graph convolutional network (R-GCN) (Schlichtkrull et al., 2018) on a heterogeneous graph, and predicts the concept prerequisite relations with a Siamese network.",
"Then, it is optimized with the learning object dependencies and handcrafted features.",
"Moreover, we extend CPRL under the weakly-supervised settings to make our approach more practical, including learning prerequisite relation from learning object dependencies and generating training data with data programming paradigm.",
"multiple and complex relations among concepts and learning resources to learn concept representation.",
"We propose a novel concept prerequisite relation learning approach, named CPRL, which combines evidences from concept representations via R-GCN on HCLoG, learning object dependencies, and concept pairwise features.",
"We extend CPRL under weakly supervised settings to avoid costly training data labeling.",
"We conduct extensive experiments on four real-world datasets with different domains: Textbook , MOOC , LectureBank and University Course , and our approach achieves new state-of-the-art performance.",
"The educational data can be a textbook or a course, which can be modeled as a sequential learning objects (denoted as LO for short), such as book chapters, MOOC videos and lectures.",
"There are concepts in an educational data, and we would like to extract the prerequisite relation among these concepts, as shown in Figure",
"1. A textbook A MOOC course A course videos chapters lectures graph minimal spanning tree Kruskal's algorithm Figure 1: An example of prerequisite relation learning for concepts in educational data.",
"For convenience, we will use the following notations: D = { o 1 , o 2 , ..., o M } is an educational data, where o i denotes the i -th learning object in D and is represented as a document.",
"The document can be the text from a book chapter, or the speech script from a MOOC video.",
"Therefore, the problem could be formally defined as: given an educational data D and its corresponding concepts C , the goal is to learn a function F : C C { 0 , 1 } , which can predict whether c i is a prerequisite concept of c j by mapping the concept pair (cid:104) c i , c j (cid:105) to a binary class.",
"The overview of our proposed CPRL is shown in Figure",
"2. We firstly build a heterogeneous concept-learning object graph from the educational data, and then use a relational graph convolutional network (R-GCN) (Schlichtkrull et al., 2018) to represent the concepts and learning objects.",
"Then, pairwise features for concepts are extracted according to their textual and structural information.",
"Finally, all features are combined to learn the concept prerequisite relations.",
"It should be noted that the dependencies among learning objects can be viewed as a signal of weak supervision, which are also used to train the model.",
"We build a heterogeneous concept-learning object graph from an educational data, which contains concepts and learning objects, so the concept co-occurrence and the learning object-concept relations can be explicitly modeled.",
"The heterogeneous concept-learning object graph is defined as a graph G = ( V , E ) , where V consists of two types of nodes: concept nodes V c = { c 1 , c 2 , ..., c N } and learning object nodes V o = { o 1 , o 2 , ..., o M } , and E represents the relations among them.",
"Specifically, we define the following three types of edges in G .",
"1. an edge between a concept and a learning object, and the weight is the term frequency-inverse document frequency (tfidf) of the concept in the document, where the term frequency is the number of times the concept appears in the document, while the inverse document frequency is the logarithmically scaled inverse fraction of the number of documents that contain the concept.",
"E.g., e co in Figure",
"2.",
"2. an edge between two concepts which co-occur in a fixed size sliding window in documents.",
"Point-wise mutual information (P-MI) is used to calculate the weight.",
"Formally, pmi ( i, j ) = log p ( i,j ) p ( i ) p ( j ) , p ( i, j ) = # W ( i,j ) # W and p ( i ) = # W ( i ) # W , where # W ( i, j ) is the number of sliding windows that contain both c i and c j , # W ( i ) is the number of sliding windows that only contain c i , and # W is the Algorithm Tree Abstract Data Type Linear List String and Array Sort Algorithm Time Complexity AVL Tree Heap Space Complexity Array List LO Concept R-GCN Heterogeneous Concept-LO Graph Concept Representation Learning Object Representation Concept Pairwise Features ( , ) ( , ) ( , ) ( , ) Learning Objects Graph Generation Siamese Network Prerequisite Relation Classification MLP Network Heap List Array AVL Tree Time Complexity Concepts Figure 2: The overview of our proposed CPRL framework.",
"number of sliding windows in D .",
"E.g., e cc in Figure",
"2.",
"3. an edge between two learning objects, and the weight is the normalized distance between these two learning objects in the educational data.",
"Formally, dis ( i, j ) = | j i | M .",
"E.g., e oo in Figure",
"2. Thus, the adjacency matrix A R ( M + N ) ( M + N ) of the graph G is defined as: A ij = pmi ( i, j ) i and j are concepts tfidf ( i, j ) i is a concept and j is a LO dis ( i, j ) i and j are LOs 1 i = j 0 otherwise 3.2 Concept Representation via R-GCN Since there are different types of relations among the nodes in the heterogeneous concept-learning object graph, we employ R-GCN to learn the representations of concepts and LOs.",
"We first use pretrained word embeddings GLoVE (Pennington et al., 2014) to represent each concept node in G .",
"To represent the learning object, we calculate the average word embeddings of concepts in that learning object.",
"Then, we update the node representation with R-GCN by aggregating messages from its direct neighbors as follows: h l +1 i = ( W l 0 h li + (cid:88) r R (cid:88) j N ri 1 c i,r W lr A ij h lj ) where N ri is the neighbors of node i of relation r R , W lr R d d is a relation-specific weight matrix, W l 0 R d d is a general weight matrix, h li is the hidden state of node i at l -th layer, is the ReLU function, and c i,r = (cid:80) j N ri A ij is a normalization constant.",
"We stack the networks for L layers, and the concepts and learning objects can be represented by the hidden state of nodes in the L -th layer.",
"After representing concepts via R-GCN, a Siamese network is used to predict whether the concept c is prerequisite of c j .",
"We firstly take the concept representation of c i and c j as the input of a Siamese network, as shown in Figure 3, to calculate the likelihood of c i being a prerequisite concept of c j .",
"Formally, (cid:126)c i = ReLU ( W s h Lc i + b s ) , where h Lc i is the output of the R-GCN for concept c i in L -th layer.",
"Then, the likelihood p GCN ( c i , c j ) is calculated as ( WT [ (cid:126)c i ; (cid:126)c j ; (cid:126)c i (cid:126)c j ; (cid:126)c i (cid:126)c j ] + b ) , where is the sigmoid function, and are the element-wise multiplication and subtraction operators, and [ ; ] means the concatenation of vectors.",
"Finally, we use the cross-entropy as the loss function: L c = 1 | T | (cid:80) ( c i ,c j ,y ij ) T [ y ij log( p GCN ( c i , c j )) + (1 y ij ) log(1 p GCN ( c i , c j ))] , where T is the training dataset, and y ij { 0 , 1 } is the ground truth of ( c i , c j ) .",
"Intuitively, the dependencies among learning objects can reflect the prerequisite relations among concepts, but how can we utilize the learning object dependencies to enhance our model?",
"represented in the same space, so they can be fed to the same Siamese network.",
"Formally, we feed the representations of learning object o i and o j to the same Siamese network mentioned in previous section, and obtain the likelihood of the learning object dependency as p GCN ( o i , o j ) .",
"Similarly, we define the loss function as: L o = 1 | T | (cid:80) ( o i ,o j ,y ij ) T [ y ij log( p GCN ( o i , o j ))+(1 y ij ) log(1 p GCN ( o i , o j ))] , where T is the training dataset, and y ij { 0 , 1 } is the ground truth of ( o i , o j ) .",
"Predicting the dependencies among learning objects can be considered as an auxiliary task for concept prerequisite relation learning, so the loss function could be: L = L c + L o .",
"In order to fully utilize the information of LOs, we also extract concept pairwise features from their textual and structural information.",
"Liang et al. (2015a) pointed out that when learning concept A , if one needs to refer to concept B a lot but not vice versa, then B is more likely to be a prerequisite of A than A of B .",
"Inspired by this idea, we propose a new generic metric, namely learning object reference distance (LOrd), in a learning object sequence D = { o 1 , o 2 , ..., o M } to measure prerequisite relations among concepts.",
"For a concept pair ( c i , c j ) , we propose the reference weight ( rw ) to qualify how c j is referred by LOs which mention concept c i , defined as: rw ( c i , c j ) = (cid:80) o D f ( c i , o ) r ( o, c j ) (cid:80) o D f ( c i , o ) where f ( c i , o ) indicates the frequency of concept c i appears in the learning object o , and r ( o, c j ) { 0 , 1 } denotes whether concept c j appears in o .",
"Then, the LOrd is defined as: LOrd ( c i , c j ) = rw ( c j , c i ) rw ( c i , c j ) .",
"Obviously, LOrd can be easily calculated for textbooks, MOOC courses and university courses.",
"In addition, for MOOCs, we use features as in (Pan et al., 2017).",
"While for textbooks, we extract several pairwise features as in (Pan et al., 2017), including Semantic Relatedness , Wikipedia reference distance and complexity level distance .",
"The details can be referred in the Appendix.",
"Moreover, we also extract head matching feature and ToC distance (Wang et al., 2016) for concept pairs for textbooks.",
"Head matching feature represents whether two concepts have a common head or not, which is obtained by suffix matching.",
"Usually, it implies the existence of prerequisite relation, e.g., tree and binary tree .",
"ToC distance measures the distance of concepts in the table of contents in D .",
"All the pairwise features are concatenated and fed into a forward neural network, which will generate the prediction result p F ( c i , c j ) for the concept pair ( c i , c j ) .",
"The loss function for the pairwise features is: L f = 1 | T | (cid:80) ( c i ,c j ,y ij ) T [ y ij log( p F ( c i , c j )) + (1 y ij ) log(1 p F ( c i , c j ))] .",
"Therefore, the overall loss function is: L = L c + L o + L f , where and are two hyper-parameters.",
"In practice, it is expensive to collect massive hand-labeled data for model training.",
"One intuitive way to alleviate the labeling cost is that we can train the model in one domain (e.g. Calculus ), and then use it to predict the concept prerequisite relations in other domains (e.g. Data Structure and Physics ).",
"However, the idea fails and we will explain it in our experiments.",
"Therefore, we extend our model under the weak supervision settings in two ways.",
"We call the first way as learning prerequisite relations from LO dependencies .",
"Since concepts and LOs are embedded into the same space through R-GCN in the heterogeneous graph, our model can implicitly infer the prerequisite relationships between concepts by explicitly learning the dependencies between LOs.",
"This procedure is called CPRL lo .",
"Another way is use the data programming (Rat-ner et al., 2016) paradigm to create probabilistic training data.",
"Data programming expresses weak supervision strategies or domain heuristics as labeling functions (LFs), and then estimates the label accuracies by fitting a generative model.",
"The process is shown as Figure",
"4. Probabilistic Training Data Generative Model Learning Objects Books/MOOCs/Courses Label Functions Label Matrix 1 0 1 -1 1 0 0 1 0 1 -1 0 0 -1 1 1 Figure 4: The pipeline of probabilistic label generation.",
"features extracted before as heuristic labeling functions (LF for short): : ( c i , c j ) { 1 , 0 , 1 } , where 1 means the labeling function abstains from providing a label.",
"We define label functions corresponding to the features among concepts, and some examples are shown in Figure",
"5. def _ ( , ) : if , < return 1 elseif , > return 0 else return -1 def _ ( , ) : if return 1 elsereturn -1 Figure 5: Two LF examples, where maxLOrd and minLOrd are learned thresholds.",
"Other LFs and the settings of thresholds are listed in the Appendix.",
"We apply m such LFs to the unlabeled concept pairs { ( c t i , c t j ) nt =1 } to generate a label matrix { 1 , 0 , 1 } n m .",
"Then, we use the weak supervision framework Snorkel (Ratner et al., 2019a) to train a probabilistic model.",
"The probabilistic model takes the label matrix as input, and generates the probabilistic training labels Y = p ( Y | ) for each concept pair.",
"The generated labels could be used to train our model.",
"With the probabilistic training data, L c and L f are changed to the noise-aware variants: L c = (cid:80) ( c i ,c j ) TE y ij Y [ [ y ij log( p GCN ( c i , c j )) + (1 y ij ) log(1 p GCN ( c i , c j ))]] and L f = (cid:80) ( c i ,c j ) TE y ij Y [ [ y ij log( p F ( c i , c j )) + (1 y ij ) log(1 p F ( c i , c j ))]] .",
"This procedure is called CPRL dp .",
"In order to validate the efficiency of our model, we conducted experiments on four datasets with different domains.",
"Textbook : we selected six Chinese textbooks in each of the three domains: Calculus , Data Structure , and Physics , and then extracted 89, 84 and 139 concepts, and labeled 449, 439 and 623 prerequisite relations for each domain respectively.",
"The datasets will be publicly available later.",
"MOOC : we used MOOC data 1 mentioned in (Pan et al., 2017), which involves two domains: Data Structure and Algorithms (DSA) and Machine Learning (ML) .",
"1 http://keg.cs.tsinghua.edu.cn/jietang/software/acl17-prerequisite-relation.rar LectureBank : This dataset 2 (Li et al., 2019a) contains 1,352 English lecture files collected from university courses, and the annotations of prerequisite relations on 208 concepts.",
"University Course : This dataset 3 (Liang et al., 2017) has 654 courses with 861 course prerequisite edges from various universities in USA, and 1008 pairs of concepts with prerequisite relations are manually annotated.",
"The set of concepts and prerequisite relations among them was annotated by experts, and released with the datasets.",
"The statistics of the datasets are listed in the appendix.",
"Binary classifiers : We compared our model with the binary classifiers as in (Pan et al., 2017), including Nave Bayes classifier (NB), Support vector machine (SVM), Logistic Regression (LR) and Random Forest classifier (RF).",
"RefD : RefD (Liang et al., 2015b) is a simple link-based metric for measuring the prerequisite relations among concepts.",
"GAE : GAE denotes graph autoencoder, which encodes a graph with GCN, and predicts links through the adjacency matrix reconstruction.",
"Li et al. (2019a) used GAE for concept prerequisite relation learning.",
"VGAE : VGAE is an extension to GAE, which was also used in (Li et al., 2019a) for concept prerequisite relation learning.",
"PREREQ : PREREQ (Roy et al., 2019) obtains latent representations of concepts through the pairwise-link LDA model, and identifies concept prerequisite relations through a Siamese network.",
"We also compared our weakly-supervised variants with CPR-Recover (Liang et al., 2017), which is an unsupervised approach, and can recover concept prerequisite relations from course dependencies.",
"Consistent with many methods, we mainly used F-score(F 1 ) to evaluate the performance of CPRL with all the baselines.",
"We also compared preci-sion(P) and recall(R) against other methods.",
"In all datasets, only concept prerequisite pairs are manually annotated, and we split the positive samples into train and test sets.",
"In order to fairly compare with the previous researches, 90% samples of LectureBank were used for training while the rest 10% for testing.",
"For other datasets, the proportions changed to 70% and 30%.",
"Then, we generated negative samples by sampling random unrelated pairs of concepts from the vocabulary in addition to the reverse pair of original positive samples.",
"In order to address the imbalance problem, we oversampled 3.5 and 1.5 times the number of the positive examples in the training and testing sets for Textbook dataset and other datasets respectively.",
"The results are averaged over 5 train-test splits.",
"The parameters were initialized randomly from a Gaussian distribution with zero mean and standard deviation = 0 .",
"3 .",
"The initial learning rate is 0.5 for Textbook and 0.1 for other datasets.",
"Besides, the learning rate annealed every 50 epochs by 0 .",
"99 .",
"We trained CPRL using the Stochastic Gradient Descent method and stopped training if the train loss did not decrease for 30 consecutive epochs.",
"For baseline models, we used default parameter settings as in their original implementations, and also used 300-dimensional GloVE as the pre-trained word embeddings.",
"For R-GCN, we set the number of R-GCN layers L = 2 and set the embedding size of the first convolution layer as 256 and the second convolution layer as the number of concepts in each dataset.",
"We experimented with other settings and found that small changes did not influence the result much.",
"In addition, we set = 0 .",
"2 and = 0 .",
"1 , since they made the best performance.",
"The influence of parameters L , and can be referred to the Appendix.",
"Table 1 shows the precision, recall and F-score four datasets with different domains.",
"From the table, we find that (1) CPRL achieves the best performance with F-score against all baselines on all datasets, except for DSA domain of the MOOC dataset.",
"(2) CPRL performs best in LectureBank and University even without pairwise features and dependencies among learning objects.",
"It tells that HCLoG can effectively model the multiple and complex relations among concepts and learning resources to learn better concept representation.",
"(3) RefD can indeed measure the prerequisite relations among concepts, and obtains a higher precision, but a lower recall.",
"(4) GAE and VGAE utilize GCN for adjacency matrix reconstruction, but they perform worse than CPRL.",
"The reason is that CPRL utilizes the heterogeneous concept learning object graph to learn the concept representation, which can fully utilize the complex relationships among concepts and learning objects, while GAE and VGAE only use the graph among concepts.",
"In order to prove the effects of pairwise features and LO dependencies, we conducted ablation experiments on Textbook and MOOC datasets.",
"The results are shown in table 2 Dataset Metric CPRL CPRL f CPRL c Textbook DS P 0.795 0.793 0.811 R 0.809 0.802 0.749 F 1 0.802 0.797 0.779 PHY P 0.778 0.779 0.778 R 0.798 0.799 0.716 F 1 0.788 0.789 0.746 CAL P 0.770 0.772 0.769 R 0.825 0.809 0.755 F 1 0.797 0.790 0.762 MOOC DSA P 0.640 0.659 0.562 R 0.619 0.615 0.565 F 1 0.630 0.636 0.563 ML P 0.800 0.788 0.767 R 0.642 0.628 0.598 F 1 0.712 0.699 0.672 Table 2: Ablation Study on CPRL.",
"As shown in Table 2, CPRL performs better than CPRL f and CPRL c on most of the datasets, so pairwise features and learning object dependencies can both contribute to the performance.",
"Besides, even CPRL c obtains a better performance than the baselines in Table 1, which proves the effectiveness of the heterogeneous graph.",
"In order to evaluate our weakly supervised prerequisite relation learning approaches, we compared our two variants CPRL lo and CPRL dp with CPR-Recover (Liang et al., 2017) in Textbook dataset, and the results are shown in Table",
"3. From the table, we find that CPRL lo and CPRL dp outperform CPR-Recover in all metrics, and CPRL dp achieves the best performance.",
"It proves that the knowledge of learning object dependencies can be transferred to learn the concept prerequisite relations through the concept learning object graph.",
"In addition, the data programming with our designed label functions can generate help-Dataset Metric SVM LR RF NB RefD GAE VGAE PREREQ CPRL Textbook DS P 0.818 0.852 0.755 0.481 0.920 0.446 0.434 0.226 0.795 R 0.632 0.590 0.685 0.897 0.244 0.900 0.570 0.369 0.809 F 1 0.713 0.697 0.718 0.626 0.385 0.597 0.493 0.280 0.802 PHY P 0.806 0.863 0.748 0.399 0.900 0.505 0.460 0.432 0.770 R 0.655 0.588 0.752 0.922 0.409 0.943 0.649 0.423 0.825 F 1 0.723 0.699 0.750 0.557 0.562 0.657 0.538 0.427 0.797 CAL P 0.839 0.860 0.746 0.404 0.950 0.436 0.414 0.391 0.778 R 0.637 0.570 0.715 0.995 0.302 0.900 0.558 0.506 0.798 F 1 0.724 0.686 0.730 0.574 0.458 0.587 0.475 0.441 0.788 MOOC DSA P 0.705 0.808 0.344 0.613 0.920 0.294 0.269 0.492 0.641 R 0.624 0.168 0.715 0.696 0.252 0.715 0.657 0.462 0.619 F 1 0.662 0.278 0.464 0.652 0.396 0.417 0.382 0.476 0.630 ML P 0.668 0.748 0.375 0.577 0.784 0.293 0.266 0.448 0.800 R 0.577 0.27 0.669 0.623 0.188 0.733 0.647 0.592 0.642 F 1 0.619 0.397 0.481 0.599 0.303 0.419 0.377 0.510 0.712 LectureBank P 0.857 0.744 0.855 0.670 0.666 0.462 0.417 0.590 0.861 R 0.692 0.744 0.681 0.640 0.228 0.811 0.575 0.502 0.858 F 1 0.766 0.744 0.758 0.655 0.339 0.589 0.484 0.543 0.860 University Course P 0.796 0.595 0.739 0.478 0.919 0.450 0.470 0.468 0.689 R 0.635 0.546 0.480 0.649 0.415 0.886 0.694 0.916 0.760 F 1 0.707 0.569 0.582 0.550 0.572 0.597 0.560 0.597 0.723 Table 1: The performance of CPRL on four datasets with different domains.",
"ful training data, and achieve comparable performance with the supervised CPRL.",
"In order to explore the transfer ability of our model between different domains, we conducted an experiment on Textbook dataset.",
"Specifically, for CPRL, we firstly trained the model in one domain, and then used the model to predict prerequisite relations between concepts in another domain.",
"While for CPRL dp , we obtained the best thresholds such as maxLOrd and minLOrd in LFs in one domain and then used them to other domains.",
"The results are shown in Table 4 CPRL CPRL dp DS PHY CAL DS PHY CALDS 0.802 0.393 0.219 0.706 0.621 0.587 PHY 0.640 0.797 0.430 0.692 0.634 0.616 CAL 0.520 0.438 0.788 0.658 0.633 0.629 Table 4: Domain transfer ability verification experiments for CPRL and CPRL dp , where each row and column represent the source and target domain respectively, and the values in the cells are F 1 -scores.",
"CPRL, so we cannot simply transfer the model across domains due to the difference among concepts and LOs.",
"(2) CPRL dp is more stable and can be used in practice since we only need to label a small amount of training data in one domain.",
"Our approach can learn the concept prerequisite relations from one learning object sequence, such as a textbook.",
"While the concepts in textbooks in the same domain are basically the same, so the prerequisite relations among them can be aggregated.",
"Here, we used a simple majority voting strategy for aggregation, and the results are shown in Figure 6.",
"From the table, we see a significant improvement for the ensemble results.",
"Learning prerequisite relations between concepts has attracted much recent work, and can be classi-fied into three categories: local statistical information based approaches, recovery based approaches and learning based approaches.",
"As local statistical information, reference distance (Liang et al., 2015a) and cross-entropy (Gor-don et al., 2016) were proposed to measure the concept prerequisite relations.",
"CPR-Recover (Liang et al., 2017) is a recovery based approach, which recovers prerequisite relations from course dependencies.",
"The learning based approaches are the most popular.",
"For example, Pan et al. (2017) proposed contextual, structural and semantic features for concept prerequisite relation classification.",
"Roy et al. (2019) applied the pairwise-link LDA model to represent concept, and trained a Siamese network to identify prerequisite relations.",
"Li et al. (2019a) trained variational graph autoencoders to predict concept prerequisite relations.",
"However, these approaches didn't model the mutiple and complex relations among concepts and learning resources.",
"Meanwhile, they also need a large set of training data, which is costly to obtain.In order to reduce the amount of training data required, active learning was investigated in (Liang et al., 2018) and (Liang et al., 2019) for concept prerequisite learning.",
"One of the most significant bottlenecks for machine learning is the need for a big training data set.",
"Nowadays, it is very promising to use weakly supervised learning techniques to reduce the amount of human intervention needed.",
"For example, distant supervision can produce noisy training data by aligning unlabeled data with an external knowledge base, e.g. relation extraction in (Smirnova and Cudr-Mauroux, 2018).",
"Crowdsourcing (Yuen et al., 2011) and heuristic rules (Sa et al., 2016) can also generate noisy training data.",
"However, these weakly supervised data is incomplete, inexact and inaccurate, so it is important to integrate multiple noisy labeling data to produce more accuracy data.",
"Data programming (Rat-ner et al., 2016) provides a simple and unifying framework for the creation of training sets, which expresses weak supervision strategies as labeling functions, and then uses a generative model to de-noise the labeling data.",
"Snorkel 4 (Ratner et al., 2019a) is a system built around the data programming paradigm for rapidly creating, modeling, and managing training data.",
"Several works have been explored to use data programming for training data creation.",
"For example, SwellShark (Fries et al., 2017) was proposed for quickly building biomed-4 https://www.snorkel.org/ ical named entity recognition taggers using lexicons, heuristics, and other forms of weak supervision instead of hand-labeled data.",
"GWASkb with thousands of genotype-phenotype associations was created by using Snorkel in (Kuleshov et al., 2019).",
"Snorkel was also used for chemical reaction relationship extraction (Mallory et al., 2020), discourse structure learning (Badene et al., 2019) and medical entity classification (Fries et al., 2020).",
"In addition, data programming was further improved under different situations.",
"For example, MeTaL (Ratner et al., 2019b) was proposed for modeling and integrating weak supervision sources with different unknown accuracies, correlations, and granularities.",
"Cross-modal data programming was proposed in (Dunnmon et al., 2020).",
"Fly-ingSquid (Fu et al., 2020) speeded up weak supervision with triplet methods.",
"In this paper, we propose a novel concept prerequisite relation learning approach, named CPRL, which combines both concept representation learned from a heterogeneous graph and concept pairwise features.",
"Furthermore, we extend CPRL under weakly supervised settings to make our method more practical.",
"The experiments on four datasets show that our method achieves state-of-the-art performance.",
"In addition, we also prove the effectiveness of our weakly supervised prerequisite relation learning variants.",
"In future, we plan to design more effective label functions or employ more reliable weakly supervised learning approaches (Li et al., 2019b; Guo et al., 2019) to further improve the performance.",
"Moreover, we will also introduce concept prerequisite relations into curriculum planning and intelligent tutoring applications, e.g. organizing learning resources into a reasonable order and incorporating prerequisite relations into knowledge tracing technologies.",
"This work is supported by the National Key Research and Development Project of China (No. 2018AAA0101900), the Zhejiang Provincial Natural Science Foundation of China (No. LY17F020015), the Chinese Knowledge Center of Engineering Science and Technology (CKCEST) and MOE Engineering Research Center of Digital Library."
] | [
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"result",
"abstain",
"result",
"abstain",
"other"
] |
[
"Contextualized representations (e.g. ELMo, BERT) have become the default pretrained representations for downstream NLP applications.",
"In some settings, this transition has rendered their static embedding predecessors (e.g. Word2Vec, GloVe) obsolete.",
"As a side-effect, we observe that older interpretability methods for static embeddings while more mature than those available for their dynamic counterparts are underutilized in studying newer contextualized representations.",
"Consequently, we introduce simple and fully general methods for converting from contextualized representations to static lookup-table embeddings which we apply to 5 popular pretrained models and 9 sets of pretrained weights.",
"Our analysis of the resulting static embeddings notably reveals that pooling over many contexts significantly improves representational quality under intrinsic evaluation.",
"Complementary to analyzing representational quality, we consider social biases encoded in pretrained representations with respect to gender, race/ethnicity, and religion and find that bias is encoded disparately across pretrained models and internal layers even for models that share the same training data.",
"Concerningly, we find dramatic inconsistencies between social bias estimators for word embeddings.",
"Word embeddings (Bengio et al., 2003; Collobert and Weston, 2008; Collobert et al., 2011) have been a hallmark of modern natural language processing (NLP) for many years.",
"Embedding methods have been broadly applied and have experienced parallel and complementary innovations alongside neural network methods for NLP.",
"Advances in embedding quality in part have come from integrating additional information such as syntax (Levy and Goldberg, 2014a; Li et al., 2017), morphology (Cot-terell and Schutze, 2015), subwords (Bojanowski et al., 2017), subcharacters (Stratos, 2017; Yu et al., 2017) and, most recently, context (Peters et al., 2018; Devlin et al., 2019).",
"Due to their tremendous representational power, pretrained contextualized representations, in particular, have seen widespread adoption across myriad subareas of NLP.",
"The recent dominance of pretrained contextualized representations such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) has served as the impetus for exciting and diverse interpretability research: Liu et al. (2019a); Tenney et al. (2019a) study what is learned across the layers of these models, Tenney et al. (2019b); Ethayarajh (2019) consider what is learned from context, Clark et al. (2019); Michel et al. (2019) look at specific attention heads, Hewitt and Manning (2019); Ettinger (2020) address linguistic understanding such as syntax and negation, and Wallace et al. (2019); Tan and Celis (2019) address ethical concerns such as security (adversarial robustness) and social bias.",
"In fact, the neologism BERTology was coined specifically to describe this flurry of interpretability research.",
"1 While these works have provided nuanced fine-grained analyses by creating new interpretability schema/techniques, we instead take an alternate approach of trying to re-purpose methods developed for analyzing static word embeddings.",
"In order to employ static embedding interpretability methods to contextualized representations, we begin by proposing a simple strategy for converting from contextualized representations to static embeddings.",
"Crucially, our method is fully general and assumes only that the contextualized model maps word sequences to vector sequences.",
"Given this generality, we apply our method to 9 popular pretrained contextualized representations.",
"The resulting static embeddings serve as proxies for the original contextualized model.",
"We initially examine the representational quality of these embeddings under intrinsic evaluation.",
"Our evaluation produces several insights regarding layer-wise lexical semantic understanding and representational variation in contextualized representations.",
"Importantly, our analyses suggest constructive improvements to potentially improve downstream practices in using contextualized models.",
"Simultaneously, we find that our static embeddings substantially outperform Word2Vec and GloVe and therefore suggests our method serves the dual purpose of being a lightweight mechanism for generating static embeddings that track with advances in contextualized representations.",
"Since static embeddings have significant advantages with respect to speed, computational resources, and ease of use, these results have important implications for resource-constrained settings (Shen et al., 2019), environmental concerns (Strubell et al., 2019), and the broader accessibility of NLP technologies.",
"2 Alongside more developed methods for embedding analysis, the static embedding setting is also equipped with a richer body of work regarding social bias.",
"In this sense, we view understanding the encoded social bias in representations as a societally critical special-case of interpretability research.",
"We employ methods for identifying and quantifying gender, racial/ethnic, and religious bias (Bolukbasi et al., 2016; Garg et al., 2018; Manzini et al., 2019) to our static embeddings.",
"These experiments not only shed light on the properties of our static embeddings for downstream use but can also serve as a proxy for understanding latent biases in the original pretrained contextual representations.",
"We find that biases in different models and across different layers are quite disparate; this has important consequences on model and layer selection for downstream use.",
"Further, for two sets of pretrained weights learned on the same training data, we find that bias patterns still remain fairly distinct.",
"Most surprisingly, our large-scale evaluation makes clear that existing bias estimators are dramatically inconsistent with each other.",
"In order to use a contextualized model like BERT to compute a single context-agnostic representation for a given word w , we define two operations.",
"2 A humanist's outlook on the (in)accessibility of BERT: https://tedunderwood.com/2019/07/15/do-humanists-need-bert/ The first is subword pooling : the application of a pooling mechanism over the k subword representations generated for w in context c in order to compute a single representation for w in c , i.e. { w 1 c , . . . , w kc } (cid:55) w c .",
"Beyond this, we define context combination to be the mapping from representations w c 1 , . . . , w c n of w in different contexts c 1 , . . . , c n to a single static embedding w that is agnostic of context.",
"Subword Pooling.",
"The tokenization procedure for BERT can be decomposed into two steps: performing a simple word-level tokenization and then potentially deconstructing a word into multiple subwords, yielding w 1 , . . . , w k such that cat ( w 1 , . . . , w k ) = w where cat ( ) indicates concatenation.",
"Then, every layer of the model computes vectors w 1 c , . . . , w kc .",
"Given these vectors, we consider four pooling mechanisms to compute w c : w c = f ( w 1 c , . . . , w kc ) f { min , max , mean , last } min( ) , max( ) are element-wise min/max pooling, mean( ) is the arithmetic mean and last( ) indicates selecting the last vector, w kc .",
"Context Combination.",
"Next, we describe two approaches for specifying contexts c 1 , . . . , c n and combining the associated representations w c 1 , . . . , w c n .",
"Decontextualized: For a word w , we use a single context c 1 = w .",
"That is, we feed the single word w into the pretrained model and use the outputted vector as the representation of w (applying subword pooling if the word is split into multiple subwords).",
"Aggregated: Since the Decontextualized strategy presents an unnatural input to the pretrained encoder, which likely never encountered w in isolation, we instead aggregate representations of w across multiple contexts.",
"In particular, we sample n sentences from a text corpus D (see A.2) each of which contains the word w , and compute the vectors w c 1 , . . . , w c n .",
"Then, we apply a pooling strategy to yield a single representation that aggregates representations across contexts: w = g ( w c 1 , . . . , w c n ); g { min , max , mean } 3 Setup We begin by verifying that the resulting static embeddings that we derive retain their representational strength, to some extent.",
"We take this step to ensure that properties we observe of the static embeddings can be attributed to, and are consistent with, the original contextualized representations.",
"Inspired by concerns with probing methods/diagnostic classifiers (Liu et al., 2019a; Hewitt and Liang, 2019) regarding whether learning can be attributed to the classifier and not the underlying representation, we employ an exceptionally simple parameter-free method for converting from contextualized to static representations to ensure that any properties observed in the latter are not introduced via this process.",
"When evaluating static embedding performance, we consider Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) embeddings as baselines since they have been the most prominent pretrained static embeddings for several years.",
"Similarly, we begin with BERT as the contextualized model as it is currently the most prominent in downstream use among the growing number of alternatives.",
"We provide identical analyses for 4 other contextualized model architectures (GPT-2 (Radford et al., 2019), XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019b), DistilBERT (Sanh et al., 2019)) and, in total, 9 sets of pretrained weights.",
"All models, weights, and naming conventions used are enumerated in Appendix C and Table 9.",
"Additional representation quality results appear in Tables 47 and Figures 410.",
"We primarily report results for bert-base-uncased ; further results for bert-large-uncased appear in Figure 3.",
"To assess the representational quality of our static embeddings, we evaluate on several word similarity and word relatedness datasets.",
"3 We consider 4 such datasets: RG65 (Rubenstein and Goodenough, 1965), WS353 (Agirre et al., 2009), SIMLEX 999 (Hill et al., 2015) and SIMVERB 3500 (Gerz et al., 2016) (see A.4 for more details).",
"Taken together, these datasets contain 4917 examples and specify a vocabulary V of 2005 unique words.",
"Each example is a pair of words ( w 1 , w 2 ) with a gold-standard annotation (provided by one or more humans) of the semantic similarity or relatedness between w 1 and w 2 .",
"A word embedding is evaluated by the relative correctness of its ranking 3 Concerns with this decision are discussed in A.3.",
"of the similarity/relatedness of all examples in a dataset with respect to the gold-standard ranking using the Spearman coefficient.",
"Embedding predictions are computed using cosine similarity.",
"Pooling Strategy.",
"In Figure 1, we show the performance on all 4 datasets for the resulting static embeddings.",
"For embeddings computed using the Aggregated strategy, representations are aggregated over N = 100 K sentences where N is the number of total contexts for all words ( A.5).",
"Across all four datasets, we see that g = mean is the best-performing pooling mechanism within the Aggregated strategy and also outperforms the Decontexualized strategy by a substantial margin.",
"Fixing g = mean , we further observe that mean pooling at the subword level also performs best (the dark green dashed line in all plots).",
"We further find that this trend consistently holds across pretrained models.",
"Number of Contexts.",
"In Table 1, we see that performance for both BERT-12 and BERT-24 steadily increases across all datasets with increas-Figure 1: Layer-wise performance of distilled BERT-12 embeddings for all pairs ( f, g ) with N = 100 K. ing N ; this trend holds for the other 7 pretrained models.",
"In particular, in the largest setting with N = 1 M, the BERT-24 embeddings distilled from the best-performing layer for each dataset drastically outperform both Word2Vec and GloVe.",
"However, this can be seen as an unfair comparison given that we are selecting specific layers for specific datasets.",
"As the middle band of Table 1 shows, we can fix a particular layer for all datasets and still outperform both Word2Vec and GloVe on all datasets.",
"Relationship between N and model layer.",
"In Figure 1, there is a clear preference towards the first quarter of the model's layers (layers 0-3) with a sharp drop-off in performance immediately thereafter.",
"A similar preference for the first quarter of the model is observed in models with a different number of layers (Figure 3, Figure 10).",
"Given that our intrinsic evaluation is centered on lexical semantic understanding, this appears to be largely consistent with the findings of Liu et al. (2019a); Tenney et al. (2019a) regarding where lexical semantic information is best encoded in pretrained contextualized models.",
"However, as we pool over a larger number of contexts, Table 1 reveals an interesting relationship between N and the best-performing layer.",
"The best-performing layer monotonically (with a single exception) shifts to be later and later within the pretrained model.",
"Since the later layers did not perform better for smaller values of N , these layers demonstrate greater variance with respect to the layer-wise distributional mean and reducing this variance improves performance.",
"4 Since later layers of the 4 Shi et al. (2019) concurrently propose a different ap-model are generally preferred by downstream practitioners (Zhang et al., 2020), our findings suggest that downstream performance could be further improved by considering variance reduction as we suggest; Ethayarajh (2019) also provides concrete evidence of the tremendous variance in the later layers of deep pretrained contextualized models.",
"Cross-Model Results.",
"Remarkably, we find that most tendencies we observe generalize well to all other pretrained models we study (specifically the optimality of f = mean , g = mean , the improved performance for larger N , and the layer-wise tendencies with respect to N ).",
"This is particularly noteworthy given that several works have found that different contextualized models pattern substantially differently (Liu et al., 2019a; Ethayarajh, 2019).",
"In Table 2, we summarize the performance of all models we studied.",
"All of the models considered were introduced during a similar time period and have comparable properties in terms of downstream performance.",
"In spite of this, we observe that their static analogues perform radically differently.",
"For example, several do not reliably outperform Word2Vec and GloVe despite outperforming Word2vec and GloVe reliably in downstream evaluation.",
"Future work may consider whether the reduction to static embeddings affects different models differently and whether this is reflective of the quality of context-agnostic lexical semantics from other types of linguistic knowledge (e.g. context modelling, syntactic understanding, and semantic composition).",
"In general, these results proach with similar motivations.",
"provide further evidence to suggest that linguistic understanding captured by different pretrained weights may be substantially different, even for models with near-identical Transformer (Vaswani et al., 2017) architectures.",
"Somewhat surprisingly, in Table 2, DistilBert-6 outperforms BERT-12 on three out of the four datasets despite being distilled (Ba and Caruana, 2014; Hinton et al., 2015) from BERT-12.",
"Analogously, RoBERTa, which was introduced as a direct improvement over BERT, does not reliably outperform the corresponding BERT models.",
"Bias is a complex and highly relevant topic in developing representations and models in NLP and ML.",
"In this context, we study the social bias encoded within our static word representations as a proxy for understanding biases of the source contextualized representations.",
"As Kate Crawford argued for in her NIPS 2017 keynote, while studying individual models is important given that specific models may propagate, accentuate, or diminish biases in different ways, studying the representations that serve as the starting point and that are shared across models (which are used for possibly different tasks) allows for more generalizable understanding of bias (Baro-cas et al., 2017).",
"In this work, we simultaneously consider multiple axes of social bias (i.e. gender, race, and religion) and multiple proposed methods for computationally quantifying these biases.",
"We do so precisely because we find that existing NLP literature has primarily prioritized gender (which may be a technically easier setting and is starkly incomplete in terms of social biases of interest).",
"Further, as we will show, different computational specifi-cations of bias that evaluate the same underlying social phenomena yield markedly different results.",
"As a direct consequence, we strongly caution that the results must be taken with respect to the definitions of bias being applied.",
"Further, we note that an embedding which receives low bias scores cannot be assumed to be (nearly) unbiased.",
"Instead, it satisfies the significantly weaker condition that under existing definitions the embedding exhibits low bias and perhaps additional (more nuanced) definitions are needed.",
"Bolukbasi et al. (2016) introduced a measure of",
"gender bias which assumes access to a set P = { ( m 1 , f 1 ) , . . . , ( m n , f n ) } of (male, female) word pairs where m i and f i only differ in gender (e.g. men' and women').",
"They compute a gender direction g : g = PCA (cid:0) [ m 1 f 1 , . . . , m n f n ] (cid:1) [0] where [0] indicates the first principal component.",
"Then, given a set N of target words that we are interested in evaluating the bias with respect to, Bolukbasi et al. (2016) specifies the bias as: bias BOLUKBASI ( N ) = mean w N | cos ( w , g ) | This definition is only inherently applicable to binary bias settings, i.e. where there are exactly two protected classes .",
"Multi-class generalizations are difficult to realize since constructing P requires aligned k -tuples whose entries only differ in the underlying social attribute and this becomes increasingly challenging for increasing k .",
"Further, this definition assumes the first principal component explains a large fraction of the observed variance.",
"Garg et al. (2018) introduced a different definition that is not restricted to gender and assumes access to sets A 1 = { m 1 , , m n } and A 2 = { f 1 , , f n (cid:48) } of representative words for each of the two protected classes.",
"For each class, i = mean w A i w is computed.",
"Garg et al. (2018) computes the bias in two ways: bias GARG-EUC ( N ) = mean w N (cid:107) w 1 (cid:107) 2 (cid:107) w 2 (cid:107) 2 bias GARG-COS ( N ) = mean w N cos( w , 1 ) cos( w , 2 ) Compared to the definition of Bolukbasi et al. (2016), these definitions may be more general as constructing P is strictly more difficult than constructing A 1 , A 2 (as P can always be split into two such sets but the reverse is not generally true) and Garg et al. (2018)'s definition does not rely on the first principal component explaining a large fraction of the variance.",
"However, unlike the first definition, Garg et al. (2018) computes the bias in favor of/against a specific class (meaning if N = { programmer', homemaker' } and pro-grammer' was equally male-biased as homemaker' was female-biased, then under the definition of Garg et al. (2018), there would be no bias in ag-gregate).",
"To permit comparison, we insert absolute values around each term in the mean over N .",
"to sets of representative words A 1 , . . . , A k 5",
"bias MANZINI ( N ) = mean w N mean i { 1 ,...,k } mean a A i cos( w , a )",
"Inspired by the results of Nissim et al. (2020), in this work we transparently report social bias in existing static embeddings as well as the embeddings we produce.",
"In particular, we exhaustively report the measured bias for all 3542 valid (pretrained model, layer, social attribute, bias definition, target word list) 5-tuples all possible combinations of static embeddings and bias measures considered.",
"The results for models beyond BERT appear in Figures 1118.",
"We specifically report results for binary gender (male, female), two-class religion (Christianity, Islam) and three-class race (white, Hispanic, and Asian), directly following Garg et al. (2018).",
"We study bias with respect to target word lists of professions N prof and adjectives N adj .",
"These results are by no means intended to be comprehensive with regards to the breadth of bias socially and only address a restricted subset of social biases which notably does not include intersectional biases.",
"The types of biases being evaluated for are taken with respect to specific word lists (which are sometimes subjective albeit being peer-reviewed) that serve as exemplars and definitions of bias are grounded in the norms of the United States.",
"All word lists are provided in Appendix B and are sourced in A.6.",
"Layer-wise Bias Trends.",
"In Figure 2, we report layer-wise bias across all ( attribute, definition ) pairs.",
"We clearly observe that for every social attribute, there is a great deal of variation across the layers in the quantified amount of bias for a fixed bias estimator.",
"Further, while we are not surprised that different bias measures for the same social attribute and the same layer assign different absolute scores, we observe that they also do not agree in relative judgments.",
"For gender, we observe that the bias estimated by the definition of Manzini et al. (2019) steadily increases before peaking at the penultimate layer and slightly decreasing thereafter.",
"In contrast, under bias GARG-EUC 5 We slightly modify the definition of Manzini et al. (2019) by",
"(a) using cosine similarity where they use cosine distance and",
"(b) inserting absolute values around each term in the mean over N .",
"We make these changes to introduce consistency with the other definitions and to permit comparison.",
"we see a distribution with two peaks corresponding to layers at the start or end of the pretrained model with less bias within the intermediary layers.",
"For estimating the same quantity, bias GARG-COS is mostly uniform across the layers.",
"Similarly, in looking at the religious bias, we see similar inconsistencies with the bias increasing monotonically from layers 2 through 8 under bias MANZINI , decreasing monotonically under bias GARG-EUC , and remaining roughly constant under bias GARG-COS .",
"In general, while the choice of N (and the choice of A i for gender) does affect the absolute bias estimates, the relative trends across layers are fairly robust to these choices for a specific definition.",
"Consequences.",
"Taken together, our analysis suggests a concerning state of affairs regarding bias quantification measures for (static) word embeddings.",
"In particular, while estimates are seemingly stable to some types of choices regarding word lists, bias scores for a particular word embedding are tightly related to the definition being used and existing bias measures are markedly inconsistent with each other.",
"We find this has important consequences beyond understanding the social biases in our representations.",
"Concretely, we argue that without certainty regarding the extent to which embeddings are biased, it is impossible to properly interpret the meaningfulness of debiasing procedures (Bolukbasi et al., 2016; Zhao et al., 2018a,b; Sun et al., 2019) as we cannot reliably estimate the bias in the embeddings both before and after the procedure.",
"This is further compounded with the existing evidence that current intrinsic measures of social bias may not handle geometric behavior such as clustering (Gonen and Goldberg, 2019).",
"Cross-Model Bias Trends.",
"In light of the above, next we compare bias estimates across different pretrained models in Table 3.",
"Given the conflicting scores assigned by different definitions, we retain all definitions along with all social attributes in this comparison.",
"However, we only consider target words given by N prof due to the aforementioned stability (and for visual clarity) with results for N adj appearing in Table 8.",
"Since we do not preprocess or normalize embeddings, the scores using bias GARG-EUC are incomparable (and may be improper to compare in the layer-wise case) as Figure 2: Layer-wise bias of distilled BERT-12 embeddings for f = mean , g = mean , N = 100 K. Gender Race Religion B, P GE, P GC, P M, P GE GC M M GE GC M Word2Vec 0.0503 0.1758 0.075 0.2403 0.1569 0.0677 0.2163 0.0672 0.0907 0.053 0.14 GloVe 0.0801 0.3534 0.0736 0.1964 0.357 0.0734 0.1557 0.1171 0.2699 0.0702 0.0756 BERT-12 0.0736 0.3725 0.0307 0.3186 0.2868 0.0254 0.3163 0.2575 1.2349 0.0604 0.2955 BERT-24 0.0515 0.6418 0.0462 0.234 0.4674 0.0379 0.2284 0.1956 0.6476 0.0379 0.2316 GPT2-12 0.4933 25.8743 0.0182 0.6464 2.0771 0.0062 0.7426 0.6532 4.5282 0.0153 0.776 GPT2-24 0.6871 40.1423 0.0141 0.8514 2.3244 0.0026 0.9019 0.8564 8.9528 0.0075 0.9081 RoBERTa-12 0.0412 0.2923 0.0081 0.8546 0.2077 0.0057 0.8551 0.8244 0.4356 0.0111 0.844 RoBERTa-24 0.0459 0.3771 0.0089 0.7879 0.2611 0.0064 0.783 0.7479 0.5905 0.0144 0.7636 XLNet-12 0.0838 1.0954 0.0608 0.3374 0.6661 0.042 0.34 0.2792 0.8537 0.0523 0.318 XLNet-24 0.0647 0.7644 0.0407 0.381 0.459 0.0268 0.373 0.328 0.8009 0.0505 0.368 DistilBERT-6 0.0504 0.5435 0.0375 0.3182 0.3343 0.0271 0.3185 0.2786 0.8128 0.0437 0.3106 Table 3: Social bias encoded within different pretrained models with respect to a set of professions N prof .",
"they are sensitive to the absolute norms of the embeddings.",
"6 Further, we note that bias BOLUKBASI may not be a reliable indicator since the first principal component explains less than 35% of the variance for the majority of distilled embedding (Zhao et al. (2019a) show similar findings for ELMo).",
"For bias MANZINI and bias GARG-COS , we find that all distilled static embeddings have substantially higher scores under bias MANZINI but generally lower scores under bias GARG-COS when compared to Word2Vec and GloVe.",
"Interestingly, we see that under bias MANZINI both GPT-2 and RoBERTa embeddings consistently get high scores when compared to other distilled embeddings but under bias GARG-COS they are deemed the least biased.",
"Data alone does not determine bias.",
"Comparing the results for BERT-12 and BERT-24 (full layer-wise results for BERT-24 appear in Figure 11) reveals that bias trends for BERT-12 and BERT-24 are starkly different for any fixed 6 When we normalized using the Euclidean norm, we found the relative results to reliably coincide with those for bias GARG-COS which is consistent with Garg et al. (2018).",
"bias measure.",
"What this indicates is the bias observed in contextualized models is not strictly determined by the training data (as these models share the same training data as do all other 12 and 24 model pairs) and must also be a function of the architecture, training procedure, and/or random initialization.",
"Takeaways.",
"Ultimately, given the aforementioned issues regarding the reliability of bias measures, it is difficult to arrive at clear consensus of the how the bias encoded compares between our distilled representations and prior static embeddings.",
"What our analysis does resolutely reveal is a pronounced and likely problematic effect of existing bias definitions on the resulting bias estimates.",
"Contextualized Static.",
"Recently, Akbik et al. (2019) introduced an approach that gradually aggregates representations during training to accumulate global information and demonstrated improvements over only contextualized representations for NER.",
"May et al. (2019) instead synthetically construct a single semantically-bleached sentence which is fed into a sentence encoder to yield a static representation.",
"In doing so, they introduce SEAT as a means for studying biases in sentence encoders by applying WEAT (Caliskan et al., 2017) to the resulting static representations.",
"This approach appears inappropriate for quantifying bias in sentence encoders 7 as sentence encoders are trained on semantically-meaningful sentences and semantically-bleached constructions are not representative of this distribution and their templates heavily rely on deictic expressions which are difficult to adapt for certain syntactic categories such as verbs (as required for SIMVERB 3500 especially).",
"Given these concerns, our reduction method may be preferable for use in estimation of bias in contextualized representations.",
"Due to the fact that we use mean-pooling, our approach may lend itself to interpretations of the bias in a model on average across contexts.",
"Ethayarajh (2019) considers a similar method to ours where pooling is replaced by PCA .",
"While this work demonstrated contextualized representations are highly contextual, our work naturally explores the complementary problem of what value can be extracted from the static analogue of these representations.",
"Bias.",
"Social bias in NLP has been primarily evaluated in three ways:",
"(a) using geometric similarity between embeddings (Bolukbasi et al., 2016; Garg et al., 2018; Manzini et al., 2019),",
"(b) adapting psychological association tests (Caliskan et al., 2017; May et al., 2019), and",
"(c) considering downstream behavior (Zhao et al., 2017, 2018a, 2019a; Stanovsky et al., 2019).",
"8 Our bias evaluation is in the style of",
"(a) and we consider multi-class social bias in the lens of gender, race, and religion whereas prior work has centered on binary gender.",
"Additionally, while most prior work has discussed the static embedding setting, recent work has considered sentence encoders and contextualized models.",
"Zhao et al. (2019a) consider gender bias in ELMo when applied to coreference systems and Kurita et al. (2019) extend these results by leveraging the masked language modeling objective of BERT.",
"Similarly, Basta et al. (2019) considers intrinsic gender bias in ELMo via gender-swapped 7 The authors also identified several empirical concerns that draw the meaningfulness of this method into question.",
"8 Sun et al. (2019) provides a taxonomy of the work towards understanding gender bias within NLP.",
"sentences.",
"When compared to these approaches, we study a broader class of biases under more than one bias definition and consider more than one model.",
"Further, while many of these approaches generally neglect reporting bias values for different layers of the model, we show this is crucial as bias is not uniformly distributed throughout model layers and practitioners often do not use the last layer of deep Transformer models (Liu et al., 2019a; Zhang et al., 2020; Zhao et al., 2019b).",
"9 7 Future Directions Our work furnishes multiple insights about pretrained contextualized models that suggest changes (subword pooling, layer choice, beneficial variance reduction via averaging across contexts) to improve downstream performance.",
"Recent models have combined static and dynamic embeddings (Peters et al., 2018; Bommasani et al., 2019; Akbik et al., 2019) and our representations may also support drop-in improvements in these settings.",
"While not central to our goals, we discovered that our static embeddings substantially outperform Word2Vec and GloVe under intrinsic evaluation.",
"Future research may consider downstream gains as improved static embeddings are critical for resource-constrained settings and may help address environmental concerns in NLP (Strubell et al., 2019), machine learning (Canziani et al., 2016), and the broader AI community (Schwartz et al., 2019).",
"Future research could explore weighting schema in the averaging process analogous to SIF (Arora et al., 2017) for sentence representations computed via averaging (Wieting et al., 2016).",
"The generality of the proxy analysis method implies that other interpretability methods for static embeddings can also be considered.",
"Further, post-processing approaches beyond analy-sis/interpretability such as dimensionality reduction may be particularly intriguing given that this is often challenging to perform within large multilayered networks like BERT (Sanh et al., 2019) but has been successfully demonstrated for static embeddings (Nunes and Antunes, 2018; Mu and Viswanath, 2018; Raunak et al., 2019).",
"Future work may revisit the choice of the corpus D from which contexts are drawn.",
"For downstream use, setting D to be the target domain may serve as a lightweight domain adaptation strategy similar to findings for averaged word representations for 9 This is the only layer studied in Kurita et al. (2019).",
"While our work demonstrates that contextualized representations retain substantial representational power even when reduced to be noncontextual, it is unclear what information is lost.",
"After all, contextualized representations have been so effective precisely because they are tremendously contextual (Ethayarajh, 2019).",
"As such, the validity of treating the resulting static embeddings as reliable proxies for the original contextualized model still remains open.",
"On the other hand, human language processing has often been conjectured to have both context-dependent and context-independent properties (Barsalou, 1982; Rubio-Fernandez, 2008; De-praetere, 2014, 2019).",
"Given this divide, our approach may provide an alternative mechanism for clarifying how these two properties interact in the computational setting from both an interpretability standpoint (i.e. comparing results for analyses on the static embeddings and the original contextualized representations) and a downstream standpoint (i.e. comparing downstream performance for models initialized using the static embeddings and the original contextualized representations).",
"However, the precise relationship between the role of context in human language processing and computational language processing remains unclear.",
"Theoretical explanation for the behavior we observe in two settings is also needed.",
"First, it is unclear why learning contextualized representations and then reducing them to static embeddings drastically outperforms directly learning static embeddings.",
"In particular, the GloVe embeddings we use are learned using 6 billion tokens whereas the BERT representations were trained on roughly half as much data (3.3 billion tokens).",
"Perhaps the behavior is reminiscent of the benefits of modelling in higher dimensional settings temporarily as is seen in other domains (e.g. the kernel trick and Mercer's theorem for learning non-linear classifiers using inner product methods): begin by recasting the problem in a more expressive space (contextual-ized representations) and then project/reduce to the original space (static embeddings).",
"Second, the reason for the benefits of the variance reduction that we observe are unclear.",
"Given that best-performing mechanism is to average over many contexts, it may be that approaching the asymptotic mean of the distribution across contexts is desirable/helps combat the anisotropy that exists in the original contextualized space (Ethayarajh, 2019).",
"In this work, we consider how methods developed for analyzing static embeddings can be re-purposed for understanding contextualized representations.",
"We introduce simple and effective procedures for converting from contextualized representations to static word embeddings.",
"When applied to pretrained models like BERT, we find the resulting embeddings are useful proxies that provide insights into the pretrained model while simultaneously outperforming Word2Vec and GloVe substantially under intrinsic evaluation.",
"We further study the extent to which various social biases (gender, race, religion) are encoded, employing several different quantification schemas.",
"Our large-scale analysis reveals that bias is encoded disparately across different popular pretrained models and different model layers.",
"Our findings also have significant implications with respect to the reliability of existing protocols for estimating bias in word embeddings.",
"All data, code and visualizations are made publicly available.",
"10 Further details are explictly and comprehensively reported in Appendix A. Acknowledgments We thank Ge Gao, Marty van Schijndel, Forrest Davis, and members of the Mozilla DeepSpeech and Cornell NLP groups for their valuable advice.",
"We especially thank the reviewers and area chairs for their articulate and constructive feedback."
] | [
"abstain",
"abstain",
"objective",
"method",
"result",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"objective",
"objective",
"method",
"method",
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"result",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"method",
"abstain",
"result",
"abstain",
"objective",
"method",
"result",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain"
] |
[
"Related tasks often have inter-dependence on each other and perform better when solved in a joint framework.",
"In this paper, we present a deep multi-task learning framework that jointly performs sentiment and emotion analysis both.",
"The multi-modal inputs (i.e., text , acoustic and visual frames ) of a video convey diverse and distinctive information, and usually do not have equal contribution in the decision making.",
"We propose a context-level inter-modal attention framework for simultaneously predicting the sentiment and expressed emotions of an utterance.",
"We evaluate our proposed approach on CMU-MOSEI dataset for multi-modal sentiment and emotion analysis.",
"Evaluation results suggest that multitask learning framework offers improvement over the single-task framework.",
"The proposed approach reports new state-of-the-art performance for both sentiment analysis and emotion analysis.",
"With the rapid growth of social media video platforms such as Youtube, Vimeo, users now tend to upload videos on these platforms.",
"Such video platforms offer users an opportunity to express their opinions on any topic.",
"Videos usually consist of audio and visual modalities, and thus can be considered as a source of multi-modal information.",
"Although videos contain more information than text, fusing multiple modalities is a major challenge.",
"A common practice in sentiment analysis and emotion recognition or affective computing, in general, is to analyze textual opinions.",
"However, in recent days multi-modal affect analysis has gained a major attention (Poria et al., 2017b, 2016).",
"In these works, in addition to the visual frames , other sources of information such as acoustic and textual (transcript) representation of the spoken languages are also incorporated in the analysis.",
"Multi-modal analysis (e.g. sentiment analysis Zadeh et al. 2018c, emotion recognition Poria et al. 2016, question-answering Teney et al. 2017 etc.) is an emerging field of study, that utilizes multiple information sources for solving a problem.",
"These sources (e.g., text, visual, acoustic, etc.) offer a diverse and often distinct piece of information that a system can leverage on.",
"For example, text' carries semantic information of the spoken sentence, whereas acoustic ' information reveals the emphasis (pitch, voice quality) on each word.",
"In contrast, the visual ' information (image or video frame) extracts the gesture and posture of the speaker.",
"Traditionally, text ' has been the key factor in any Natural Language Processing (NLP) tasks including sentiment and emotion analysis.",
"However, with the recent emergence of social media platforms and their available multi-modal contents, an interdisciplinary study involving text , acoustic and visual features have drawn significant interest among the research community.",
"Effectively fusing this diverse information is non-trivial and poses several challenges to the underlying problem.",
"In our current work, we propose a multi-task model to extract both sentiment (i.e. positive or negative ) and emotion (i.e. anger , disgust , fear , happy , sad or surprise ) of a speaker in a video.",
"In multi-task framework, we aim to leverage the inter-dependence of these two tasks to increase the confidence of individual task in prediction.",
"For e.g., information about anger emotion can help in prediction of negative sentiment and vice-versa.",
"A speaker can utter multiple utterances (a unit of speech bounded by breathes or pauses) in a single video and these utterances can have different sentiments and emotions.",
"We hypothesize that the sentiment (or, emotion) of an utterance often has inter-dependence on other contextual utterances i.e. the knowledge of sentiment (or, emotion) for an utterance can assist in classifying its neighbor utterances.",
"We utilize all three modalities (i.e. text , acoustic and visual ) for the analysis.",
"Although all these sources of information are crucial, they are not equally beneficial for each individual instance.",
"Few examples are presented in Table 1.",
"In the first example, visual frames provide important clues than textual information for find-ing the sentiment of a sarcastic sentence Thanks for putting me on hold! I've all the time in the world. .",
"Similarly, the textual representation of second example I'm fine.",
"' does not reveal the exact emotion of a sad person.",
"For this particular case, acoustic or visual information such as low tone voice, facial expression etc. have bigger role to play for the classification.",
"Multi-task learning paradigm provides an efficient platform for achieving generalization.",
"Multiple tasks can exploit the inter-relatedness for improving individual performance through a shared representation.",
"Overall, it provides three basic advantages over the single-task learning paradigm",
"a).",
"it helps in achieving generalization for multiple tasks;",
"b).",
"each task improves its performance in association with the other participating tasks; and",
"c).",
"offers reduced complexity because a single system can handle multiple problems/tasks at the same time.",
"Sentiments (Pang et al., 2005) and emotions (Ekman, 1999) are closely related.",
"Most of the emotional states have clear distinction of being a positive or negative situation.",
"Emotional states e.g. anger ', fear ', disgust ', sad ' etc. belong to negative situations, whereas happy ' and surprise ' reflect the positive situations.",
"Motivated by the association of sentiment & emotion and the advantages of the multi-task learning paradigm, we present a multi-task framework that jointly learns and classifies the sentiments and emotions in a video.",
"As stated earlier, contextual-utterances and/or multi-modal information provide important cues for the classification.",
"Our proposed approach applies attention over both of these sources of information simultaneously (i.e., contextual utterance and inter-modal information), and aims to reveal the most contributing features for the classification.",
"We hypothesize that applying attention to contributing neighboring utterances and/or multimodal representations may assist the network to learn in a better way.",
"Our proposed architecture employs a recurrent neural network based contextual inter-modal attention framework.",
"In our case, unlike the previous approaches, that simply apply attention over the contextual utterance for classification, we take a different approach.",
"Specifically, we attend over the contextual utterances by computing correlations among the modalities of the target utterance and the context utterances.",
"This particularly helps us to distinguish which modalities of the relevant contextual utterances are more important for the classification of the target utterance.",
"The model facilitates this modality selection process by attending over the contextual utterances and thus generates better multi-modal feature representation when these modalities from the context are combined with the modalities of the target utterance.",
"We evaluate our proposed approach on the recent benchmark dataset of CMU-MOSEI (Zadeh et al., 2018c).",
"It is the largest available dataset (ap-prox. 23K utterances) for multi-modal sentiment and emotion analysis (c.f. Dataset Section).",
"The evaluation shows that contextual inter-modal attention framework attains better performance than the state-of-the-art systems for various combinations of input modalities.",
"The main contributions of our proposed work are three-fold:",
"a) we leverage the interdependence of two related tasks (i.e. sentiment and emotion) in improving each others performance using an effective multi-modal framework ;",
"b) we propose contextual inter-modal attention mechanism that facilitates the model to assign weightage to the contributing contextual utterances and/or to different modalities simultaneously .",
"Suppose, to classify an utterance u1 ' of 5 utterances video, visual features of u2 ' & u4 ', acoustic features of u3 ' and textual features of u1 ', u3 ' & u5 ' are more important than others.",
"Our attention model is capable of highlighting such diverse contributing features; and",
"c) we present the state-of-the-arts for both sentiment and emotion predictions .",
"A survey of the literature suggests that multimodal sentiment prediction is a relatively new area as compared to textual based sentiment prediction (Morency et al., 2011; Poria et al., 2017b; Zadeh et al., 2018a).",
"A good review covering the literature from uni-modal analysis to multi-modal analysis is presented in (Poria et al., 2017a).",
"Zadeh et al. (2016) introduced the multi-modal dictionary to understand the interaction between facial gestures and spoken words better when expressing sentiment.",
"In another work, Zadeh et al. (2017) proposed a Tensor Fusion Network (TFN) model to learn the intra-modality and inter-modality dynamics of the three modalities (i.e., text, visual and acoustic).",
"Authors reported improved accuracy using multi-modality on the CMU-MOSI dataset.",
"These works did not take contextual information into account.",
"Poria et al. (2017b) proposed a Long Short Term Memory (LSTM) based framework for sentiment classification that leverages the contextual information to capture the inter-dependencies between the utterances.",
"Zadeh et al. (2018a) proposed multi-attention blocks (MAB) to capture the information across the three modalities (text, visual and acoustic) for predicting the sentiments.",
"Authors evaluated their approach on the different datasets and reported improved accuracies in the range of 2-3% over the state-of-the-art models.",
"Blanchard et al. (2018) proposed a multi-modal fusion model that exclusively uses high-level visual and acoustic features for sentiment classification.",
"An application of multi-kernel learning based fusion technique was proposed in (Poria et al., 2016), where the authors employed deep convolutional neural network (CNN) for extracting the textual features and fused it with other modalities ( visual & acoustic ) for emotion prediction.",
"Ranganathan et al. (2016) proposed a convolutional deep belief network (CDBN) models for multi-modal emotion recognition.",
"The author used CDBN to learn salient multi-modal (acous-tic and visual) features of low-intensity expressions of emotions.",
"Hazarika et al. (2018) introduced a selfattention mechanism for multi-modal emotion detection by feature level fusion of text and speech.",
"Recently, Zadeh et al. (2018c) introduced the CMU-MOSEI dataset for multi-modal sentiment analysis and emotion recognition.",
"They effectively fused the tri-modal inputs through a dynamic fusion graph and also reported competitive performance w.r.t. various state-of-the-arts on MOSEI dataset for both sentiment and emotion classification.",
"The main difference between the proposed and existing methods is contextual inter-modal attention.",
"Systems (Poria et al., 2016; Zadeh et al., 2016, 2017; Blanchard et al., 2018) do not consider context for the prediction.",
"System (Po-ria et al., 2017b) uses contextual information for the prediction but without any attention mechanism.",
"In contrast, (Zadeh et al., 2018a) uses multi-attention blocks but did not account for contextual information.",
"Our proposed model is novel in the sense that our approach applies attention over multi-modal information of the contextual utterances in a single step.",
"Thus, it ensures to reveal the contributing features across multiple modalities and contextual utterances simultaneously for sentiment and emotion analysis.",
"Further, to the best of our knowledge, this is the first attempt at solving the problems of multi-modal sentiment and emotion analysis together in a multi-task framework.",
"The contextual inter-modal attention mechanism is not much explored in NLP domains as such.",
"We found one work that accounts for bi-modal attention for visual question-answering (VQA) (Teney et al., 2017).",
"However, its attention mechanism differs from our proposed approach in the following manner:",
"a) VQA proposed question guided image-attention, but our attention mechanism attends multi-modalities;",
"b) attention is applied over different positions of the image, whereas our proposed approach applies attention over multiple utterances and two-modalities at a time;",
"c).",
"our proposed attention mechanism attends a sequence of utterances (text, acoustic or vi-sual), whereas VQA applies attention in the spatial domain.",
"In another work, Ghosal et al. (2018) proposed an inter-modal attention framework for the multi-modal sentiment analysis.",
"However, the key differences with our current work are as follows:",
"a) Ghosal et al. (2018) addressed only sentiment analysis, whereas, in our current work, we address both the sentiment and emotion analysis;",
"b) Ghosal et al. (2018) handles only sentiment analysis in single task learning framework, whereas our proposed approach is based on multi-task learning framework, where we solve two tasks, i.e., sentiment analysis and emotion analysis, together in a single network;",
"c) we perform detailed comparative analysis over the single-task vs. multitask learning; and",
"Recognition and Sentiment Analysis In our proposed framework, we aim to leverage multi-modal and contextual information for predicting sentiment and emotion of an utterance simultaneously in a multi-task learning framework.",
"As stated earlier, a video consists of a sequence of utterances and their semantics often have inter-dependencies on each other.",
"We employ three bi-directional Gated Recurrent Unit (bi-GRU) network for capturing the contextual information (i.e., one for each modality).",
"Subsequently, we introduce pair-wise inter-modal attention mechanism (i.e. visual-text , text-acoustic and acoustic-visual ) to learn the joint-association between the multiple modalities & utterances.",
"The objective is to emphasize on the contributing features by putting more attention to the respective utterance and neighboring utterances.",
"Motivated by the residual skip connection (He et al., 2016) the outputs of pair-wise attentions along with the representations of individual modalities are concatenated.",
"Finally, the concatenated representation is shared across the two branches of our proposed networkcorresponding to two tasks, i.e., sentiment and emotion classification for prediction (one for each task in the multi-task frame-work).",
"Sentiment classification branch contains a softmax layer for final classification (i.e. positive & negative ), whereas for emotion classification we use sigmoid layer.",
"The shared representation will receive gradients of error from both the branches (sentiment & emotion) and accordingly adjust the weights of the models.",
"Thus, the shared representations will not be biased to any particular task, and it will assist the model in achieving generalization for the multiple tasks.",
"Empirical evidences support our hypothesis (c.f. Table 4).",
"Our contextual inter-modal attention framework works on a pair of modalities.",
"At first, we capture the cross-modality information by computing a pair of matching matrices M 1 , M 2 R u u , where u' is the number of utterances in the video.",
"Further, to capture the contextual dependencies, we compute the probability distribution scores ( N 1 , N 2 R u u ) over each utterance of cross-modality matrices M 1 , M 2 using a softmax function.",
"This essentially computes the attention weights for contextual utterances.",
"Subsequently, we apply soft attention over the contextual inter-modal matrices to compute the modalitiy-wise attentive representations ( O 1 & O 2 ).",
"Finally, a multiplicative gating mechanism (Dhingra et al., 2016) ( A 1 & A 2 ) is introduced to attend the important components of multiple modalities and utterances.",
"The concatenated attention matrix of A 1 & A 2 then acts as the output of our contextual inter-modal attention framework.",
"The entire process is repeated for each pair-wise modalities i.e. text-visual , acoustic-visual and text-acoustic .",
"We illustrate and summarize the proposed methodology in Figure 1 and Algorithm 1, respectively.",
"Algorithm 1 Multi-task Multi-modal Emotion and Sentiment (MTMM-ES) procedure MTMM-ES( t, v, a ) d 100 (cid:46) GRU dimension T biGRU T ( t, d ) V biGRU V ( v, d ) A biGRU A ( a, d ) Atn TV CIM-Attention ( T, V ) Atn AV CIM-Attention ( A, V ) Atn TA CIM-Attention ( T, A ) Rep [ Atn TV , Atn AV , Atn TA , T, V, A ] polarity Sentiment ( Rep ) emotion Emotion ( Rep ) return polarity, emotion procedure CIM-ATTENTION ( X, Y ) /*Cross-modality information*/ M 1 X.Y TM 2 Y.X T /*Contextual Inter-modal attention*/ for i, j 1 , ..., u do (cid:46) u = # utterances N 1 ( i, j ) e M 1( i,j ) (cid:80) uk =1 e M 1( i,k ) N 2 ( i, j ) e M 2( i,j ) (cid:80) uk =1 e M 2( i,k ) O 1 N 1",
"In this section, we describe the datasets used for our experiments and report the results along with necessary analysis.",
"We evaluate our proposed approach on the benchmark datasets of sentiment and emotion analysis, namely CMU Multi-modal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) dataset (Zadeh et al., 2018c).",
"CMU-MOSEI dataset consists of 3,229 videos spanning over 23,000 utterances from more than 1,000 online YouTube speakers.",
"The training, validation & test set comprises of 16216, 1835 & 4625 utterances, respectively.",
"Each utterance has six emotion values associated with it, representing the degree of emotion for anger , disgust , fear , happy , sad and surprise .",
"Emotion labels for an utterance are identified as all non-zero intensity values, i.e. if an utterance has three emotions with non-zero values, we take all three emotions as multi-labels.",
"Further, an utterance that has no emotion label represents the absence of emotion.",
"For experiments, we adopt 7-classes (6 emotions + 1 no emotion ) and pose it as multi-label classification problem, where we try to Statistics Train Dev Test #Videos 2250 300 679 #Utterance 16216 1835 4625 #Positive 11499 1333 3281 #Negative 4717 502 1344 #Anger 3506 334 1063 #Disgust 2946 280 802 #Fear 1306 163 381 #Happy 8673 978 2484 #Sad 4233 511 1112 #Surprise 1631 194 437 #Speakers 1000 Table 2: Dataset statistics for CMU-MOSEI.",
"optimize the binary-cross entropy for each of the class.",
"A brief statistics for multi-label emotions is presented in Table 3.",
"In contrast, the sentiment values for each utterance are disjoint, i.e. value < 0 and value 0 represent the negative and positive sentiments, respectively.",
"A detailed statistics of the CMU-MOSEI dataset is shown in Table 2.",
"We use the CMU-Multi-modal Data SDK 1 for downloading and feature extraction.",
"The dataset was pre-tokenized and a feature vector was provided for each word in an utterance.",
"The textual , visual and acoustic features were extracted by GloVe (Pennington et al., 2014), Facets 2 & CovaRep (Degottex et al., 2014), respectively.",
"Thereafter, we compute the average of word-level features to obtain the utterance-level features.",
"We evaluate our proposed approach on the datasets of CMU-MOSEI.",
"We use the Python based Keras library for the implementation.",
"We compute F1-score and accuracy values for sentiment classification and F1-score and weighted accuracy (Tong et al., 2017) for emotion classification.",
"Weighted accuracy as a metric is chosen due to unbalanced samples across various emotions and it is also in line with the other existing works (Zadeh et al., 2018c).",
"To obtain multi-labels for emotion classification, we use 7 sigmoid neurons (corresponds to 6 emotions + 1 no-emotion) with binary cross-entropy loss function.",
"Finally, we take all the emotions whose respective values are above a threshold .",
"We optimize and cross-validate both the evaluation metrics (i.e. F1score and weighted accuracy) and set the threshold as 0 .",
"4 & 0 .",
"2 for F1-score and weighted accuracy, respectively.",
"We show our model configurations in Table 5.",
"As stated earlier, our proposed approach requires at least two modalities to compute bimodal attention.",
"Hence, we experiment with bimodal and tri-modal input combinations for the proposed approach i.e. taking text-visual , text-acoustic , acoustic-visual and text-visual-acoustic at a time.",
"For completeness (i.e., uni-modal in-1 https://github.com/A2Zadeh/ CMU-MultimodalDataSDK 2 https://pair-code.github.io/facets/ Parameters Values Bi-GRU 2 200 neurons , dropout=0.3 Dense layer 100 neurons , dropout=0.3 Activations ReLu Optimizer Adam ( lr=0.001 ) Output Softmax (Sent) & Sigmoid (Emo) Loss Categorical cross-entropy (Sent) Binary cross-entropy (Emo) Threshold 0.4 (F1) & 0.2 (W-Acc) for multi-label Batch 16 Epochs 50 Table 5: Model configurations puts), we also experiment with a variant of the proposed approach where we apply self-attention on the utterances of each modality separately.",
"The self-attention unit utilizes the contextual information of the utterances (i.e., it receives u d hidden representations), applies attention and forward it to the output layer for classification.",
"We report the experimental results of both single-task (STL) and multi-task (MTL) learning framework in Table 4.",
"In the single-task framework, we build separate systems for sentiment and emotion analysis, whereas in multi-task framework a joint-model is learned for both of these problems.",
"For sentiment classification, our single-task framework reports an F1-score of 77.67% and accuracy value of 79.8% for the tri-modal inputs.",
"Similarly, we obtain 77.71% F1-score and 60.88% weighted accuracy for emotion classification.",
"Comparatively, when both the problems are learned and evaluated in a multi-task learning framework, we observe performance enhancement for both sentiments as well as emotion classification.",
"MTL reports 78.86% F1-score and 80.47% accuracy value in comparison to 77.67% and 79.8% of STL with tri-modal inputs, respectively.",
"For emotion classification, we also observe an improved F-score (78.6 (MTL) vs. 77.7 (STL)) and weighted accuracy (62.8 (MTL) vs. 60.8 (STL)) T A V T+V T+A A+V T+A+V 65 70 75 80 F 1 -S c o r e T A V T+V T+A A+V T+A+V 707274767880 F 1 -S c o r e T A V T+V T+A A+V T+A+V 74 76 78 80 Sentiment A cc u r ac y T A V T+V T+A A+V T+A+V 55 60 65 Emotion W e i gh t e d A cc u r ac y STL MTL Figure 2: Single-task learning (STL) and Multi-task (MTL) learning frameworks for the proposed approach.",
"in the multi-task framework.",
"It is evident from Figure 2 that multi-task learning framework successfully leverages the inter-dependence of both the tasks in improving the overall performance in comparison to single-task learning.",
"The improvements of MTL over STL framework is also statistically significant with p -value < 0 .",
"05 (c.f. Table 7).",
"We also present attention heatmaps of the multitask learning framework in Figure 3.",
"For illustration, we take the video of the first utterance of Table",
"6. It has total six utterances.",
"We depict three pair-wise attention matrices of 2 (6 6) dimension-one each for text-visual , text-acoustics and acoustics-visual .",
"Solid lines in between represent the boundary of the two modalities, e.g. left Emotion Sentiment Anger Disgust Fear Happy Sad Surprise Average System F1 W-Acc F1 W-Acc F1 W-Acc F1 W-Acc F1 W-Acc F1 W-Acc F1 W-Acc F1 Acc Blanchard et al. (2018) -------63.2 60.0 Zadeh et al. (2018b) (cid:63) -71.4 65.2 89.9 --60.8 -85.4 53.3 -76.0 76.0 Nojavanasghari et al. (2016) (cid:63) 71.4 -67.0 ------Rajagopalan et al. (2016) (cid:63) -56.0 ------76.4 76.4 EF-LSTM (Zadeh et al., 2018c) (cid:63) ---56.7 -57.8 -59.2 ---TFN (Zadeh et al., 2017) (cid:63) -60.5 --66.6 66.5 -58.9 -52.2 --Random Forest (Breiman, 2001) (cid:63) 72.0 -73.2 -89.9 --61.8 -85.4 --SVM (Zadeh et al., 2016) (cid:63) ---60.0 -----Zadeh et al. (2018a) (cid:63) ---71.0 ----Zadeh et al. (2018c) 72.8 62.6 76.6 69.1 89.9 62.0 66.3 66.3 66.9 60.4 85.5 53.7 76.3 62.3 77.0 76.9 Proposed (Single-task learning) 75.6 64.5 81.0 72.2 87.7 51.5 59.3 61.6 67.3 65.4 86.5 53.0 76.2 61.3 77.6 79.8 Proposed (Multi-task learning) 75.9 66.8 81.9 72.7 87.9 62.2 67.0 53.6 72.4 61.4 86.0 60.6 78.6 62.8 78.8 80.5 Significance T -test w.r.t. SOTA------0.0240 0.0420 0.0012 0.0046 Significance T -test w.r.t. STL------0.0171 0.0312 0.0015 0.0278 Table 7: Comparative results: Proposed multi-task framework attains better performance as compared to the state-of-the-art (SOTA) systems in both the tasks i.e. emotion recognition (average) and sentiment analysis.",
"side of Figure 3a represents text modality and right side represents the visual modality.",
"The heatmaps represent the contributing features for the classification of utterances.",
"Each cell ( i,j ) of Figure 3 signifies the weights of utterance j ' for the classification of utterance i ' of the pair-wise modality matrices.",
"For example, for the classification of utterance u4 ' in Figure 3a, model puts more focus on the textual features of u2 ' and u6 ' than others and more-or-less equal focus on the visual features of all the utterances.",
"We compare our proposed approach against various existing systems (Nojavanasghari et al., 2016; Rajagopalan et al., 2016; Zadeh et al., 2017, 2018a,b,c; Blanchard et al., 2018) that made use of the same datasets.",
"A comparative study is shown in Table",
"7. We report the results of the top three existing systems (as reported in Zadeh et al. 2018c) for each case.",
"In emotion classification, the proposed multi-task learning framework reports the best F1-score of 78.6% as compared to the 76.3% and Weighted Accuracy of 62.8% as compared to the 62.3% of the state-of-the-art.",
"Similarly, for sentiment classification, the state-of-the-art system reports 77.0% F1-score and 76.9% accuracy value in the multi-task framework.",
"In comparison, we obtain the best F1-score and accuracy value of 78.8% and 80.4%, respectively, i.e., an improvement of 1.8% and 3.5% over the state-of-the-art systems.",
"During analysis, we make an important observation.",
"Small improvements in performance do not reveal the exact improvement in the number of instances.",
"Since there are more than 4.6K test samples, even the improvement by one point re-flects that the system improves its predictions for 46 samples.",
"We also perform test-of-significance ( T -test) and observe that the obtained results are statistically significant w.r.t. the state-of-the-art and proposed single-task results with p -values < 0 .",
"05 .",
"In this section, we present our analysis w.r.t. single-task and multi-task frameworks.",
"Table 8 lists a few example cases where the proposed multi-task learning framework shows how it yields better performance for both sentiment and emotion, while the single-task framework finds it nontrivial for the classification.",
"For example, first utterance has gold sentiment label as negative which was misclassified by STL framework.",
"However, the MTL framework improves this by correctly predicting positive '.",
"Similarly, in emotion analysis STL predicts three emotions i.e. disgust , happy and sad , out of which only one emotion ( disgust ) matches the gold emotions of anger and disgust .",
"In comparison, MTL predicts four emotions (i.e. anger , disgust , happy and sad ) for the same utterance.",
"The precision (2/4) and recall (2/2) for MTL framework is better than the precision (1/3) and recall (1/2) for the STL framework.",
"These analyses suggest that the MTL framework, indeed, captures better evidences than the STL framework.",
"In the second example, knowledge of sentiment helps in identifying the correct emotion label in the MTL framework.",
"For the gold sentiment ( positive ) and emotion ( happy and sad ) labels, STL correctly classifies one emotion (i.e. sad ), but fails to predict the other emotion (i.e. happy ).",
"In addition, it misclassifies another emotion (i.e. anger ).",
"Since, gold label happy corresponds to the posi-Sentiment Emotion Utterances Actual STL MTL Actual STL MTL 1 richardgereandsusanummyouireallydidn'tenjoythismovieatallitkindaboringfor Neg Pos Neg Anger, Disgust Disgust, Happy, Sad Anger, Disgust, Happy, Sad 2 we look forward to cooperating with the new government as it workstomakeprogressonawiderangeofissuesincludingfur-therdemocraticreformspromotionofhumanrightseconomicdevelopmentandnationalreconciliation Pos Pos Pos Happy, Sad Anger, Sad Happy, Sad 3 laughter and applause still there",
"tive scenario and predicted label anger is related to negative, knowledge of sentiment is a crucial piece of information.",
"Our MTL framework identifies this relation and leverage the predicted sentiment for the classification of emotion i.e. positive sentiment assists in predicting happy emotion.",
"This is an example of inter-dependence between the two related tasks and the MTL framework successfully exploits it for the performance improvement.",
"We also observe that the system puts comparatively more focus on some classes in MTL framework than the STL framework.",
"As an instance, MTL predicts anger ' class for 1173 utterances, whereas STL predicts it for 951 utterances (1063 anger utterances in the gold dataset).",
"Further, we observe contrasting behavior for the sad ' class, where MTL predicts 1618 utterances as sad ' compared to the 2126 utterances of STL.",
"For disgust ' and happy ' classes, both STL and MTL frameworks predict the approximately equal number of utterances.",
"Further, we observe that MTL performs poorly for the fear ' and surprise ' classes, where it could not predict a significant number of utterances.",
"A possible reason would be the under-representation of these instances in the given dataset.",
"In this paper, we have proposed a deep multitask framework that aims to leverage the interdependence of two related tasks, i.e., multi-modal sentiment and emotion analysis.",
"Our proposed approach learns a joint-representation for both the tasks as an application of GRU based inter-modal attention framework.",
"We have evaluated our proposed approach on the recently released benchmark dataset on multi-modal sentiment and emotion analysis (MOSEI).",
"Experimental results suggest that sentiment and emotion assist each other when learned in a multitask framework.",
"We have compared our proposed approach against the various existing systems and observed that multi-task framework attains higher performance for all the cases.",
"In the future, we would like to explore the other dimensions to our multi-task framework, e.g., Sentiment classification & intensity prediction, Emotion classification & intensity prediction and all the four tasks together.",
"Asif Ekbal acknowledges the Young Faculty Research Fellowship (YFRF), supported by Visves-varaya Ph.D. scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia)."
] | [
"abstain",
"method",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"objective",
"objective",
"other"
] |
[
"Abstract Deep learning (DL) techniques involving fine-tuning large numbers of model parameters have delivered impressive performance on the task of discriminating between language produced by cognitively healthy individuals, and those with Alzheimer's disease (AD).",
"However, questions remain about their ability to generalize beyond the small reference sets that are publicly available for research.",
"As an alternative to fitting model parameters directly, we propose a novel method by which a Transformer DL model (GPT-2) pre-trained on general English text is paired with an artificially degraded version of itself (GPT-D), to compute the ratio between these two models' perplexities on language from cognitively healthy and impaired individuals.",
"This technique approaches state-of-the-art performance on text data from a widely used \"Cookie Theft\" picture description task, and unlike established alternatives also generalizes well to spontaneous conversations.",
"Furthermore, GPT-D generates text with characteristics known to be associated with AD, demonstrating the induction of dementia-related linguistic anomalies.",
"Our study is a step toward better understanding of the relationships between the inner workings of generative neural language models, the language that they produce, and the deleterious effects of dementia on human speech and language characteristics.",
"Alzheimer's disease (AD) dementia affects every aspect of cognition, including language use.",
"Over 50 million people are currently diagnosed with AD dementia, and this number is expected to triple by 2050 (Organization et al., 2017; Patterson, 2018; Prince et al., 2016).",
"Furthermore, over half of the individuals living with dementia are undiagnosed (Lang et al., 2017).",
"While AD has no known cure, timely diagnosis can prevent or alleviate adverse outcomes ranging from anxiety over unexplained symptoms to family discord and catastrophic events (Stokes et al., 2015; Boise et al., 1999; Bond et al., 2005).",
"However, diagnosis of AD dementia is time-consuming and challenging for patients and physicians alike, and currently relies on patient and caregiver reports, extensive neuropsychological examinations, and invasive imaging and diagnostic procedures (Patterson, 2018).",
"Automated analysis of spoken language can potentially provide accurate, easy-to-use, safe and cost-effective tools for monitoring AD-related cognitive markers.",
"In particular, studies have demonstrated that supervised machine learning methods can learn to differentiate accurately between patients with dementia and healthy controls (Fraser et al., 2016; Orimaye et al., 2017), with particularly strong performance from recent deep learning (DL) models (Balagopalan et al., 2020; Roshanzamir et al., 2021).",
"However, the large number of parameters employed in DL presents a danger of overfitting to the small datasets concerned, and hinders interpretability of model predictions both critical concerns for clinical artificial intelligence applications (Graham et al., 2020).",
"As an alternative to fitting model parameters directly, we propose a novel method by which a pre-trained Transformer (Vaswani et al., 2017) model, GPT-2 (Radford et al., 2019) is paired with an artificially degraded version of itself (GPT-D), to compute the ratio of model perplexities on language from cognitively healthy and impaired individuals.",
"We anticipate that semantic information lost with dementia progression may be localized to particular layers of a neural language model, and that one can simulate this information loss by systematically modifying parameters in these layers.",
"Specifically, we hypothesize that impairing certain layers of a DL model can result in linguistic deficits that are also observed in dementia.",
"We further hypothesize 1866 that unlike prior work fitting model parameters to labeled Cooke Theft transcripts, this approach will detect task-agnostic linguistic anomalies, permitting evaluation of language from casual conversations.",
"We evaluate these hypotheses by targeting individual layers for induction of dementia-related linguistic anomalies, resulting in a degraded model GPT-D.",
"We then assess the ability of a paired perplexity approach combining GPT-2 with GPT-D to identify transcripts from participants with dementia.",
"In addition, we assess generalization performance, and consider the extent to which the best-performing degraded model reflects linguistic anomalies known to occur in AD dementia: usage of higher frequency words, and repetitiveness.",
"The contributions of this work can be summarized as follows:",
"a) we develop a novel method for automated detection of dementia-related linguistic anomalies, involving deliberate degradation of a pre-trained Transformer model;",
"b) this method exhibits state-of-the-art (SOTA) within-set performance for models trained on text alone, and is distinguished by its ability to generalize from cognitive tasks to conversational data;",
"c) the degradation process induces linguistic anomalies observed in dementia in language generated by GPT-D 1 .",
"Building on a rich body of evidence that machine learning methods can learn to distinguish between language from healthy controls and dementia patients (for a review, see Lyu (2018); Petti et al. (2020)), recent work leveraging pre-trained Transformer models has demonstrated improvements in performance over prior approaches.",
"Balagopalan et al. (2020) fine-tuned the BERT (Devlin et al., 2019) model on the training set of the AD Recognition through Spontaneous Speech (ADReSS) Challenge (Luz et al., 2020), which was developed, in part, to address the lack of standardized train/test splits and subset definitions in prior work using DementiaBank (Becker et al., 1994) (DB).",
"Balagopalan et al. (2020) report an accuracy of 83.3% on the test set, an improvement over machine learning models with expert-defined features.",
"Performance can also be further boosted by introducing more data from the same picture description task (Guo et al., 2021).",
"These findings suggest a promising direction, as models can be developed without 1 Our code is available at https://github.com/ LinguisticAnomalies/hammer-nets extensive feature engineering.",
"However, additional task-specific data are not always available.",
"DL models with millions of parameters are vulnerable to overfitting with small data sets, which may be difficult to detect as they are hard to interpret.",
"However, some DL models can be distilled into a single interpretable feature: language model (LM) perplexity (PPL).",
"PPL is a measurement of how well a language sample fits a trained LM.",
"Intuitively, a model trained on language from cognitively healthy participants should be surprised by language from participants with dementia, and the opposite should also be true.",
"Accordingly, the difference between the paired perplexities from cog-nitively healthy and dementia language models produces SOTA results on the task of identifying transcripts from participants with dementia (Fritsch et al., 2019; Cohen and Pakhomov, 2020), effectively condensing neural network parameters to a single diagnostically useful feature.",
"Contemporary deep LMs such as GPT-2 are already trained on large amounts of text, that has presumably been authored predominantly by cognitively healthy individuals.",
"The difficulty with leveraging these models within the paired perplexity paradigm arises from the lack of a correspondingly large set of text from participants with dementia.",
"We negotiate this difficulty by deliberately degrading a Transformer model to limit its semantic processing capabilities, obviating the need for large amounts of dementia-specific training data.",
"We show that the resulting models can effectively identify transcripts from participants with dementia, generalize across language samples and tasks, and generate text with linguistic characteristics of this condition.",
"We used three publicly available datasets 2 : DB, ADReSS, and the Carolinas Conversation Collection (CCC) (Pope and Davis, 2011).",
"Dataset characteristics are provided in Table 1. DB is a publicly available compendium of manually transcribed audio recordings of neuropsychological tests administered to healthy participants and patients with dementia.",
"A detailed description is available in Becker et al. (1994).",
"In brief, the tests include a 2 While the data used in this paper are publicly available, we are not able to redistribute any of these data as per Data Use agreement with Dementia Bank and the Carolinas Conversation Collection.",
"picture description task from the Boston Diagnostic Aphasia Examination (Goodglass and Kaplan, 1983), a widely-used diagnostic test for language abnormality detection.",
"In this task, the participants are presented with a Cookie Theft picture stimulus (see Figure 4 in Appendix), and are asked to describe everything they see occurring in the picture.",
"In other words, DB data are from tasks that were explicitly designed to detect language abnormalities in dementia patients.",
"We restricted the original set of 194 participants with any AD diagnosis only to those that were assessed as having probable AD, resulting in a set of 169 patients and 99 controls.",
"The ADReSS set is a subset of DB, which the controls and dementia participants were matched age and gender, resulting in a balanced dataset consisting of a total of 156 samples (78 with dementia and 78 controls) split into training and testing portions.",
"Unlike the two preceding datasets derived from picture description tasks, CCC is a collection of 646 transcribed recordings of interviews of 48 elderly cognitively normal individuals with non-dementia related chronic conditions, and 234 individuals with a diagnosis of dementia.",
"Interview topics vary considerably, and include discussions of the participant's health.",
"Additionally, we used a set of six synthetic Cookie Theft picture description narratives created by Bird et al. (2000) to study the impact of semantic dementia on verb and noun use in picture description tasks.",
"The transcripts were created to manipulate lexical frequency (which is also relevant in AD dementia, where words with higher lexical frequency tend to feature prominently (Al-mor et al., 1999)) by first compiling a composite baseline narrative from samples by healthy subjects, and then removing and/or replacing nouns and verbs in that baseline with words of higher lexical frequency (e.g., mother vs. woman vs. she).",
"Lexical frequency was calculated using the Celex Lexical Database (LDC96L14) and words were aggregated into groups based on four log frequency bands (0.5 1.0, 1.0 1.5, 1.5 2.0, 2.5 3.0: e.g., words in the 0.5 1.0 band occur in Celex more than 10 times per million).",
"We used these synthetic data to help with interpretation of the effects resulting from artificially impairing the GPT-2 model.",
"We performed basic pre-processing of transcripts in each dataset by which we removed speech artifact descriptions and converted non-ASCII characters to plain text.",
"We also excluded portions of transcripts that represented speech that did not belong to the participant.",
"We evaluated models for classification performance using the standard ADDReSS train/test splits.",
"We then performed cross-validation of GPT-D models to assess the stability of the best-performing configurations across folds.",
"For generalization performance, we evaluated how well models trained on one corpus performed on others.",
"We also assessed differences in text generation between GPT-2 and GPT-D, by estimating repetitiveness and lexical frequency, as well as through salience-based visualization .",
"We experimented with impairing the GPT-2 (small) model in two locations as illustrated in Figure 1 with various portions.",
"We found that impairing 50% of values in the corresponding location resulting in generally better performance, among 25%, 50%, 75% and 100% impairment.",
"The embedding layer (see (1) in Figure 1) is a 50,257 768 matrix where each row represents a token in the model's vocabulary.",
"The embedding layer was impaired by randomly masking 50% of the rows of of the embedding matrix.",
"The self-attention mechanism (denoted (2) in Figure 1) was impaired by masking the first 50% of columns in the Value matrix of the concatenated Query-Key-Value matrices.",
"We 1868 Figure 1: Impairment locations within the GPT-2 (small) model.",
"The self-attention mechanism multiplies vectors representing an input sequence by three identically-sized matrices, namely Query (Q), Key (K) and Value (V) each with dimension ( d ) of 768 768.",
"Q generates a representation of the current token which is compared with token representations derived from K, to calculate each token's influence on the contextual representation of the current one.",
"Multiplying by V generates a semantic representation of each token, which is added to the outgoing representation of the current token in accordance with this influence.",
"The attention weights are calculated by Equation 1, and the parameters of the matrices are updated during the training process.",
"The GPT-2 model's attention mechanism in each of the 12 decoder layers contains 12 attention heads that are represented as vectors of 64 parameters.",
"We impaired 50% of those parameters of V in various combinations of attention heads in each decoder layer by masking them as zeroes.",
"We only did this in V matrices, as their parameters directly determine the content of the vectors that are passed on to the subsequent feed-forward layer, while the Q and K matrices determine how this content is weighted when generating the representations to be propagated as weighted sums of vectors that have been transformed by the Value matrix.",
"We also experimented with three ways of introducing artificial impairment into the attention mechanism in single and multiple decoder layers: individual, cumulative, and combination.",
"The individual approach was to simply impair all 12 layers one at a time.",
"The cumulative approach consisted of impairing decoder layers sequentially starting with the bottom decoder layer (layer 0) and adding impairment to layers above it one at a time up to layer 11, resulting in total of 12 combinations of impairments.",
"The combination approach consisted of impairing all possible combinations of layers, one combination at a time, resulting in 4096 combinations.",
"The degraded models were subsequently used in combination with the original GPT-2 model to calculate the difference and ratio of PPLs between these two models on each input transcript.",
"Classification Performance: For the paired perplexity approach, we estimated the ratio of model PPLs ( PPL GPT-2 PPLGPT-D ) for each transcript.",
"These PPLs were averaged for participants with multiple transcripts.",
"All validation methods commenced with calculating the area under the receiver-operator characteristic (ROC) curve (AUC).",
"From this, accuracy (ACC) was determined at equal error rate (EER), a threshold where the false acceptance rate and false rejection rate from an ROC curves is equal.",
"We also calculated Pearson correlation between the ratio in perplexities of the GPT-2 and GPT-D models and the MMSE scores where available",
"(CORR).We used the original fixed single split between training and testing data provided by the creators of the dataset to compare our results to those published by others on ADReSS.",
"Cross-validation Performance: For all datasets (including ADReSS), we performed standard cross-validation by which we split each dataset into disjoint folds and first determined which combination of GPT-D attention layers results in best performance on the training portion of each fold and then tested that combination on the test portion of the fold averaging the AUC, ACC and CORR values (if available) across the folds.",
"We selected 5-fold cross-validation due to the relatively small size of the ADReSS, DB, and CCC datasets.",
"To ensure reproducibility across runs, data folds for cross-validation were extracted using the KFold method from the scikit-learn library (Pedregosa et al., 2011) with shuffling and a fixed random seed.",
"eralizability of the paired perplexity approach by evaluating its performance across datasets.",
"We first determined the best-performing pattern of impairment based on the highest AUC obtained on each dataset, and then applied the model impaired with that pattern to the remaining datasets.",
"Baseline Models: We compared our model performance on transcript classification with the previous text-only SOTA (Balagopalan et al., 2020), which was obtained with a 12-layer BERT model fine-tuned on the ADReSS training set, and evaluated on the test set.",
"To evaluate the generalization performance, we followed this work's hyperparam-eter choices and fine-tuned BERT and DistilBERT (Sanh et al., 2019) 3 , a distilled BERT base model that is compact and more efficient.",
"We fine-tuned these models on the entire ADReSS, DB and CCC datasets separately, then evaluate the three resulting models on every other set.",
"Language Generation: To prompt the GPT-2 and GPT-D models to generate text we utilized Bird et",
"al.'s synthetic Cookie Theft picture description narrative that represents a composite of narratives produced by healthy controls.",
"Table 5 (in Appendix) illustrates the text generated by GPT-2 and GPT-D in response to prompt sentences taken from the synthetic narrative.",
"Both GPT-2 and GPT-D models were induced to generate at least 20 additional tokens with a beam search (Wiseman and Rush, 2016) that keeps the top n hypotheses ( n = 5 in this case) at each time step and eventually returns the sequence of hypotheses that achieved the highest probability after reaching the end-of-sequence token.",
"Beam search also works well when the length of output is not predictable, which fits the nature of the language tasks represented by the corpora we tested.",
"However, one of the challenges of using beam search for text generation is that it tends to generate repeated words.",
"We added a penalty for generating repetitive unigrams and implemented the top-p algorithm (Welleck et al., 2019) to keep the set of potential words as small as possible while the cumulative probability of this set is greater than the specific probability p ( p = 0 . 9 in our case).",
"The penalty was applied equally to GPT-2 and GPT-D to avoid potentially biasing one of these models to produce more repetitions.",
"After the models generated five best predictions for each prompt, we chose the first non-empty pair of 3 Available on Huggingface https://huggingface.",
"as the final result.",
"Lexical frequency and repetitiveness: Previous work (Cohen and Pakhomov, 2020) suggests that neural language models are sensitive to lexical frequency.",
"We investigated whether GPT-D generates content of higher lexical frequency than the GPT-2 model.",
"To compute lexical frequency, we split each generated output into tokens with the help of the NLTK 4 .",
"We did not stem the tokens to avoid increasing lexical frequency by artificially merging different tokens with the same stem.",
"In addition to the stopwords provided by NLTK, we treated tokens with following part-of-speech tags",
"a) PRP (personal pronoun),",
"b) PRP$ (possessive pronoun),",
"c) WP$ (possessive wh-pronoun), and",
"d) EX (existential there) as stopwrods.",
"We also added the n t token and tokens starting with to the list of stopwords.",
"Log lexical frequency of each qualified generated token was calculated based on occurrence in the SUBTLEX us corpus (Brysbaert and New, 2009).",
"Tokens that do not appear in SUBTLEX us , were removed as out-of-vocabulary (OOV) items.",
"To asses the degree of repetition present in the generated text, we calculated the type-to-token ratio (TTR) as the number of word types divided by the number of word instances.",
"Salience Visualization: We used the gradient input saliency proposed in Denil et al. (2014), as implemented with the ecco 5 Python package for visualization.",
"Saliency is defined as || x i f c ( x 1: n ) x i || 2 , which is the L2 normalized back-propagated gradient with respect to",
"a) the dot product of the embedding vector of all previous input tokens ( x 1: n ), and",
"b) the model output of token x i ( f c ( x 1: n )), where c is the predicted token at time-step i .",
"A previous study (Serrano and Smith, 2019) found that raw attention weights were not interpretable for any intermediate representation of a language model.",
"Instead, Bastings and Filippova (2020) argued that saliency is the preferred method for interpretability as it takes the entire input into account and reveals the relevance of each input token to the next predicted token in the sequence.",
"To make the visualizations comparable for the two models, we repeatedly prompted both models with the same input until both models generated the same token as the prediction.",
"It is worth 4 https://www.nltk.org/ 5 https://github.com/jalammar/ecco 1870 Figure 2: Effects of artificial impairment on model perplexity in synthetic picture description narratives.",
"noting that ecco for visualization supports limited text generation arguments compared to the transformers package, which we used for language generation task.",
"Consequently, we only used the top-p algorithm currently supported by ecco for our visualizations.",
"Impairment Location: The contrast in the effects of artificial impairment on the embedding and attention layers (locations 1 and 2 in Figure 1, respectively) is illustrated in Figure 2. Impairing embeddings results in a distribution of perplexity values over the range of impairment in the synthetic narratives very similar to that of the GPT-2 model.",
"Impairing attention, however, results in a sharp decrease in PPL on the more perturbed narratives (those narratives simulating more impairment), which yields a monotonically increasing step-like function over PPL GPT-2 PPLGPT-D that lends itself well to thresholding for categorization.",
"These results were confirmed by testing on available corpora the discriminative ability of the paired perplexity approach by artificially impairing only the embedding layer, which resulted in near-random AUCs (close to 0.5 data not shown).",
"Consequently, in subsequent results we will show attention-based models only.",
"best training set performance was obtained by impairing 50% of each attention head in layers 0-5, 6, and 8-9.",
"This pattern achieved an AUC of 0.88 (ACC = 0.75, CORR = -0.55) on the test split.",
"The cumulative impairment method performed slightly better.",
"Impairing 50% of each attention head in the first 9 layers resulted in best performance on the training set, and AUC of 0.89 (ACC = 0.85, CORR = -0.64) on the test split.",
"We note that this accuracy exceeds the average result reported by Balagopalan et al. (2020), and approaches the performance of their best run.",
"Cross Validation: The results of within-set cross-validation are summarized in Table 2. Both combination and cumulative methods had small standard deviations ( 0 . 1 ) with over or near 0.7 mean AUC on all sets.",
"Estimates from the paired perplexity approach for both methods were negatively correlated with MMSE on the ADReSS (-0.52, -0.51) and DB (-0.45, -0.41) sets, respectively.",
"The best performance obtained with the individual approach resulted in AUC of 0.66 (ACC: 0.64) with impairment of layer 8 on the DB dataset; AUC of 0.70 (ACC: 0.66) with impairment of layer 8 on the ADReSS dataset; and AUC of 0.71 (ACC: 0.63) with impairment in layer 7 on CCC.",
"Generalization: The results of generalization evaluation are shown in Table 3. Both cumulative and combination methods yielded similar performance on CCC, where both AUC and ACC were 1871 Testing dataset Training method ADReSS DB CCC (Best pattern:AUC) AUC/ACC AUC/ACC AUC/ACC Cumulative Impairment Pattern ADReSS (0-8:0.80) 0.77/0.72 DB (0-4:0.82) 0.69/0.68 CCC (0-2:0.72) 0.70/0.63 0.74/0.63 Combination Impairment Pattern ADReSS(0-6,8:0.80) 0.76/0.71 DB (0-6,8:0.80) 0.76/0.71 CCC(1-3,5,7,9-11:0.79) 0.69/0.61 0.72/0.67 Fine-tuned BERT ADReSS 0.64/0.63 DB 0.67/0.6 CCC 0.71/0.66 0.7/0.65 Fine-tuned DistilBERT ADReSS 0.67/0.57 DB 0.67/0.6 CCC 0.65/0.62 0.47/0.45 Table 3: Generalizability of GPT-2/GPT-D approach compared to fine-tuning on BERT and DistilBERT.",
"close to or exceeded 0.7.",
"In contrast, fine-tuning BERT and DistilBERT resulted in near-random classification performance on the corresponding validation dataset.",
"While fine-tuning BERT on conversational discourse samples in CCC and applying it to the picture descriptions in ADReSS and DB generalized well as compared to the paired perplexity approach, it did not generalize in the opposite direction when BERT was fine-tuned on ADReSS and DB picture descriptions and applied to conversations in CCC.",
"Language Generation: Table 4 reports mean lexical frequency estimates for words contained in the text generated by GPT-2 and GPT-D models.",
"The GPT-D model was induced by using the best-performing patterns of impaired layers determined from cumulative and combination methods for pattern selection on the available datasets.",
"Both GPT-2 and GPT-D generate 1 OOV token on average for each prompt.",
"In general, the resulting GPT-D model generated text consisting of words with higher lexical frequency than words in the text generated by the GPT-2 model across all datasets and methods, even though some of the differences failed to reach statistical significance.",
"All GPT-D models also generated more repetitions, evident as lower TTRs .",
"The weight of the contribution of each token is shown as a percentage that can be interpreted as the amount of contribution the model derives from it.",
"We observe in Figure 3 that impairing GPT-2's attention heads leads to the redistribution of the model's contribution to the words in the prompt when making the prediction of the next word.",
"For the GPT-2 model, tokens ' boy ', ' climbed ', Figure 3: An informal illustration of differences in contributions of input tokens to generating the word The, for GPT-2 (top) and GPT-D (bottom) models.",
"and ' cookies ' contributed more when predicting ' the '.",
"However, for the GPT-D model those word tokens did not clearly stand out as substantially contributing to the prediction in either of these examples.",
"Furthermore, tokens corresponding to function words (e.g., ' on ', ' a ' and ' from ') contributed little to the predictions generated by the GPT-2 model; however, these tokens contributed more for predictions generated by GPT-D model.",
"As evident in the examples in Figure 3, the salience of the words in the prompt is much more diffuse when the GPT-D model is making the prediction i.e. the model is uncertain with respect to what it should consider as important.",
"In contrast, for the GPT-2 model the key elements of the Cookie Theft scenario ' cookie ', ' three-legged stool ', ' boy ' stand out as highly salient.",
"These observations, although informal and qualitative, indicate that the impairment of the self-attention mechanism in GPT-2 results in a behavior resembling that observed in all stages of AD dementia as a result of impaired selective attention that in turn reduces one's ability to encode new information in episodic memory (see Perry et al. (2000) for a comprehensive review).",
"Our key findings are as follows.",
"First, we show that the paired perplexity approach using the ratio between the GPT-2 and GPT-D model perplexities approaches SOTA performance on ADReSS, leveraging GPT-2's extensive pre-training without requiring a comparably large data set from dementia patients.",
"Second, this approach generalizes from Cookie Theft picture description data to casual conversation, in contrast to BERT/DistilBERT fine-tuning.",
"Finally, artificial impairment of GPT-2's self-attention induces linguistic anomalies observed in dementia.",
"The best-performing cumulative pattern for the ADReSS training set resulted in accuracy of 0.85 in the test set, exceeding the best BERT results reported on this test set ( x ACC = 0.833 (Balagopalan et al., 2020)).",
"However, our approach contrasts with approaches that train or fine-tune language models using a specific dataset, and test on held-out components of the same set.",
"While our approach does require some labeled data through which to determine the best-performing layers to impair, our results demonstrate generalization to other datasets and populations as well as a different type of discourse spontaneous conversations.",
"GPT-D is reliably less perplexed by dementia-related linguistic anomalies across all of these sets than GPT-2.",
"This facilitates broader application of the paired perplexity approach than was previously possible, and suggests our approach is more sensitive to task-agnostic dementia-related linguistic anomalies than BERT/DistilBERT fine-tuning.",
"In contrast to impairing embeddings or individual attention layers, the maximum discriminating effect was achieved by impairing multiple attention layers (either combinatorially or cumulatively), which is consistent with prior observations that Transformer layers encode different syntactic and semantic linguistic features in multiple lower and middle layers (Jo and Myaeng, 2020; Jawahar et al., 2019; Lin et al., 2019).",
"Thus, impairing a single layer may not be enough to achieve the full effect.",
"Since both syntactic and semantic context is encoded in the Transformer decoder layers we expected to find different patterns of artificial impairment to be most effective in vastly different types of discourse represented by the DB and CCC datasets; however, we were surprised to find that only impairing the self-attention layers had the desired effect on the results in contrast to impairing embeddings or feed-forward network components.",
"The results presented in Table 4 also align with previously published findings that both neural networks trained on language produced by participants with dementia, and the lexical-retrieval processes of patients affected by this condition are sensitive to lexical frequency effects (Cohen and Pakhomov, 2020; Pekkala et al., 2013).",
"Our results suggest that impairing the self-attention mechanism in a Transformer artificial neural network may induce similar sensitivity to lexical frequency.",
"By impairing the attention heads in a GPT-2, we observe significant differences in lexical frequency and TTR characteristics of the text generated by the GPT-2 and GPT-D, with the change in TTR ratio indicating that GPT-D has a greater tendency to produce repeated words when generating text, just as participants with dementia are more prone to repeat words in picture description tasks (Hier et al., 1985).",
"In other previous work on the DB and the ADReSS datasets, the authors attempted to predict individual MMSE scores in addition to discriminating between cases and controls (Yancheva et al., 2015; Luz et al., 2020).",
"We could not perform a comparable analysis in the current study on account 1873 of focusing on using the paired perplexity measure as a single threshold to distinguish between cases and controls, While predicting MMSE is not the main focus of our study, we did find negative correlations between the paired perplexity measures and the MMSE scores, providing additional evidence that artificially impairing the attention mechanism of the GPT-2 model simulates cognitive effects of dementia detectable in language.",
"Our findings are also consistent with previous work indicating that Transformer models are able to predict neural responses during language comprehension and generalize well across various datasets and brain imaging modalities (Schrimpf et al., 2021).",
"Thus, our work is another step in the direction of achieving better understanding of the relationship between the inner workings of generative artificial neural language models and the cognitive processes underlying human language production.",
"Impairing how contextual information is stored in the self-attention mechanism in silico creates similar deficits to what is observed in dementia.",
"The next important step is perhaps to investigate how contextual information encoding is impaired in vivo in AD dementia.",
"The encouraging results on the CCC dataset point to the possibility of developing a tool for analysing patients' daily spontaneous conversations in a task-agnostic fashion.",
"Generalizable across tasks and domains and easy-to-interpret language-based instruments for detecting anomalies potentially consistent with dementia can be most useful in clinical situations where the patient or family member raise a concern about unexplained changes in cognition.",
"A simple to administer (or self-administer) language-based instrument for objective confirmatory testing (either at a single point in time or over a period of time) would be helpful to a clinician working in an overburdened and time-constrained clinical environment (e.g., primary care) to be able to validate or refute those cognitive concerns with added confidence.",
"It is critical, however, that the instrument used for confirmatory testing makes as few assumptions as possible regarding the person's linguistic background or communicative style, or the type of discourse used for analysis (i.e., picture description vs. conversation).",
"The work presented here has several limitations.",
"The sizes of the datasets are small compared to those typically encountered in open domain NLP tasks.",
"In this paper, we did not focus on mild cognitive impairment but acknowledge that it is an important and active area of research that has shown promise in detecting early signs of dementia (Roark et al., 2011; Satt et al., 2014; Calz et al., 2021).",
"Also, all datasets are in American English, which could limit the applicability of our models to dementia-related differences in other forms of English, and would certainly limit their applicability to other languages.",
"In addition, behavioral characteristics including language anomalies can arise as a result of deficits in multiple brain mechanisms and, while they can contribute to a diagnosis of a neurodegenrative condition as a screening tool, they cannot be used in isolation to establish a definitive diagnosis.",
"While GPT-D resembles language behaviors commonly observed in dementia patients, GPT-2 and GPT-D should not be considered as accurate and comprehensive representations of human language and cognition, or as models that capture features specific to various forms of neurodegeneration.",
"Lastly, we also notice that the pre-trained LM is heavily gender-biased, a problem that we hope ongoing efforts to improve the fairness of AI (e.g. (Sheng et al., 2020)) will address over time.",
"We developed a novel approach to automated detection of linguistic anomalies in AD, involving deliberately degrading a pre-trained Transformer, with SOTA performance on the ADReSS test set, and generalization to language from conversational interviews.",
"This, and the detection of dementia-related linguistic characteristics in text generated by GPT-D, suggests that our method is sensitive to task-agnostic linguistic anomalies in dementia, broadening the scope of application of methods for automated detection of dementia beyond language from standardized cognitive tasks.",
"This research was supported by grants from the National Institute on Aging (AG069792) and Administrative Supplement (LM011563-S1) from the National Library of Medicine",
"We followed the Responsible NLP Research checklist and ACL code of ethics for this work."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"result",
"method",
"method",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"result",
"objective",
"method",
"other",
"method"
] |
[
"This paper presents the first large-scale meta-evaluation of machine translation (MT).",
"We annotated MT evaluations conducted in 769 research papers published from 2010 to 2020.",
"Our study shows that practices for automatic MT evaluation have dramatically changed during the past decade and follow concerning trends.",
"An increasing number of MT evaluations exclusively rely on differences between BLEU scores to draw conclusions, without performing any kind of statistical significance testing nor human evaluation, while at least 108 metrics claiming to be better than BLEU have been proposed.",
"MT evaluations in recent papers tend to copy and compare automatic metric scores from previous work to claim the superiority of a method or an algorithm without confirming neither exactly the same training, validating, and testing data have been used nor the metric scores are comparable.",
"Furthermore, tools for reporting standardized metric scores are still far from being widely adopted by the MT community.",
"After showing how the accumulation of these pitfalls leads to dubious evaluation, we propose a guideline to encourage better automatic MT evaluation along with a simple meta-evaluation scoring method to assess its credibility.",
"New research publications in machine translation (MT) regularly introduce new methods and algorithms to improve the translation quality of MT systems.",
"In the literature, translation quality is usually evaluated with automatic metrics such as BLEU (Papineni et al., 2002) and, more rarely, by humans.",
"To assess whether an MT system performs better than another MT system, their scores given by an automatic metric are directly compared.",
"While such comparisons between MT systems are exhibited in the large majority of MT papers, there are no well-defined guideline nor clear prerequisites under which a comparison between MT systems is considered valid.",
"Consequently, we assume that evaluation in MT is conducted with different degrees of thoroughness across papers and that evaluation practices have evolved over the years.",
"What could be considered, by the research community, as a good evaluation methodology ten years ago may not be considered good today, and vice versa.",
"This evolution has not been studied and whether MT evaluation has become better, or worse, is debatable.",
"On the other hand, several requirements for MT evaluation have been well-identified.",
"For instance, the limitations of BLEU are well-known (Callison-Burch et al., 2006; Reiter, 2018; Mathur et al., 2020) and the necessity to report automatic metric scores through standardized tools, such as SacreBLEU, has been recognized (Post, 2018).",
"Moreover, a trustworthy evaluation may adopt statistical significance testing (Koehn, 2004) and strong baselines (Denkowski and Neubig, 2017).",
"However, to what extent these requirements have been met in MT publications is unclear.",
"In this paper, we propose the first large-scale meta-evaluation of MT in which we manually annotated 769 research papers published from 2010 to 2020.",
"Our study shows that evaluation in MT has dramatically changed since 2010.",
"An increasing number of publications exclusively rely on BLEU scores to draw their conclusions.",
"The large majority of publications do not perform statistical significance testing, especially since 2016.",
"Moreover, an increasing number of papers copy and compare BLEU scores published by previous work while tools to report standardized metric scores are still far from being extensively adopted by the MT community.",
"We also show that compared systems are often trained, validated, or even evaluated, on data that are not exactly the same.",
"After demonstrating how the accumulation of these pitfalls leads to dubious evaluation, we propose a general guideline for automatic evaluation in MT and a simple scoring method to meta-evaluate an MT paper.",
"We believe that the adoption of these tools by authors or reviewers have the potential to reverse the concerning trends observed in this meta-evaluation.",
"We manually annotated the MT evaluation in research papers published from 2010 to 2020 at *ACL conferences.",
"1 To identify MT papers, we searched the ACL Anthology website 2 for the terms MT or translation in their titles 3 and analyzed among them the 769 papers that make comparisons of translation quality between at least two MT systems.",
"For each year between 2010 and 2020, we respectively annotated the following numbers of papers: 53, 40, 59, 80, 67, 45, 51, 62, 94, 115, and 103.",
"We annotated each paper as follows: A1.",
"All the automatic metrics used to evaluate the translation quality of MT systems.",
"We did not list variants of the same metric: e.g., chrF3 and chrF++ are labeled chrF (Popovic, 2015).",
"Moreover, we did not consider metrics which only target specific aspects of the translation quality, such as pronoun translation and rare word translation.",
"A2.",
"Whether a human evaluation of the translation quality has been conducted: yes or no.",
"If the human evaluation only targets specific types of errors and did not evaluate the translation quality of the entire text, we answered no. 4 A3.",
"Whether any kind of statistical significance testing of the difference between automatic metric scores has been performed: yes or no.",
"Potentially, some papers did perform significance testing without mentioning it, but due to the lack of evidences such papers have been annotated with no for this question.",
"1 We considered only *ACL main conferences, namely ACL, NAACL, EACL, EMNLP, CoNLL, and AACL, as they are the primary venues for publishing MT papers.",
"2 www.aclweb.org/anthology/ 3 There are potentially MT papers falling outside these search criteria but we considered the 769 papers we obtained to be representative enough for the purpose of this study.",
"4 Note that we only check here whether the automatic evaluation is supported by a human evaluation.",
"Previous work already studied pitfalls in human evaluation (L aubli et al., 2020).",
"A4.",
"Whether it makes comparisons with automatic metric scores directly copied from previous work to support its conclusion: yes or no.",
"Most papers copying scores (mostly BLEU) clearly mention it.",
"If there is no evidence that the scores have been copied, we annotated these papers with no for this question.",
"A5.",
"Whether SacreBLEU has been used: yes or no.",
"If there is no mention or reference to SacreBLEU, we assume that it has not been used.",
"Note that yes does not mean that the paper used SacreBLEU for all the MT systems evaluated.",
"A6.",
"If previous work has not been reproduced but copied, whether it has been confirmed that all the compared MT systems used exactly the same pre-processed training, validating, and testing data: yes or no.",
"Except for A6, the annotation was straightforward since most papers present a dedicated section for experimental settings with most of the information we searched for.",
"Answering A6 required to check the data exploited in the previous work used for comparison.",
"Note that answering yes to the questions from A2 to A6 may only be true for at least one of the comparisons between MT systems, while we did not evaluate how well it applies.",
"For instance, answering yes to A5 only means that at least one of the systems has been evaluated with SacreBLEU but not that the SacreBLEU signature has been reported nor that SacreBLEU scores have been correctly compared with other BLEU scores also computed with SacreBLEU.",
"Our annotations are available as a supplemental material of this paper.",
"To keep track of the evolution of MT evaluation, we will periodically update the annotations and will make it available online.",
"5 3 Pitfalls and Concerning Trends This section discusses the four main pitfalls identified in our meta-evaluation of MT: the exclusive use of BLEU, the absence of statistical significance testing, the comparison of incomparable results from previous work, and the reliance on comparison between MT systems that do not exploit exactly the same data.",
"We report on how often they affected MT papers and recent trends.",
"Based on previous 5 The up-to-date version can be found here: github.",
"work and supporting experiments, we show how each of these problems and their accumulation lead to scientifically dubious MT evaluation.",
"Automatic metrics for evaluating translation quality have numerous advantages over a human evaluation.",
"They are very fast and virtually free to run provided that a reference translation is already available.",
"Their scores are also reproducible.",
"As such, automatic metrics remained at the center of MT evaluation for the past two decades.",
"New metrics that better correlate with human judgments are regularly introduced.",
"We propose in this section to analyze the use of automatic metrics in MT research, relying on our annotations for A1 and A2.",
"This is probably the most expected finding in our study: the overwhelming majority of MT publications uses BLEU.",
"Precisely, 98.8% of the annotated papers report on BLEU scores.",
"As shown in Figure 1, the ratio of papers using BLEU remained stable over the years.",
"On the other hand, BLEU scores used to be more often supported by scores from other metrics, such as TER (Snover et al., 2006) and METEOR (Banerjee and Lavie, 2005), than they are now.",
"The large majority of papers, 74.3%, only used BLEU scores to evaluate MT systems, i.e., without the support of any other metrics nor human evaluation.",
"It increases to 82.1% if we consider only the years 2019 and 2020.",
"This tendency looks surprising considering that no less than 108 new metrics 6 have been proposed in the last decade.",
"They have been shown to better correlate with human judgments than BLEU.",
"Some are even easier to use and more reproducible 6 We did not count variants of the same metric and excluded metrics only proposed for an evaluation at segment level.",
"by being tokenization agnostic, such as chrF.",
"We counted 29 metrics proposed at *ACL conferences since 2010 while the remaining metrics were proposed at the WMT Metrics Shared Tasks.",
"89% of these 108 new metrics have never been used in an *ACL publication on MT (except in the papers proposing the metrics).",
"Among these metrics, only RIBES (Isozaki et al., 2010) and chrF have been used in more than two MT research paper.",
"When properly used, BLEU is a valid metric for evaluating translation quality of MT systems (Callison-Burch et al., 2006; Reiter, 2018).",
"Nonetheless, we argue that better metrics proposed by the research community should be used to improve MT evaluation.",
"To illustrate how wrong an evaluation can become by only relying on one metric, we computed with BLEU and chrF scores 7 of WMT20 submissions to the news translation shared task 8 (Barrault et al., 2020) using SacreBLEU and show rankings given by both metrics in Table",
"1. Results show that BLEU and chrF produce two different rankings.",
"For instance, for the Ja En task, NiuTrans system is the best according to BLEU by being 1.1 points better than the Tohoku-AIP-NTT system ranked second.",
"In most MT papers, such a difference in BLEU points would be considered as a significant evidence of the superiority of an MT system and as an improvement in translation quality.",
"Relying only on these BLEU scores without any statistical significance testing nor human evaluation would thus lead to the conclusion that NiuTrans system is the best.",
"However, according to another metric that better correlates with human 7 SacreBLEU (short) signatures: chrF2+l.",
"{ ja-en,zh-en } +n.6+s.false+t.wmt20+v.1.5.0 and BLEU+c.mixed+l.",
"{ ja-en,zh-en } +#.1+s.exp+t.wmt20+tok.13a+v.1.5.0 8 data.statmt.org/wmt20/ translation-task/ Rank Japanse-to-English (Ja En) Chinese-to-English (Zh En) BLEU System chrF System BLEU System chrF System 1 26.6 NiuTrans 0.536 Tohoku-AIP-NTT 36.9 WeChat AI 0.653 Volctrans 2 25.5 Tohoku-AIP-NTT 0.535 NiuTrans 36.8 Tencent Translation 0.648 (cid:7) Tencent Translation 3 24.8 (cid:7) OPPO 0.523 (cid:7) OPPO 36.6 DiDi NLP 0.645 (cid:7) DiDi NLP 4 22.8 (cid:7) NICT Kyoto 0.507 (cid:7) Online-A 36.6 Volctrans 0.644 (cid:7) DeepMind 5 22.2 (cid:7) eTranslation 0.504 (cid:7) Online-B 35.9 (cid:7) THUNLP 0.643 (cid:7) THUNLP Table 1: Rankings of WMT20 top 5 submissions for the News Translation Shared Tasks according to BLEU and chrF scores.",
"judgment, i.e., chrF, this does not hold: Tohoku-AIP-NTT system is better.",
"Similar observations are made for the Zh En task.",
"9 These observations have often been made by the MT community, for instance at WMT shared tasks, but nonetheless rarely seen in research papers.",
"We assume that MT researchers largely ignore new metrics in their research papers for the sake of some comparability with previous work or simply because differences between BLEU scores may seem more meaningful or easier to interpret than differences between scores of a rarely used metric.",
"Most papers even qualify differences between BLEU scores as small, large, or significant (not necessarily statistically), implying that there is a scientific consensus on the meaning of differences between BLEU scores.",
"As we show in the following sections, all these considerations are illusory.",
"Moreover, BLEU may also be directly requested by reviewers, or even worse, other metrics may be requested to be dropped.",
"10 We believe that the exclusive reliance on BLEU can be ended and the use of better metrics should be encouraged, in addition to or in lieu of BLEU, by the adoption of a guideline for automatic MT evaluation (see Section 4).",
"Statistical significance testing is a standard methodology designed to ensure that experimental results are not coincidental.",
"In MT, statistical significance testing has been used on automatic metric scores and more particularly to assess whether a particular difference of metric scores between two MT 9 For both Ja En and Zh En tasks, systems ranked first by chrF were also ranked first by the human evaluation.",
"10 Examples of such requests or related comments by reviewers can be found in the ACL 2017 review corpus ( github.com/allenai/PeerRead ), e.g., in the review ID 369 we read: I am also rather suspicious of the fact that the authors present only METEOR results and no BLEU. 0 10 20 30 40 50 60 70 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 % pub li c a t i on s Figure 2: Percentage of papers testing statistical significance of differences between metric scores.",
"systems is not coincidental.",
"Two methods are prevalent in MT: the paired bootstrap test (Koehn, 2004) and the approximate randomization test (Riezler and Maxwell, 2005), for instance respectively implemented in Moses 11 and MultEval.",
"12 Dror et al. (2018) report that while the natural language processing (NLP) community assigns a great value to experimental results, statistical significance testing is rarely used.",
"We verified if this applies to MT evaluations based on our annotations for A3.",
"Figure 2 shows the percentage of papers that performed statistical significance testing.",
"We found out that the observations by Dror et al. (2018) apply to MT since never more than 65.0% of the publications in a year (2011) performed statistical significance testing.",
"Furthermore, our meta-evaluation shows a sharp decrease of its use since 2016.",
"Most papers did not check whether their results are not coincidental but drew conclusions from them.",
"MT papers mainly relied on the amplitude of the differences between metric scores to state whether they are significant or not.",
"This was also observed by Dror et al. (2018) for NLP in general.",
"For illustration, we also performed statistical significance testing 13 with BLEU and chrF scores 11 github.com/moses-smt/mosesdecoder 12 github.com/jhclark/multeval 13 For all the statistical significance testing performed in this paper, we used the paired bootstrap test with 1,000 samples and 1,000 iterations.",
"on the WMT20 submissions in Table",
"1. For Ja En, NiuTrans system is significantly better in BLEU than Tohoku-AIP-NTT system.",
"In contrast, they are not significantly different in chrF.",
"Using only BLEU, we would conclude that NiuTrans system is significantly the best.",
"This is not confirmed by chrF hence we need to report on more than one metric score to conduct a credible evaluation, even when performing statistical significance testing.",
"Furthermore, to show that the significance of a difference between metric scores is independent from its amplitude, we performed additional experiments by modifying only one sentence, replacing it with an empty line or by the repetition of the same token many times, 14 from Tohoku-AIP-NTT and Volctrans systems' outputs.",
"Results in BLEU and chrF are reported in Table",
"2. We observe that a difference in only one sentence can lead to a difference in BLEU of 6.8 points (Ja En, Custom 2).",
"15 Nonetheless, our statistical significance tests did not find any system significantly better than the others.",
"While the importance of statistical significance testing is regularly debated by the scientific community (Wasserstein et al., 2019), it remains one of the most cost-effective tools to check how trustworthy a particular difference between two metric scores is. 16 14 This could be considered as a simulation of potential defects from an MT framework or model, e.g., when translating extremely long sequences.",
"15 For Custom 2, BLEU greatly penalized the increase of the number of tokens in the output.",
"This is indicated by the length ratio reported by SacreBLEU but rarely shown in MT papers.",
"16 Wasserstein et al. (2019) give several recommendations for a better use of statistical significance testing.",
"An MT paper may compare the automatic metric scores of proposed MT systems with the scores reported in previous work.",
"This practice has the advantage to save the time and cost of reproducing competing methods.",
"Based on our annotations for A4, we counted how often papers copied the scores from previous work to compare them with their own scores.",
"As pointed out by Figure 3, copying scores (mostly BLEU) from previous work was rarely done before 2015.",
"In 2019 and 2020, nearly 40% of the papers reported on comparisons with scores from other papers.",
"While many papers copied and compared metric scores across papers, it is often unclear whether they are actually comparable.",
"As demonstrated by Post (2018), BLEU, as for most metrics, is not a single metric.",
"It requires several parameters and is dependent on the pre-processing of the MT output and reference translation used for scoring.",
"In fact, Post (2018) pointed out that most papers do not provide enough information to enable the comparability of their scores with other work.",
"Post (2018) proposed a tool, SacreBLEU, to standardize metrics 17 in order to guarantee this comparability, provided that all the scores compared are computed with SacreBLEU.",
"18 This is the only tool of this kind used by the papers we annotated.",
"However, based on our annotations for A5, Figure 3 shows that SacreBLEU is still far from widely adopted by the MT community, even though it is gradually getting more popular since its emergence in 2018.",
"Moreover, papers that copy BLEU scores do not always use SacreBLEU, even in 2020.",
"17 Currently BLEU, chrF, and TER.",
"18 SacreBLEU also generates a signature to further ensure this comparability: two scores computed through SacreBLEU with an identical signature are comparable.",
"19 For all our processing, we used Moses (code version mmt-Processing Tohoku-AIP-NTT (Ja En) Volctrans (Zh En) BLEU chrF BLEU chrF original 25.5 0.536 36.6 0.653 fully lowercased 26.9 0.549 38.2 0.664 norm.",
"adopted by MT researchers, applied to some MT system outputs and reference translations of the WMT20 news translation shared tasks.",
"Our results are presented in Table",
"3. The first row presents original SacreBLEU scores, i.e., detokenized.",
"Second and third rows respectively show the impact of low-ercasing and punctuation normalization on metric scores.",
"Scores are increased.",
"Last three rows show the results on tokenized MT outputs.",
"Applying both punctuation normalization and aggressive tokenization with Moses scripts leads to BLEU scores several points higher than the original SacreBLEU scores.",
"Obviously, none of the scores in different rows are comparable.",
"Nonetheless, MT papers still often report on tokenized BLEU scores compared with tokenized, or even detokenized, BLEU scores from other papers without exactly knowing how tokenization has been performed.",
"Tokenized BLEU scores reported in MT papers are often computed using the multi-bleu script of Moses even though it displays the following warning: 20 The scores depend on your tokenizer, which is unlikely to be reproducible from your paper or consistent across research groups. Even though the work of Post (2018) is a well-acclaimed initiative towards better MT evaluation, we believe that it can only be a patch for questionable evaluation practices.",
"A comparison with a copied score is de facto associated with the absence of statistical significance testing since the MT output used to compute the copied score is not available.",
"We also observed several misuses of SacreBLEU, such as the comparison of scores obtained by SacreBLEU against scores obtained by mvp-v0.12.1-2851-gc054501) scripts.",
"other tools.",
"SacreBLEU signatures are also often not reported despite being required to ensure the comparability between SacreBLEU scores.",
"Ultimately, comparisons with copied scores must be avoided.",
"As we will show in the next section, copying scores also calls for more pitfalls.",
"In MT, datasets are mostly monolingual or parallel texts used in three different steps of an experiment: training a translation model, tuning/validating the model, and evaluating it.",
"Henceforth, we denote these datasets as training, validating, and testing data, respective to these three steps.",
"How these datasets are pre-processed strongly influences translation quality.",
"MT papers regularly propose new methods or algorithms that aim at better exploiting training and/or validating data.",
"Following the scientific method, we can then define these new methods/algorithms and datasets as independent variables of an MT experiment while the translation quality, approximated by metric scores, would be our dependent variable that we want to measure.",
"Testing the impact of a new algorithm on our dependent variable requires to keep all other independent variables, such as datasets, unchanged.",
"In other words, changing datasets (even slightly) and methods/algorithms in the same experiment cannot answer whether the change in metric scores is due to the datasets, methods/algorithms, or the combination of both.",
"Relying on our annotation for A6, we examined how often MT papers compared MT systems for which the datasets and/or their pre-processing 21 described in the papers are not exactly identical.",
"Note that we only performed this comparison for papers that copied and compared metric scores from previous work.",
"Here, we also excluded comparisons between systems performed to specifically evaluate the impact of new datasets, pre-processing methods, and human intervention or feedback (e.g., post-editing and interactive MT).",
"If we had any doubt whether a paper belongs or not to this category, we excluded it.",
"Consequently, our estimation can be considered as the lower bound.",
"To illustrate the impact of modifications of these datasets on metric scores, we conducted experiments using the training, validating, and testing data of the WMT20 news translation tasks.",
"We 21 For pre-processing, we checked, for instance, tokenization (framework and parameters), casing, subword segmentations (method and vocabulary size), data filtering, etc.",
"trained neural MT (NMT) systems with Marian 22 (Junczys-Dowmunt et al., 2018), using the hyper-parameters in Table 4, on all the provided parallel data (all configurations) and removed sentence pairs based on their length (Max Len.).",
"This simple filtering step is usually applied for a more efficient training or due to some limits of the framework, method, or algorithm used.",
"Yet, it is so common as a pre-processing step that it is rarely described in papers.",
"As shown in Table 5, we observed that BLEU scores vary by several points depending on the maximum length used for filtering.",
"Another common pre-processing step is the truecasing of the datasets.",
"While it is rather commonly performed by participants in the WMT translation shared tasks, how casing is handled is rarely mentioned in research papers.",
"In our experiments, applying this step changed BLEU scores by more than 0.5 points.",
"Further experiments applying language identification filtering or removing one corpus from the training data also lead to variations in metric scores.",
"The best configurations according to metric scores do not use truecasing and has a maximum sentence length set at 120 (second row).",
"A comparison of this configuration with the third row, which uses truecasing and a different maximum sentence length, cannot lead to the conclusion that truecasing decreases translation quality, since we changed two variables at the same time.",
"our meta-evaluation an increasing amount of MT papers (38.5% for the 20192020 period) drawing conclusions of the superiority of a particular method or algorithm while also using different data.",
"While their conclusions may be valid, the evaluation conducted in these papers is scientifically flawed and cannot support the conclusions.",
"We assume that this is mainly due to a rather common lack of detailed experimental settings.",
"Consequently, it makes a specific experiment often impossible to be reproduced identically.",
"In most cases, ensuring the comparability with the published scores of an MT system is only possible by replicating the MT system by ourselves.",
"There have been initiatives towards the release of preprocessed datasets for MT, for instance by the WMT conference that released pre-processed data for WMT19.",
"23 Nonetheless, we only noticed a very small number of papers exploiting pre-processed training/validating/testing data publicly released by previous work.",
"24 We believe that the current trend should be reversed.",
"Reviewers should also request more rigor to the authors by checking the configurations of the compared MT systems to make sure that their comparison can, indeed, answer whether the proposed method/algorithm improves MT inde-23 This effort has not been conducted for WMT20.",
"24 For instance, Ma et al. (2020) and Kang et al. (2020) used exactly the same pre-processed data for research on document-level NMT released by Maruf et al. (2019).",
"The MT community is well-aware of all the pitfalls described in Section",
"3. They have all been described by previous work.",
"Nonetheless, our meta-evaluation shows that most MT publications are affected by at least one of these pitfalls.",
"More puzzling are the trends we observed.",
"Figure 5 shows that an increasing number of publications accumulate questionable evaluation practices.",
"In the period 20192020, 17.4% (38 papers) of the annotated papers exclusively relied for their evaluation on differences between BLEU scores of MT systems, of which at least some have been copied from different papers, without using SacreBLEU nor statistical significance testing, while exploiting different datasets.",
"While these pitfalls are known and relatively easy to avoid, they are increasingly ignored and accumulated.",
"We believe that a clear, simple, and well-promoted guideline must be defined for automatic MT evaluation.",
"Such a guideline would be useful only if it is adopted by authors and its application is checked by reviewers.",
"For the latter, we also propose a simple scoring method for the meta-evaluation of MT. Note that the proposed guideline and scoring method only cover the aspects discussed in this paper.",
"Thus, their strict adherence can only guarantee a better evaluation but not a flawless evaluation.",
"This guideline and the scoring method that follows are proposed for MT papers that rely on automatic metric scores for evaluating translation quality.",
"1. An MT evaluation may not exclusively rely on BLEU.",
"Other automatic metrics that better correlate with human judgments, or a human evaluation, may be used in addition or in lieu of BLEU.",
"2. Statistical significance testing may be performed on automatic metric scores to ensure that the difference between two scores, whatever its amplitude, is not coincidental.",
"3. Automatic metric scores copied from previous work may not be compared.",
"If inevitable, copied scores may only be compared with scores computed in exactly the same way, through tools guaranteeing this comparability, while providing all the necessary information to reproduce them.",
"4. Comparisons between MT systems through their metric scores may be performed to demonstrate the superiority of a method or an algorithm only if the systems have been trained, validated, and tested with exactly the same pre-processed data, unless the proposed method or algorithm is indeed dependent on a particular dataset or pre-processing.",
"The purpose of the following scoring method is to assess the trustworthiness of an automatic evaluation performed in an MT paper.",
"Ultimately, it can be used for authors' self-assessment or by MT program committees to identify trustworthy papers.",
"Each yes answer to the following questions brings 1 point to the paper for a maximum of 4 points.",
"1. Is a metric that better correlates with human judgment than BLEU used or is a human evaluation performed?",
"2. Is statistical significance testing performed?",
"3. Are the automatic metric scores computed for the paper and not copied from other work?",
"If copied, are all the copied and compared scores computed through tools that guarantee their comparability (e.g., SacreBLEU)?",
"4. If comparisons between MT systems are performed to demonstrate the superiority of a method or an algorithm that is independent from the datasets exploited and their preprocessing, are all the compared MT systems exploiting exactly the same pre-processed data for training, validating, and testing?",
"(if not applicable, give 1 point by default)",
"We scored all the annotated papers, and report on the average score and score distribution for each year in Figure 6.",
"Based on this meta-evaluation, MT evaluation worsens.",
"Our meta-evaluation identified pitfalls in the MT evaluation in most of the annotated papers.",
"The accumulation of these pitfalls and the concerning trends we observed lead us to propose a guideline for automatic MT evaluation.",
"We hope this guideline, or a similar one, will be adopted by the MT community to enhance the scientific credibility of MT research.",
"This work also has its limitations since it does not cover all the pitfalls of MT evaluation.",
"For instance, we noticed that MT papers regularly rely on the same language pairs to claim general improvements of MT. They also almost exclusively focus on translation from or into English.",
"Another, more positive observation, is that MT papers tend to use stronger baseline systems, following some of the recommendations by Denkowski and Neu-big (2017), than at the beginning of the last decade when baseline systems were mostly vanilla MT systems.",
"For future work, we plan to extend our meta-evaluation of MT to publications at conferences in other research domains, such as Machine Learning and Artificial Intelligence.",
"As a final note, we would like to encourage NLP researchers to perform a similar meta-evaluation in their respective area of expertise.",
"As we showed, it can unveil pitfalls and concerning trends that can be reversed before becoming prevalent.",
"We would like to thank the reviewers for their insightful comments and suggestions.",
"This work was partly supported by JSPS KAKENHI grant numbers 20K19879 and 19H05660."
] | [
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"method",
"abstain",
"objective",
"method",
"result",
"other",
"other"
] |
[
"This paper presents the problem of conversational plotting agents that carry out plotting actions from natural language instructions.",
"To facilitate the development of such agents, we introduce CHARTDIALOGS , a new multi-turn dialog dataset, covering a popular plotting library, matplotlib .",
"The dataset contains over 15 , 000 dialog turns from 3 , 200 dialogs covering the majority of matplotlib plot types.",
"Extensive experiments show the best-performing method achieving 61 % plotting accuracy, demonstrating that the dataset presents a non-trivial challenge for future research on this task.",
"Advances in machine language understanding (Hirschberg and Manning, 2015) have sparked interest in using artificial intelligence to address difficult problems involving language.",
"In this work, we are interested in the problem of plotting via natural language instructions.",
"Plotting is a method for visualizing data and mathematical functions.",
"Plotting libraries such as matplotlib support functionality on a range of levels, from general, change the X-axis from linear to log scale, to specific, color this screen pixel red.",
"Yet, using such libraries can be difficult for novice users and time consuming even for experts.",
"This obstacle, coupled with the increasing popularity of the scientific method of gleaning information from data (Hey et al., 2009; Dhar, 2013), motivates our objective of designing natural language interfaces (NLIs) for plotting.",
"NLIs for plotting can be organized into three categories based on what the user is expected to describe: the data, the function, or the plot .",
"Describing the Data or the Function.",
"In the first category of plotting NLIs, users are expected to describe the data they would like to visualize, by posing queries such as: Show me medals for hockey and skating by country.",
"Queries may involve simple data analysis: Is there a seasonal trend for bike usage?",
"The system retrieves the relevant data, performs simple data analysis, and produces a visualization.",
"This category of NLIs has been studied in Human Computer Interaction and related areas (Gao et al., 2015; Setlur et al., 2016; Srinivasan and Stasko, 2017; Yu and Silva, 2019; Sun et al., 2010).",
"In the second category of plotting NLIs, users specify the function they would like to visualize.",
"In this category, commercial products such as wolfra-malpha.com yield results for queries such as plot the tangent to x 2 at x = 0 . 5 .",
"The system processes such queries by leveraging knowledge of functions and mathematical principles.",
"Describing the Plot.",
"In the two categories we have discussed, users only describe what data or function they would like to visualize without describing how to visualize it.",
"The system is in charge of all plotting details, which are not accessible to users.",
"We can think of a third, less explored, category of plotting NLIs, in which the user instructs the system on how they would like to manipulate a plot.",
"As an example, consider the following questions from a community question answering forum for matplotlib 1 : ( Q1 ):How does one change the font size for all elements (ticks, labels, title) on a matplotlib plot? ( Q2 ): I have a scatter plot graph . . . I would like the Y-Axis to start at the max value and go up to 0. ( Q3 ):Given a signal plot with time index ranging from 0 to 2.6(s), I want to draw vertical red lines indicating corresponding time index for the list. 1 https://stackoverflow.com/questions/tagged/matplotlib",
"For Q1 , the user's intent is to change the font size of the elements of a plot; for Q2 , to invert the Y-axis on the plot; and for Q3 , to add vertical lines to a plot.",
"All three questions seek to perform an action directly on the plot.",
"The large number of such questions online indicates that direct plot manipulation is a common technical need.",
"Crucially, expressing these intents in natural language is often faster than perusing the documentation of plotting library.",
"Therefore, there is an opportunity to automatically process such intents by mapping natural language to API calls.",
"This problem is the focus of our work.",
"Contributions.",
"The contributions of this work are as follows: 1) We identify and define the problem of conversational plotting agents .",
"2) To facilitate work on this problem, we present a large dataset, CHARTDIALOGS , that consists of written multiturn plotting dialogs.",
"An in-depth analysis of the data shows that it is linguistically diverse and compares favorably to existing datasets.",
"3) We conducted extensive experiments in the framework of goal-oriented dialog using various methods.",
"We also collected data on human performance, finding that there is a substantial gap between model and human performance, and therefore room for future work.",
"2 2 Problem Definition Our goal is to develop a conversational plotting agent that takes natural language instructions and updates the plot accordingly.",
"The agent is conversational because plots can be complex, making 2 We have released our dataset and code for experiments: https://github.com/sythello/ChartDialog it difficult to describe everything at once.",
"Users may want to fine-tune the appearance of their plot through multiple turns.",
"Goal-Oriented Dialog Problem.",
"We treat the conversational plotting agent problem as an instance of slot-based goal-oriented dialog.",
"The applicable slots are plot type specific.",
"Figure 1 illustrates example slots for some of the plot types.",
"Different plot types have different slots.",
"However, some slots are shared across plot types.",
"For example, the slot X-axis scale is relevant to the x-axis, thus it is applicable in any plot type with an x-axis, including line chart, bar plot, contour plot, etc.",
"This slot can take a value such as X-axis scale = log, as a result of a request such as change the x-axis scale from linear to log.",
"3 Illustrations of all CHARTDIALOGS plot types and their slots are provided in Appendix A. 3 Related Work Goal-oriented dialog datasets largely focus on ser-vice domains such as airlines (Hemphill et al., 1990; Seneff and Polifroni, 2000; Bennett and Rudnicky, 2002; Asri et al., 2017; Budzianowski et al., 2018; Wei et al., 2018), restaurant (Hender-son et al., 2014; Bordes et al., 2017), bus (Raux et al., 2005; Williams et al., 2013), technical support (Lowe et al., 2015), and car agents (Eric et al., 2017).",
"Recently, a multi-domain goal-oriented dataset covering restaurant, attraction, hospital, police, hotel, taxi and train domains was introduced in (Budzianowski et al., 2018).",
"Our dataset is focused 3 We wrote a simple script to take as input the plot type (as a special slot) and other slot-value pairs, to generate the actual plot image using matplotlib.",
"This script is included in the released dataset.",
"on a new domain, which is data plots.",
"Natural language interfaces to structured languages such as SQL have been explored in Databases (DB) (Li and Jagadish, 2014), Programming Languages (PL) (Yaghmazadeh et al., 2017), and NLP (Zelle and Mooney, 1996).",
"While the problem of language to SQL is different from language to plots, both problems need to deal with the difficulty of automatically interpreting natural language and mapping it to an unambiguous structured representation.",
"Closer to our work is the task of conversational image editing (Manuvinakurike et al., 2018b,a), whose aim is to enable queries like Can you please fix the glare on my dog's eyes.",
"Although both focus on image manipulation, the images and manipulations are different in the two domains.",
"Additionally, we provide structured representations from which the plot images are generated.",
"Our experiments show that such representations provide useful information for model training.",
"In contrast, the structured representation is not available in conversational image editing.",
"Furthermore, our dataset contains over 3 , 200 dialogs in comparison to the 129 dialogs for image edits.",
"Lastly, our task is different from full-fledged program synthesis, which takes natural language as input and produces computer programs in a language such as Python (Church, 1957; Solar-Lezama and Bodik, 2008).",
"Our task is simpler and more structured.",
"To facilitate data collection, we make use of structured representations which we call text plot specifications .",
"Definition 1 (Text Plot Specification, TPSpec) Let S t be the set of all relevant slots for a given plot type, t , where t takes on plot type values such as histogram, scatter, etc.",
"For each slot s i S t , let the set of values it can take be V ti .",
"A TPSpec of plot type t is given by: T P t = { ( s 1 : v 1 , s 2 : v 2 , . . . ) : s i S t ; v i V ti } Thus a TPSpec is a sequence of tokens and can be considered as a structured text representation of a plot.",
"This representation is invertible, i.e. a TPSpec can be mapped back to its corresponding slot-value pairs in a deterministic way.",
"The design of TPSpecs is similar to how structured representations are used for dialog state tracking (Kan et al., 2018).",
"We leverage TPSpecs in our data collection pipeline, which consists of two steps.",
"Step 1: Plot Generation.",
"The first step consists of generating a set of matplotlib plots.",
"Since there is a one-to-one mapping between Text Plot Specifications (TPSpecs) and plot images, we only need to generate TPSpecs.",
"Specifically, for each plot type t and all relevant slots s i S t , we design a value pool P ti V ti , from which we randomly sample slot values to generate TPSpec samples.",
"Step 2: Dialog Collection.",
"The second step involves collecting dialogs about the plots we generate in Step 1. A widely-used dialog collection scheme is the Wizard-of-Oz (WOZ) (Kelley, 1984), in which one worker plays the user and another worker plays the computer.",
"Successful dialog datasets have been collected using Wizard-of-Oz approach, including the Air Travel Information System (ATIS) corpus (Hemphill et al., 1990), and others (Budzianowski et al., 2018; Rojas-Barahona et al., 2017; Asri et al., 2017).",
"We designed Wizard-of-Oz 4 Mechanical Turk (MTurk) tasks to have a Describer worker, who plays the role of the user; and an Operator worker, who plays the role of the plotting agent 5 .",
"The Describer has access to a target plot which is the goal plot for the Operator to achieve, but it is not directly visible to the Operator; the Operator has access to an operation panel which consists of a changeable field for each slot.",
"The Operator can use this panel to execute a plot function on a server.",
"Both workers have access to the working plot which is the plot that the Operator has generated based on the Describer's requests.",
"It is initialized to a placeholder empty plot.",
"The Describer begins the conversation by writing a message in natural language, describing to the Operator a request that would take them closer to their goal of matching the working plot with the target plot.",
"The Describer could say invert the Y-axis.",
"The Operator can respond in natural language to ask clarification questions, or fill out slots in the operation panel and show the resulting plot to the Describer.",
"For example, the operator might select the slot corresponding to invert Y-axis=True and the working plot is updated for both workers to see.",
"The describer would continue 4 Our setting is slightly different from the usual Wizard-of-Oz in that users were informed that they were conversing with fellow humans.",
"5 Multi-worker MTurk tasks are implemented using ParlAI (Miller et al., 2017) DSTC2 SFX WOZ2.0 FRAMES KVRET M2M* ImageEdits CHARTDIALOGS [2014] [2014] [2017] [2017] [2017] [2018] [2018] [2019] (restaurant) (restaurant) (restaurant) (travel) (car) (movie,rest) (images) (plots) # Dialogues 1,612 1,006 600 1,369 2,425 1,500 129 3,284 Total # turns 23,354 12,396 4,472 19,986 12,732 14,796 8,890 15,754 Total # tokens 199,431 108,975 50,264 251,867 102,077 121,977 59,653 141,876 Avg.",
"by, for example, saying make the font size larger.",
"The two workers continue to have a dialog, taking turns until the working plot exactly matches the target plot.",
"Screenshots of our data collection UI are shown in Figure 7 and 8 in the Appendix.",
"If a pair of workers failed to successfully collaborate to match the target plot, the dialog is still kept in our dataset as negative examples.",
"However, in our exploratory method study in section 6, we skipped them for simplicity.",
"Mechanical Turk Cost and Statistics.",
"The dataset cost $8,244.18 to collect.",
"The average task completion time was 6 minutes.",
"In total, 419 workers engaged in this task; 338 of them completed at least 1 successful dialog.",
"Workers were provided a tutorial and had to complete a test before joining the task.",
"The collected dataset, CHARTDIALOGS , consists of 3 , 284 dialogs, 15 , 754 dialog turns and 141 , 876 tokens in total.",
"Comparison to other Datasets.",
"Table 1 compares our dataset to other goal-oriented datasets that are about a single domain, such as travel, restaurant, car, etc., on several key metrics.",
"In particular, we compare to: DSTC2 (Henderson et al., 2014), SFX (Gasic et al., 2014), WOZ (Wen et al., 2017), FRAMES (Asri et al., 2017), KVRET (Eric and Manning, 2017), M2M (Shah et al., 2018) and ImageEdits (Manuvinakurike et al., 2018b,a).",
"Table 1 shows that our corpus compares favorably to other datasets and is strong on two metrics: number of dialogs, and number of slots.",
"This is a positive indication, given the narrowness of our domain in comparison to other domains.",
"Naturalness of Utterances.",
"We took a pre-trained language model, the Generative Pre-trained Transformer (GPT-2) of OpenAI (Radford et al., 2019), to evaluate the naturalness of utterances in our dataset.",
"Although this language model is trained on Web text, which is different from our domain, it can be a good measure of language naturalness, at least for generic texts.",
"Figure 2a shows GPT-2 perplexity distribution for half of the utterances, 7 , 876 , in CHARTDIALOGS .",
"This half consists of the utterances with the lowest perplexity.",
"The second half with higher perplexity forms a long-tail distribution and is omitted for plot readability.",
"As shown in Figure 2a, the dataset contains utterances of varying degrees of naturalness, from pure natural language (please invert the Y-axis), to a line contour hist scatter pie bar stream matrix 3D 0 5 10 15 20 25 F r e q u e n c y ( % ) 3745 2695 299 395 2201 3180 2544 301 247 24% 17% 2% 3% 14% 20% 16% 2% 2% Fraction of Turns Per Plot Type",
"structured code-style language (Y-axis=inverted).",
"This is inline with our goal to have a conversational plotting agent that deals with requests with different levels of naturalness.",
"The average perplexity even on the first half is high at 399 .",
"77 .",
"The second half, not shown, has median perplexity of 3 , 776 .",
"0 , and mean perplexity of 77188 .",
"58 .",
"Figures 2b and 2c show the perplexity behavior for two utterances.",
"The figures show the average per-word surprise of a growing sentence as new words are added to the sentence.",
"For example, in Figure 2b, the perplexity for add a is low, increases for add a black, increases even more for add a black outline, and decreases for add a black outline to.",
"It is clear that high perplexity of the dataset is a result of plot-specific terms like outline' in Figure 2b and cap' in Figure 2c, arising in unexpected contexts in Web text.",
"Turns Per Plot Type.",
"Figure 3a shows the fraction of dialog turns per plot type.",
"Some plot types have more dialogs and more turns than others, which is a design choice we made in collecting the dataset.",
"Although not the subject of the current paper, we would like the plotting agent to generalize to plot types with few data points, and potentially, to plot types that were never seen before, as a challenge for few-shot or zero-shot learning methods.",
"Utterance Length.",
"Figures 3b shows that our dataset has utterances of varying lengths in terms of tokens.",
"The average number of tokens per utterance is 9 .",
"01 , which is comparable to the average among all the datasets reported in Table 1, which is 9 .",
"57 .",
"Utterance Syntactic Depth.",
"Figures 3c shows the distribution of constituency parse tree depths from the Stanford Parser.",
"The average tree depth is 4 .",
"5 .",
"Figure 4 shows two parse trees of different depths.",
"The parse tree in Figure 4a for the utterance Add a black outline to the chart has a tree depth of 4 , and reflects the nature of the average utterance.",
"On the ROOT S VP VB Add NP DT a JJ black NN outline PP TO to NP DT the NN chart .",
"other hand, the parse tree in Figure 4b for Increase the cap size of the error bar but don't touch the thickness shows a more complex utterance with a tree depth of 8 .",
"We also show the most common top-level constituent combinations in Figure 5 in the Appendix.",
"To study the feasibility of developing conversational plotting agent using CHARTDIALOGS , we assess the performance of various methods.",
"Seq2seq models employ two components: an encoder and a decoder.",
"The encoder produces hidden states of the input.",
"Attention is used to produce a weighted sum of the encoder hidden states, known as the context vector c t .",
"The decoder defines the joint probability of an output sequence y = (cid:0) y 1 , , y n y (cid:1) as: p ( y ) = T (cid:89) t =1 p ( y t | { y 1 , , y t 1 } , c t ) .",
"Input.",
"We treat each plot update as a separate datapoint.",
"For each datapoint, the input comes from three available sources:",
"i) current state as represented by the text plot specification (TPSpec),",
"ii) current state as represented by the plot image, and",
"iii) the dialog history.",
"In principle, the entire dialog history can be considered.",
"In our experiments, we consider all utterances from the last plot update to the current one from both interlocutors.",
"In other words, starting from the last plot update, the Describer's instruction and all the clarification questions and responses are concatenated and provided as the dialog history.",
"Output.",
"We formulate the model output as the update needed from the current TPSpec to the next TPSpec.",
"We denote such an update as TPSpec .",
"For example, if the current TPSpec is { (line width': thin'), (line color': black') } and the next TPSpec is { (line width': thin'), (line color': red') } , the corresponding TPSpec is { (line color': red') } .",
"As discussed below, the output module can be a sequence decoder, in which the TPSpec is predicted as a sequence; or a set of classifiers, each of which predicts the new value of a different slot.",
"[M1] S2S-PLOT+TXT.",
"The first method is a seq2seq method whose input consists of the current state as represented by both TPSpec and plot image, and the dialog history.",
"The TPSpec and dialog history are concatenated and fed to a seq2seq model.",
"For all methods involving a seq2seq model, we use a 2-layer Bi-LSTM for the text encoder and another 2-layer Bi-LSTM for the decoder.",
"To encode the plot image, we used a CNN followed by a row-wise LSTM.",
"The final representation of an image is a sequence of vectors and are concatenated with the text representations on the temporal dimension before they are fed to the decoder.",
"More details are provided in Appendix B. [M2] S2S-TXT.",
"[M3] S2S-NoState.",
"This is a seq2seq model whose input consists only of the dialog history.",
"The state in the form of current TPSpec or plot image is completely omitted.",
"The goal is to assess if the state is actually taken into account by the model.",
"[M4] S2S-NoUtterance.",
"This is a seq2seq model whose input consists only of the current state as represented by TPSpec.",
"The dialog history is completely omitted.",
"The goal is to assess if the dialog history is actually taken into account by the model.",
"[M5] MaxEnt.",
"We trained a logistic regression classifier to take as input the TPSpec and dialog history.",
"They are represented jointly as bag-of-words.",
"Classification predictions are made for each slot separately.",
"For each slot, the candidate label space is all possible labels that appeared in our dataset, along with a special label [unchanged] indicating not to change the value of this slot, i.e. using the value from current state.",
"Notice that bag-of-words features have a critical problem of ignoring word ordering.",
"For example, it cannot distinguish between red line with blue markers and blue line with red markers.",
"[M6] RNN + MLP.",
"This model is similar to MaxEnt except that features are extracted by an LSTM encoder, which considers word ordering.",
"It differs from the seq2seq models in that the prediction is made with MLP classifier heads for each slot separately, instead of an LSTM decoder for the whole output.",
"This exempts the model from the burden of generating a structured sequence; on the other hand the model is no longer equipped to learn the dependencies between different slots.",
"We use a 2-layer Bi-LSTM encoder for the input representation.",
"Each MLP consists of 2 fully-connected layers.",
"[M7] Transformer + MLP.",
"We consider another alternative where instead of an RNN, we use a transformer encoder, in particular, BERT (Devlin et al., 2019).",
"The final layer output of the special BERT token [CLS] is used as the input representation and fed to MLP classifier heads.",
"The structure of MLP classifier heads is the same as in RNN+MLP.",
"We conducted experiments for the following purposes: (P1) to evaluate the performance of the above-mentioned methods; (P2) to establish the quality of our dataset; and (P3) to establish a gold",
"human performance as the upper bound of expected model performance.",
"Train, Dev, and Test Splits.",
"We used 2,628 dialogs for training, 328 for validation and 329 for testing.",
"In terms of datapoints, there are 11,903 for training, 1,562 for validation and 1,481 for testing.",
"Token Granularity for Prediction.",
"We consider three different token granularity settings for mapping between TPSpecs and actual token sequences on both the input and output side: PAIR, SINGLE and SPLIT.",
"In the PAIR strategy, the token for the slot name and slot value are concatenated to create one single token of the form: slot name:slot value.",
"In SINGLE, each slot name and slot value is predicted independently.",
"In SPLIT, slot and value names are split into actual words.",
"For example, predicting that the slot x axis scale takes on the value log under the PAIR strategy involves one prediction, x axis scale:log.",
"Under SINGLE, this involves two predictions, x axis scale and then log.",
"Under SPLIT, the expected prediction becomes x, axis, scale, : and log.",
"6 Due to the BPE encoding used in BERT, SINGLE and PAIR inputs are tokenized to be almost identical as SPLIT, therefore we do not report their performance.",
"We evaluate performance using two metrics: Exact Match (EM) and Slot change F1 .",
"Exact Match measures how accurate the models are at updating the plots exactly as expected.",
"It is defined as the percentage of datapoints whose current TPSpec, when updated with the model-predicted TPSpec, can exactly match the gold target TPSpec.",
"Slot change F1 measures accuracy on individual slots.",
"Let S p be the set of slot-value pairs in the predicted TPSpec and S g be the set of slot-value pairs in the gold TPSpec, precision P = | S p S g | | S p | , recall R = | S p S g | | S g | and F 1 = 2 PRP + R .",
"We report Exact Match performance in Table 2, and Slot change F1 in Table 3. From the tables, it is clear that seq2seq-based models generally perform better than classification models.",
"A possible reason is that, by modeling TPSpec as a whole in the decoder, the models implicitly learned dependencies between different slots and thus improved the overall performance.",
"Also, neural classification methods including RNN+MLP and Transformer+MLP displayed poor performance, not even beating MaxEnt with bag-of-words.",
"Further, as an ablation study, the S2S-NoState and S2S-NoUtterance performed significantly worse than S2S-TXT, confirm-ing that both the current state and the user utterance are necessary to seq2seq methods in performing this task.",
"Both S2S-TXT and S2S-PLOT+TXT perform the best at the SINGLE token granularity.",
"On this granularity, there is no significant difference between their performance on exact match.",
"For slot F1, S2S-TXT even performs significantly better than S2S-PLOT+TXT, with p = 0 .",
"033 in an unequal variance T-test, which implies that for seq2seq methods adding the image modality does not add much on top of the text modality in this task.",
"Table 4 shows performance of the best performing methods, S2S-PLOT+TXT and S2S-TXT, per plot type.",
"We ran 5 experiments and reported the means and standard deviations in order to gain a better comparison between their performances.",
"We can see that, as expected for our above results, for most plot types, performance of the two methods is similar.",
"In order to further inspect the quality and difficulty of our dataset, we sampled a subset of 444 partial dialogs.",
"Each partial dialog consists of the first several turns of a dialog, and ends with a Describer utterance.",
"The corresponding Operator response (plot update) is omitted.",
"Thus, the human has to predict what the Operator (the plotting agent) will plot, given this partial dialog.",
"We created a new MTurk task, where we presented each partial dialog to 3 workers and collected their responses.",
"We calculated the agreements between the newly collected responses and the original Operator response, results shown in Table 5. The cases in which the majority of the workers (3/3 or 2/3) exactly match the original Operator, corresponding to the first two rows, happen 72.6 % of the time.",
"The cases when at least 3 out of all 4 humans (including the original Operator) agree, corresponding to row 1, 2 and 5, happen 80.6 % of the time.",
"This setting is also worth considering because the original Operator is another MTurk worker, who can also make mistakes.",
"Both of these numbers show that a large fraction of the utterances in our dataset are intelligible implying an overall good quality dataset.",
"Fleiss' Kappa among all 4 humans is 0.849; Co-hen's Kappa between the original Operator and the majority among 3 new workers is 0.889.",
"These numbers indicate a strong agreement as well.",
"The gold human performance was obtained by having one of the authors perform the same task as described in the previous subsection, on a subset",
"of 180 samples.",
"The result is a 76.8 % exact match.",
"That is, our best model is 15.5 percentage points behind gold human performance, showing there is room for models to improve on this dataset.",
"The best accuracy reported on the aforementioned conversational image editing dataset was 74% on intent classification, ignoring actual attribute values (Manuvinakurike et al., 2018a).",
"This result is not directly comparable to the best accuracy 61.3% on our dataset due to the difference in accuracy definition.",
"To our knowledge, no comparable results has been reported on the image editing dataset, and the dataset is not publicly available.",
"We inspected the output of our best-performing models in order to identify the most common",
"causes of errors.",
"Here we used S2S-TXT with SINGLE granularity as a representative; the error categories are similar for S2S-PLOT+TXT or other granularity.",
"Sometimes the Describer utterance is ambiguous and makes different actions all reasonable.",
"We spotted two kinds of ambiguities: the unspecified new slot and the value , exemplified in Table 6a and 6b respectively.",
"1) Unspecified new slot.",
"The Describer added a new component to the plot (the grid lines), which activated new slots (grid line type) whose values are unspecified.",
"Therefore, any value for these slots should be correct.",
"2) Ambiguous value.",
"The Describer asked to change the size of a component (the font), but did not specify the value.",
"As in the example, the font size was large; to make it smaller, both medium and small are correct.",
"We report some of the errors that are due to mistakes made by MTurk workers.",
"Operators can overlook part of the Describer's instruction.",
"These erroneous actions are recorded and in turn be counted as errors of models in our automatic evaluation process.",
"In addition to human errors, many cases were also due to the model itself.",
"We show examples of model errors in Table 7. 1) Multi-turn dialog history.",
"In most samples, the dialog history consists of only one utterance, the Describer's instruction.",
"As a result, when confronted with multiple utterances concatenated, the model may get confused.",
"2) Complex slot value.",
"Some slot values are relatively hard to describe in natural language, such as colormap in example 7b.",
"They can cause the models to make mistakes.",
"3) Infrequent expressions.",
"When the user expresses their request in an unusual way (in example 7c, log style for log scale), the model may not understand since it is rarely seen in the training data.",
"In this paper, we defined the problem of conversational plotting agents, which is of great practical",
"importance considering the large volume of questions online about plotting library usage.",
"We also presented a dataset, CHARTDIALOGS , to facilitate the development of such agents.",
"Our experiments have demonstrated the feasibility of seq2seq-based methods to produce working models for dataset; however, there is still a large gap between our best performing methods and human performance.",
"Future work includes methods that get closer to human performance on the dataset.",
"A practical line of future work is embedding our plotting agent in interactive environments such as Jupyter Lab.",
"This work was partially supported by a Hellman Fellowship.",
"We also appreciate constructive feedback from our anonymous reviewers."
] | [
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"result",
"objective",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"abstain",
"method",
"other",
"other"
] |
[
"Recently, many works have tried to augment the performance of Chinese named entity recognition (NER) using word lexicons.",
"As a representative, Lattice-LSTM (Zhang and Yang, 2018) has achieved new benchmark results on several public Chinese NER datasets.",
"However, Lattice-LSTM has a complex model architecture.",
"This limits its application in many industrial areas where real-time NER responses are needed.",
"In this work, we propose a simple but effective method for incorporating the word lexicon into the character representations.",
"This method avoids designing a complicated sequence modeling architecture, and for any neural NER model, it requires only subtle adjustment of the character representation layer to introduce the lexicon information.",
"Experimental studies on four benchmark Chinese NER datasets show that our method achieves an inference speed up to 6.15 times faster than those of state-of-the-art methods, along with a better performance.",
"The experimental results also show that the proposed method can be easily incorporated with pre-trained models like BERT.",
"1 1 Introduction Named Entity Recognition (NER) is concerned with the identification of named entities, such as persons, locations, and organizations, in unstructured text.",
"NER plays an important role in many downstream tasks, including knowledge base construction (Riedel et al., 2013), information retrieval (Chen et al., 2015), and question answering (Diefenbach et al., 2018).",
"In languages where words are naturally separated (e.g., English), NER has been conventionally formulated as a sequence Equal contribution.",
"1 The source code of this paper is publicly available at https://github.com/v-mipeng/ LexiconAugmentedNER .",
"labeling problem, and the state-of-the-art results have been achieved using neural-network-based models (Huang et al., 2015; Chiu and Nichols, 2016; Liu et al., 2018).",
"Compared with NER in English, Chinese NER is more difficult since sentences in Chinese are not naturally segmented.",
"Thus, a common practice for Chinese NER is to first perform word segmentation using an existing CWS system and then apply a word-level sequence labeling model to the segmented sentence (Yang et al., 2016; He and Sun, 2017b).",
"However, it is inevitable that the CWS system will incorrectly segment query sentences.",
"This will result in errors in the detection of entity boundary and the prediction of entity category in NER.",
"Therefore, some approaches resort to performing Chinese NER directly at the character level, which has been empirically proven to be effective (He and Wang, 2008; Liu et al., 2010; Li et al., 2014; Liu et al., 2019; Sui et al., 2019; Gui et al., 2019b; Ding et al., 2019).",
"A drawback of the purely character-based NER method is that the word information is not fully exploited.",
"With this consideration, Zhang and Yang, (2018) proposed Lattice-LSTM for incorporating word lexicons into the character-based NER model.",
"Moreover, rather than heuristically choosing a word for the character when it matches multiple words in the lexicon, the authors proposed to preserve all words that match the character, leaving the subsequent NER model to determine which word to apply.",
"To realize this idea, they introduced an elaborate modification to the sequence modeling layer of the LSTM-CRF model (Huang et al., 2015).",
"Experimental studies on four Chinese NER datasets have verified the effectiveness of Lattice-LSTM.",
"However, the model architecture of Lattice-LSTM is quite complicated.",
"In order to introduce lexicon information, Lattice-LSTM adds several additional edges between nonadjacent characters in the input sequence, which significantly slows its training and inference speeds.",
"In addition, it is difficult to transfer the structure of Lattice-LSTM to other neural-network architectures (e.g., convolutional neural networks and transformers) that may be more suitable for some specific tasks.",
"In this work, we propose a simpler method to realize the idea of Lattice-LSTM, i.e., incorporating all the matched words for each character to a character-based NER model.",
"The first principle of our model design is to achieve a fast inference speed.",
"To this end, we propose to encode lexicon information in the character representations, and we design the encoding scheme to preserve as much of the lexicon matching results as possible.",
"Compared with Lattice-LSTM, our method avoids the need for a complicated model architecture, is easier to implement, and can be quickly adapted to any appropriate neural NER model by adjusting the character representation layer.",
"In addition, ablation studies show the superiority of our method in incorporating more complete and distinct lexicon information, as well as introducing a more effective word-weighting strategy.",
"The contributions of this work can be summarized as follows: We propose a simple but effective method for incorporating word lexicons into the character representations for Chinese NER.",
"The proposed method is transferable to different sequence-labeling architectures and can be easily incorporated with pre-trained models like BERT (Devlin et al., 2018).",
"We performed experiments on four public Chinese NER datasets.",
"The experimental results show that when implementing the sequence modeling layer with a single-layer Bi-LSTM, our method achieves considerable improvements over the state-of-the-art methods in both inference speed and sequence labeling performance.",
"In this section, we introduce several previous works that influenced our work, including the Softword technique and Lattice-LSTM.",
"The Softword technique was originally used for incorporating word segmentation information into downstream tasks (Zhao and Kit, 2008; Peng and",
"Dredze, 2016).",
"It augments the character representation with the embedding of its corresponding segmentation label: x cj [ x cj ; e seg ( seg ( c j ))] .",
"Here, seg ( c j ) Y seg denotes the segmentation label of the character c j predicted by the word segmentor, e seg denotes the segmentation label embedding lookup table, and typically Y seg = { B , M , E , S } .",
"However, gold segmentation is not provided in most datasets, and segmentation results obtained by a segmenter can be incorrect.",
"Therefore, segmentation errors will inevitably be introduced through this approach.",
"Lattice-LSTM designs to incorporate lexicon information into the character-based neural NER model.",
"To achieve this purpose, lexicon matching is first performed on the input sentence.",
"If the subsequence { c i , , c j } of the sentence matches a word in the lexicon for i < j , a directed edge is added from c i to c j .",
"All lexicon matching results related to a character are preserved by allowing the character to be connected with multiple other characters.",
"Intrinsically, this practice converts the input form of a sentence from a chain into a graph.",
"However, in order to model the graph-based input, Lattice-LSTM introduces an elaborate modification to the normal LSTM.",
"Specifically, let s < ,j> denote the list of sub-sequences of sentence s that match the lexicon and end with c j , h < ,j> denote the corresponding hidden state list { h i , s <i,j> s < ,j> } , and c < ,j> denote the corresponding memory cell list { c i , s <i,j> s < ,j> } .",
"In Lattice-LSTM, the hidden state h j and memory cell c j of c j are now updated as follows: h j , c j = f ( h j 1 , c j 1 , x cj , s < ,j> , h < ,j> , c < ,j> ) , (3) where f is a simplified representation of the function used by Lattice-LSTM to perform memory update.",
"From our perspective, there are two main advantages to Lattice-LSTM.",
"First, it preserves all the possible lexicon matching results that are related to a character, which helps avoid the error propagation problem introduced by heuristically choosing a single matching result for each character.",
"Second, it introduces pre-trained word embeddings to the system, which greatly enhances its performance.",
"However, efficiency problems exist in Lattice-LSTM.",
"Compared with normal LSTM, Lattice-LSTM needs to additionally model s < ,j> , h < ,j> , and c < ,j> for memory update, which slows the training and inference speeds.",
"Additionally, due to the complicated implementation of f , it is difficult for Lattice-LSTM to process multiple sentences in parallel (in the published implementation of Lattice-LSTM, the batch size was set to 1).",
"These problems limit its application in some industrial areas where real-time NER responses are needed.",
"In this work, we sought to retain the merits of Lattice-LSTM while overcoming its drawbacks.",
"To this end, we propose a novel method in which lexicon information is introduced by simply adjusting the character representation layer of an NER model.",
"We refer to this method as SoftLexicon .",
"As shown in Figure 1, the overall architecture of the proposed method is as follows.",
"First, each character of the input sequence is mapped into a dense vector.",
"Next, the SoftLexicon feature is constructed and added to the representation of each character.",
"Then, these augmented character representations are put into the sequence modeling layer and the CRF layer to obtain the final predictions.",
"For a character-based Chinese NER model, the input sentence is seen as a character sequence s = { c 1 , c 2 , , c n } V c , where V c is the character vocabulary.",
"Each character c i is represented using a dense vector (embedding): x ci = e c ( c i ) , (4) where e c denotes the character embedding lookup table.",
"Char + bichar.",
"In addition, Zhang and Yang, (2018) has proved that character bigrams are useful for representing characters, especially for those methods not using word information.",
"Therefore, it is common to augment the character representations with bigram embeddings: x ci = [ e c ( c i ); e b ( c i , c i +1 )] , (5) Match in the lexicon (Language) (Linguistic) (National language) (Chinese language) (Chinese language) (Language; Say) C 3 f 3,4 f 3,5 f 1,4 f 2,3 f 1,3 f 3,3 w e i g h t B M E S C 4 B M E S C 5 B M E S C 2 B M E S C 1 B M E S Bi-LSTM / CNN / Transformer layer B-LOC E-LOC O O O Char emb SoftLexicon feature Concatenation Sequence encoding layer CRF layer Predictions Input sequence Figure 1: The overall architecture of the proposed method.",
"The problem with the purely character-based NER model is that it fails to exploit word information.",
"To address this issue, we proposed two methods, as described below, to introduce the word information into the character representations.",
"In the following, for any input sequence s = { c 1 , c 2 , , c n } , w i,j denotes its sub-sequence { c i , c i +1 , , c j } .",
"The first conducted method is an intuitive extension of the Softword method, called ExSoftword .",
"Instead of choosing one segmentation result for each character, it proposes to retain all possible segmentation results obtained using the lexicon: x cj [ x cj ; e seg ( segs ( c j )] , (6) where segs ( c j ) denotes all segmentation labels related to c j , and e seg ( segs ( c j )) is a 5-dimensional multi-hot vector with each dimension corresponding to an item of { B, M, E, S, O } .",
"As an example presented in Figure 2, the character c 7 ( ) occurs in two words, w 5 , 8 ( ) and w 6 , 7 ( ), that match the lexicon, and it occurs in the middle of and the end of .",
"Therefore, its corresponding segmentation result is { M , E } , and its character representation is enriched as follows: x c 7 [ x c 7 ; e seg ( { M, E } )] .",
"Here, the second and third dimensions of e seg ( ) are set to 1, and the rest dimensions are set to 0.",
"The problem of this approach is that it cannot fully inherit the two merits of Lattice-LSTM.",
"First, it fails to introduce pre-trained word embeddings.",
"Second, it still losses information of the matching results.",
"As shown in Figure 2, the constructed ExSoftword feature for characters { c 5 , c 6 , c 7 , c 8 } is {{ B } , { B , M , E } , { M , E } , { E }} .",
"However, given this constructed sequence, there exists more than one corresponding matching results, such as { w 5 , 6 ( ), w 5 , 7 ( ), w 6 , 8 ( ) } and { w 5 , 6 ( ), w 6 , 7 ( ), w 5 , 8 ( ) } .",
"Therefore, we cannot tell which is the correct result to be restored.",
"Based on the analysis on Exsoftword, we further developed the SoftLexicon method to incorporate the lexicon information.",
"The SoftLexicon features are constructed in three steps.",
"Categorizing the matched words.",
"First, to retain the segmentation information, all matched words of each character c i is categorized into four word sets BMES, which is marked by the four segmentation labels.",
"For each character c i in the input sequence = { c 1 , c 2 , , c n } , the four set is constructed by: B(c i ) = { w i , k , w i , k L , i < k n } , M(c i ) = { w j , k , w j , k L , 1 j < i < k n } , E(c i ) = { w j , i , w j , i L , 1 j < i } , S(c i ) = { c i , c i L } .",
"(8) Here, L denotes the lexicon we use in this work.",
"Additionally, if a word set is empty, a special word NONE is added to the empty word set.",
"An example of this categorization approach is shown in Figure",
"3. Noted that in this way, not only we Li = \" \" = {\" \"} = \" \" = \" \" = \"\" = \"\" = \" \" = \" \" Soft-lexicon method Zhong Hill East Road On Live Ming ZhongshanCity West ZhongshanRoad Ming Li (person name) c 2 8 7 6 5 4 3 , , , , Shanxi Province Figure 3: The SoftLexicon method.",
"can introduce the word embedding, but also no information loss exists since the matching results can be exactly restored from the four word sets of the characters.",
"Condensing the word sets.",
"After obtaining the BMES word sets for each character, each word set is then condensed into a fixed-dimensional vector.",
"In this work, we explored two approaches for implementing this condensation.",
"Here, S denotes a word set and e w denotes the word embedding lookup table.",
"However, as shown in Table 8, the results of empirical studies revealed that this algorithm does not perform well.",
"Therefore, a weighting algorithm is introduced to further leverage the word information.",
"To maintain computational efficiency, we did not opt for a dynamic weighting algorithm like attention.",
"Instead, we propose using the frequency of each word as an indication of its weight.",
"Since the frequency of a word is a static value that can be obtained offline, this can greatly accelerate the calculation of the weight of each word.",
"Specifically, let z ( w ) denote the frequency that a lexicon word w occurs in the statistical data, the weighted representation of the word set S is obtained as follows: v s ( S ) = 4 Z (cid:88) w S z ( w ) e w ( w ) , (10) where Z = (cid:88) w B M E S z ( w ) .",
"Here, weight normalization is performed on all words in the four word sets to make an overall comparison.",
"In this work, the statistical data set is constructed from a combination of training and developing data of the task.",
"Of course, if there is unlabelled data in the task, the unlabeled data set can serve as the statistical data set.",
"In addition, note that the frequency of w does not increase if w is covered by another sub-sequence that matches the lexicon.",
"This prevents the problem in which the frequency of a shorter word is always less than the frequency of the longer word that covers it.",
"Combining with character representation.",
"The final step is to combine the representations of four word sets into one fix-dimensional feature, and add it to the representation of each character.",
"In order to retain as much information as possible, we choose to concatenate the representations of the four word sets, and the final representation of each character is obtained by: e s (B , M , E , S) = [ v s (B); v s (M); v s (E); v s (S)] , x c [ x c ; e s (B , M , E , S)] .",
"(11)",
"Here, v s denotes the weighting function above.",
"With the lexicon information incorporated, the character representations are then put into the sequence modeling layer, which models the dependency between characters.",
"Generic architectures for this layer including the bidirectional long-short term memory network(BiLSTM), the Convolutional Neural Network(CNN) and the trans-former(Vaswani et al., 2017).",
"In this work, we implemented this layer with a single-layer Bi-LSTM.",
"Here, we precisely show the definition of the forward LSTM: i t f t o t (cid:101) c t = tanh (cid:18) W (cid:20) x ct h t 1 (cid:21) + b (cid:19) , c t = (cid:101) c t (cid:12) i t + c t 1 (cid:12) f t , h t = o t (cid:12) tanh( c t ) .",
"where is the element-wise sigmoid function and (cid:12) represents element-wise product.",
"W and b are trainable parameters.",
"The backward LSTM shares the same definition as the forward LSTM Datasets Type Train Dev Test OntoNotes Sentence 15.7k 4.3k 4.3k Char 491.9k 200.5k 208.1k MSRA Sentence 46.4k -4.4k Char 2169.9k -172.6k Weibo Sentence 1.4k 0.27k 0.27k Char 73.8k 14.5 14.8k Resume Sentence 3.8k 0.46 0.48k Char 124.1k 13.9k 15.1k Table 1: Statistics of datasets.",
"yet model the sequence in a reverse order.",
"The concatenated hidden states at the i th step of the forward and backward LSTMs h i = [ h i ; h i ] forms the context-dependent representation of c i .",
"On top of the sequence modeling layer, it is typical to apply a sequential conditional random field (CRF) (Lafferty et al., 2001) layer to perform label inference for the whole character sequence at once:",
"Here, Y s denotes all possible label sequences of s , and t ( y (cid:48) , y | s ) = exp( w Ty (cid:48) ,y h t + b y (cid:48) ,y ) , where w y (cid:48) ,y and b y (cid:48) ,y are trainable parameters corresponding to the label pair ( y (cid:48) , y ) , and denotes model parameters.",
"For label inference, it searches for the label sequence y with the highest conditional probability given the input sequence s : y = y p ( y | s ; ) , (14) which can be efficiently solved using the Viterbi algorithm (Forney, 1973).",
"Most experimental settings in this work followed the protocols of Lattice-LSTM (Zhang and Yang, 2018), including tested datasets, compared baselines, evaluation metrics (P, R, F1), and so on.",
"To make this work self-completed, we concisely illustrate some primary settings of this work.",
"The methods were evaluated on four Chinese NER datasets, including OntoNotes (Weischedel et al., 2011), MSRA (Levow, 2006), Weibo NER (Peng",
"and Dredze, 2015; He and Sun, 2017a), and Resume NER (Zhang and Yang, 2018).",
"OntoNotes and MSRA are from the newswire domain, where gold-standard segmentation is available for training data.",
"For OntoNotes, gold segmentation is also available for development and testing data.",
"Weibo NER and Resume NER are from social media and resume, respectively.",
"There is no gold standard segmentation in these two datasets.",
"Table 1 shows statistic information of these datasets.",
"As for the lexicon, we used the same one as Lattice-LSTM, which contains 5.7k single-character words, 291.5k two-character words, 278.1k three-character words, and 129.1k other words.",
"In addition, the pre-trained character embeddings we used are also the same with Lattice-LSTM, which are pre-trained on Chinese Giga-Word using word2vec.",
"In this work, we implement the sequence-labeling layer with Bi-LSTM.",
"Most implementation details followed those of Lattice-LSTM, including character and word embedding sizes, dropout, embedding initialization, and LSTM layer number.",
"Additionally, the hidden size was set to 200 for small datasets Weibo and Resume, and 300 for larger datasets OntoNotes and MSRA.",
"The initial learning rate was set to 0.005 for Weibo and 0.0015 for the rest three datasets with Adamax (Kingma and Ba, 2014) step rule 2 .",
"Table 2 shows the inference speed of the SoftLexicon method when implementing the sequence modeling layer with a bi-LSTM layer.",
"The speed was evaluated based on the average number of sentences processed by the model per second using a GPU (NVIDIA TITAN X).",
"From the 2 Please refer to the attached source code for more implementation detail of this work and access https:// github.com/jiesutd/LatticeLSTM for pre-trained word and character embeddings.",
"table, we can observe that when decoding with the same batch size (=1), the proposed method is considerably more efficient than Lattice-LSTM and LR-CNN, performing up to 6.15 times faster than Lattice-LSTM.",
"The inference speeds of Soft-Lexicon(LSTM) with bichar are close to those without bichar, since we only concatenate an additional feature to the character representation.",
"The inference speeds of the BERT-Tagger and SoftLexicon (LSTM) + BERT models are limited due to the deep layers of the BERT structure.",
"However, the speeds of the SoftLexicon (LSTM) + BERT model are still faster than those of Lattice-LSTM and LR-CNN on all datasets.",
"To further illustrate the efficiency of the SoftLexicon method, we also conducted an experiment to evaluate its inference speed against sentences of different lengths, as shown in Table",
"4. For a fair comparison, we set the batch size to 1 in all of the compared methods.",
"The results show that the proposed method achieves significant improvement in speed over Lattice-LSTM and LR-CNN when processing short sentences.",
"With the increase of sentence length, the proposed method is consistently faster than Lattice-LSTM and LR-CNN despite the speed degradation due to the recurrent architecture of LSTM.",
"Overall, the proposed SoftLexicon method shows a great advantage over other methods in computational efficiency.",
"Tables 3 6 3 show the performances of our method against the compared baselines.",
"In this study, the sequence modeling layer of our method was 3 In Table 3 5, indicates that the model uses external labeled data for semi-supervised learning.",
"means that the model also uses discrete features.",
"OntoNotes.",
"Table 3 shows results 4 on the OntoNotes dataset, where gold word segmentation is provided for both training and testing data.",
"The methods of the Gold seg and the Auto seg groups are all word-based, with the former input building on gold word segmentation results and the latter building on automatic word segmentation results by a segmenter trained on OntoNotes training data.",
"The methods used in the No seg group are character-based.",
"From the table, we can make several observations.",
"First , when gold word segmentation was replaced by automatically generated word segmentation, the F1 score decreases from 75.77% to 71.70%.",
"This reveals the problem of treating the predicted word segmentation result as the true result in the word-based Chinese NER.",
"Second , the F1 score of the Char-based (LSTM)+ExSoftword model is greatly improved from that of the Char-based (LSTM) model.",
"This indicates the feasibility of the naive ExSoftword method.",
"However, it still greatly underperforms relative to Lattice-LSTM, which reveals its deficiency in utilizing word information.",
"Lastly , the proposed SoftLexicon method outperforms Lattice-LSTM by 1.76% with respect to the F1 score, and obtains a greater improvement of 2.28% combining the bichar 4 A result in boldface indicates that it is statistically significantly better ( p < 0 . 01 in pairwise t test) than the others in the same box.",
"feature.",
"It even performs comparably with the word-based methods of the Gold seg group, verifying its effectiveness on OntoNotes.",
"MSRA/Weibo/Resume.",
"Tables 4, 5 and 6 show results on the MSRA, Weibo and Resume datasets, respectively.",
"Compared methods include the best statistical models on these data set, which leveraged rich handcrafted features (Chen et al., 2006; Zhang et al., 2006; Zhou et al., 2013), character embedding features (Lu et al., 2016; Peng and Dredze, 2016), radical features (Dong et al., 2016), cross-domain data, and semi-supervised data (He and Sun, 2017b).",
"From the tables, we can see that the performance of the proposed Soft-lexion method is significant better than that of Lattice-LSTM and other baseline methods on all three datasets.",
"Table 7 shows the performance of the SoftLexicon method when implementing the sequence modeling layer with different neural architecture.",
"From the table, we can first see that the LSTM-based architecture performed better than the CNNand transformerbased architectures.",
"In addition, our method with different sequence modeling layers consistently outperformed their corresponding ExSoftword baselines.",
"This confirms the superiority of our method in modeling lexicon information in different neural NER models.",
"We also conducted experiments on the four datasets to further verify the effectiveness of SoftLexicon in combination with pre-trained model, the results of which are shown in Tables 3 6.",
"In these experiments, we first use a BERT encoder to obtain the contextual representations of each sequenc, and then concatenated them into the character representations.",
"From the table, we can see that the SoftLexicon method with BERT outperforms the BERT tagger on all four datasets.",
"These results show that the SoftLexicon method can be effectively combined with pre-trained model.",
"Moreover, the results also verify the effectiveness of our method in utilizing lexicon information, Models OntoNotes MSRA Weibo Resume SoftLexicon (LSTM) 75.64 93.66 61.42 95.53 M group 75.06 93.09 58.13 94.72 Distinction 70.29 92.08 54.85 94.30 Weighted pooling 72.57 92.76 57.72 95.33 Overall weighting 74.28 93.16 59.55 94.92 Table 8: An ablation study of the proposed model.",
"which means it can complement the information obtained from the pre-trained model.",
"To investigate the contribution of each component of our method, we conducted ablation experiments on all four datasets, as shown in table 8.",
"(1) In Lattice-LSTM, each character receives word information only from the words that begin or end with it.",
"Thus, the information of the words that contain the character inside is ignored.",
"However, the SoftLexicon prevents the loss of this information by incorporating the Middle group of words.",
"In the M' group experiment, we removed the Middle group in SoftLexicon, as in Lattice-LSTM.",
"The degradation in performance on all four datasets indicates the importance of the M group of words, and confirms the advantage of our method.",
"(2) Our method proposed to draw a clear distinction between the four BMES categories of matched words.",
"To study the relative contribution of this design, we conducted experiments to remove this distinction, i.e., we simply added up all the weighted words regardless of their categories.",
"The decline in performance verifies the significance of a clear distinction for different matched words.",
"(3) We proposed two strategies for pooling the four word sets in Section 3.2.",
"In the Weighted pooling experiment, the weighted pooling strategy was replaced with mean-pooling, which degrades the performance.",
"Compared with mean-pooling, the weighting strategy not only succeeds in weighing different words by their significance, but also introduces the frequency information of each word in the statistical data, which is verified to be helpful.",
"(4) Although existing lexicon-based methods like Lattice-LSTM also use word weighting, unlike the proposed Soft-lexion method, they fail to perform weight normalization among all the matched words.",
"For example, Lattice-LSTM only normalizes the weights inside the B group or the E group.",
"In the Overall weighting experiment, we performed weight normalization inside each BMES group as Lattice-LSTM does, and found the resulting performance to be degraded.",
"This result shows that the ability to perform overall weight normalization among all matched words is also an advantage of our method.",
"In this work, we addressed the computational efficiency of utilizing word lexicons in Chinese NER.",
"To obtain a high-performing Chinese NER system with a fast inference speed, we proposed a novel method to incorporate the lexicon information into the character representations.",
"Experimental studies on four benchmark Chinese NER datasets reveal that our method can achieve a much faster inference speed and better performance than the compared state-of-the-art methods.",
"The authors wish to thank the anonymous reviewers for their helpful comments.",
"This work was partially funded by China National Key RD Program (No. 2018YFB1005104, 2018YFC0831105, 2017YFB1002104), National Natural Science Foundation of China (No. 61976056, 61532011, 61751201), Shanghai Municipal Science and Technology Major Project (No.2018SHZDZX01), Science and Technology Commission of Shanghai Municipality Grant (No.18DZ1201000, 16JC1420401, 17JC1420200)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"result",
"objective",
"objective",
"method",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"method",
"objective",
"result",
"other",
"other"
] |
[
"We present a new dataset comprised of 210,532 tokens evenly drawn from 100 different English-language literary texts annotated for ACE entity categories (person, location, geo-political entity, facility, organization, and vehicle).",
"These categories include non-named entities (such as the boy, the kitchen) and nested structure (such as [[the cook]'s sister]).",
"In contrast to existing datasets built primarily on news (focused on geopolitical entities and organizations), literary texts offer strikingly different distributions of entity categories, with much stronger emphasis on people and description of settings.",
"We present empirical results demonstrating the performance of nested entity recognition models in this domain; training natively on in-domain literary data yields an improvement of over 20 absolute points in F-score (from 45.7 to 68.3), and mitigates a disparate impact in performance for male and female entities present in models trained on news data.",
"Computational literary analysis works at the intersection of natural language processing and literary studies, drawing on the structured representation of text to answer literary questions about character (Underwood et al., 2018), objects (Tenen, 2018) and place (Evans and Wilkens, 2018).",
"Much of this work relies on the ability to extract entities accurately, including work focused on modeling (Bamman et al., 2014; Iyyer et al., 2016; Chaturvedi et al., 2017).",
"And yet, with notable exceptions (Vala et al., 2015; Brooke et al., 2016), nearly all of this work tends to use NER models that have been trained on non-literary data, for the simple reason that labeled data exists for domains like news through standard datasets like ACE (Walker et al., 2006), CoNLL (Tjong Kim Sang and De Meulder, 2003) and OntoNotes (Hovy et al., 2006)and even historical non-fiction (De-Lozier et al., 2016; Rayson et al., 2017)but not for literary texts.",
"This is naturally problematic for several reasons: models trained on out-of-domain data surely degrade in performance when applied to a very different domain, and especially for NER, as Au-genstein et al. (2017) has shown; and without in-domain test data, it is difficult to directly estimate the severity of this degradation.",
"At the same time, literary texts also demand slightly different representations of entities.",
"While classic NER models typically presume a flat entity structure (Finkel and Manning, 2009), relevant characters and places (and other entities) in literature need not be flat, and need not be named: The cook's sister ate lunch contains two PER entities ([The cook] and [The cook's sister]).",
"We present in this work a new dataset of entity annotations for a wide sample of 210,532 tokens from 100 literary texts to help address these issues and help advance computational work on literature.",
"These annotations follow the guidelines set forth by the ACE 2005 entity tagging task (LDC, 2005) in labeling all nominal entities (named and common alike), including those with nested structure.",
"In evaluating the stylistic difference between the texts in ACE 2005 (primar-ily news) and the literary texts in our new dataset, we find considerably more attention dedicated to people and settings in literature; this attention directly translates into substantially improved accuracies for those classes when models are trained on them.",
"The dataset is freely available for download under a Creative Commons ShareAlike 4.0 license at https://github.com/dbamman/ litbank .",
"We draw our corpus from the public-domain texts on Project Gutenberg, selecting individual works of fiction (both novels and short stories) that include a mix of high literary style (e.g., Edith Whar-ton's Age of Innocence , James Joyce's Ulysses ) and popular pulp fiction (e.g., H. Rider Haggard's King Solomon's Mines , Horatio Alger's Ragged Dick ).",
"All texts are published before 1923 (the current threshold for public domain in the United States), with the majority falling between 1852 and 1911.",
"We adopt the ACE 2005 guidelines for entity annotation, focusing on the subset of people ( PER ), natural locations ( LOC ), built facilities ( FAC ), geopolitical entities ( GPE ), organizations ( ORG ) and vehicles ( VEH ).",
"1 While traditional named entity recognition presumes a flat structure in which entity labels cannot be embedded within each other, we allow for nested structure, as in the following (from Jane Austen's Emma ): . . . PER (cid:122) (cid:125)(cid:124) (cid:123) the elder brother of PER (cid:122) (cid:125)(cid:124) (cid:123) PER (cid:122) (cid:125)(cid:124) (cid:123) Isabella 's husband This nested structure is in fact quite common in our data, with entities that contain at least one level of nesting accounting for 13.8% of the annotations86.2% contain no nesting (as in Isabella above), 12.5% contain one level ( Isabella's husband ), 1.2% contain two ( the elder brother of Isabella's husband ), and 0.1% contain three.",
"The dataset contains a total of 13,912 entity annotations.",
"We generally follow the ACE annotation guidelines for each of the entity classes and restrict our annotations to proper and common noun phrases (i.e., excluding pronouns or WH-question words); table 1 illustrates examples for each class.",
"PER.",
"By person we describe a single person indicated by a proper name (Tom Saywer) or common entity (the boy); or set of people, such as her daughters and the Ashburnhams .",
"FAC.",
"ACE guidelines define a facility as a functional, primarily man-made structure designed for human habitation (buildings, muse-ums), storage (barns, parking garages), transportation infrastructure (streets, highways), and maintained outdoor spaces (gardens) (LDC, 2005).",
"We adopt the ACE threshold for taggability here as well, and rooms and closets within a house as the smallest possible facility.",
"GPE.",
"Geo-political entities are single units that contain a population, government, physical location, and political boundaries (LDC, 2005).",
"In literary data, this includes not only cities that have known geographical locations within the real world (London, New York), or nations (England, the United States), but also both named and common imagined entities as well (the town, the vil-lage).",
"LOC.",
"Locations describe entities with physicality but without political entities.",
"In our dataset, this includes named regions without political organization (New England, the South) and planets (Mars).",
"The most common class, however, are geologically designated areas describing natural settings, such as the sea, the river, the country, the valley, the woods, and the forest.",
"VEH.",
"Literary texts include a number of vehicles defined as a physical device primarily designed to move an object from one location to an-other (LDC, 2005); ships, trains, and carriages dominate since nearly all texts were written before the rise of automobiles.",
"ORG.",
"Organizations are defined by the criterion of formal association and are relatively rare in literary data, comprising the least frequently occurring entity class.",
"The most frequent organizations include the army and the Church (as an administrative entity, distinct from the church as a facility with a physical location).",
"Literary language in particular presents several unique challenges to entity annotation, including metaphor, personification and metonymy.",
"Metaphor.",
"For non-figurative texts, predicative structures like John is a doctor nearly always entail the two comparands to be identical in their entity type (here, John and a doctor are both PER ).",
"Literary texts, however, are awash in figurative metaphor, such as the young man was not really a poet; but surely he was a poem (Chesterton, The Man Who Was Thursday ).",
"In such cases where the metaphor takes a predicative structure of x is y , we annotate only those phrases whose type describes an entity class (in this case, labeling a poet as a PER , but not a poem ).",
"Personification.",
"Several works, such as Lon-don's The Call of the Wild and Sewell's Black Beauty , feature personified animals as main characters, with dialogue and evident cognition.",
"We expand the criteria for PER to include such characters who engage in dialogue or have reported internal monologue, regardless of their human status (this includes depicted non-human life forms in science fiction, such as aliens and robots, as well).",
"Metonymy.",
"Metonymy is a rhetorical device of describing one aspect of a concept by a closely related one (such as the White House to refer to the organization of government it houses).",
"We see many examples of metonymy in literature, such as the following: Them men would eat and drink if we was all in our graves,' said the indignant cook, who indeed had a real grievance; and the outraged sentiment of the kitchen was avenged by a bad and hasty",
"dinner. (Oliphant, Miss Mar-joribanks) Following ACE, we annotate such examples by annotating the evoked entity class; in this case, annotating the kitchen as a PER (describing a set of cooks who feel outrage) rather than as a FAC .",
"Two co-authors annotated all 100 texts with a single pass between them after an initial phase of discussions about the annotation process, difficulties encountered and formalizing annotation decisions specific to literary texts.",
"At the end of annotating, the inter-annotator agreement was calculated by double-annotating the same five texts and measuring the F1 score.",
"We find that agreement rate to be high (86.0 F), likely due to the existence of thorough previous guidelines in ACE that both annotators were able to reference during the process of annotation.",
"2 4 Comparison with ACE We can compare the properties of this dataset to those of the ACE 2005 annotated data.",
"To enable an apples-to-apples comparison, we filter the ACE data to exclude entity labels for tokens that are marked with a mention type of pronoun ( PRO ) or WH-question ( WHQ ) and remove all weapon ( WEA ) labels; we consider only the subsets for broadcast conversation (bc), broadcast news (bn), newswire (nw) and weblog (wl), as in past work (Lu and Roth, 2015; Muis and Lu, 2017; Ju et al., 2018).",
"Figure 2 plots the difference in entity label distributions between the ACE 2005 data and our literary data: literature has a proportionally higher ratio of person and facility mentions, and much lower mentions for GPEs and organizations.",
"To understand how this different distribution of entity types impacts the performance of models trained on these different sources, we evaluate the performance of a state-of-the-art model for nested",
"2 Note we report F-score since we are measuring the agreement rates between annotators not only in their choice of labels (for which a categorical chance-corrected measure like Cohen's would be appropriate), but also the spans in text to which they apply.",
"entity recognition (Ju et al., 2018).",
"We create training, development and test splits on the 100 literary books by stratifying at the document level, with 80 books in training, 10 books in development and 10 books in test.",
"To preprocess ACE, we tokenize and split sentences using the Stanford tokenizer (Manning et al., 2014), and create training, development and test partitions again stratified by document, so that sentences from the same document do not appear in both train and test.",
"As above, we adapt the ACE annotations to our format by removing pronoun ( PRO ) and WH-question ( WHQ ) annotations and remove all weapon ( WEA ) labels, and consider only the subsets for broadcast conversation, broadcast news, newswire and weblogs.",
"We present results with 95% confidence intervals using the bootstrap.",
"When trained on ACE and tested on ACE, the layered bidirectional LSTM-CRF of Ju et al. (2018) achieves an F-score of 68.8.",
"When that same model (trained on ACE) is evaluated on our literature data, performance drops precipitously (23 absolute points in F-score).",
"This alone that cross-domain performance can be so strikingly worseis a significant result, providing the first estimate of how performance degrades across these domains for this task.",
"However, when we train an identically parameterized model on the training partition of the literary data and evaluate it on the literary test partition, performance naturally improves substantially to an F-score of 68.3.",
"As table 2 shows, performance improves dramatically for nearly all entity classes; the classes with the most statistically significant improvement are PER and FAC both of which improve by 20 absolute points.",
"To better understand the ways in which a model trained on ACE data differs in its predictions from an identically parameterized model trained on literary data, we used the two models described above to generate predictions for nested entities in a random sample of 1,000 full-text books from Project Gutenberg not in our training, development or test data (a total of 78M tokens).",
"We then analyzed a simple difference in frequencies between the predictions of the two models on that same data; for a given entity e (e.g., the boy ) and category t (e.g., PER ), we calculate the frequency f as the number of times e was tagged by each model as t , and measure the difference: f LIT ( e, t ) f ACE ( e, t ) The ten terms with the strongest positive difference in frequencies for the PER classthose phrases that are found significantly more often in a model trained on literary data than a model trained on ACEare",
"Mrs., Miss, Lady, Aunt, Sir, Captain, no one, Mr, Madame and nobody , suggesting a potential gender bias in the predictions; indeed, while ACE 2005 contains 47 instances of Mr. , it contains no mentions of Mrs. or honorific Miss (and only three instances of Ms. ).",
"While other work has demonstrated the gender bias present in word embeddings (Bolukbasi et al., 2016; Caliskan et al., 2017; Zhao et al., 2019) and in such NLP tasks as coreference resolution (Rudinger et al., 2018; Zhao et al., 2018), sentiment analysis (Kiritchenko and Mohammad, 2018), and speech recognition (Tatman, 2017), we can investigate the same phenomenon here: does a model trained on ACE result in a disparate impact in its performance recognizing men and women entities in text?",
"To answer this question, we annotate the gender for all PER entities in the literary test data (a total of 969 entities) and measure the recall of each model as a function of the gender of the true entity (measuring, for example, how many women in the gold literary data each model was able to identify, and how many men).",
"Table 4 lists these results: while a model trained on literary data recognizes male and female entities at roughly equal rates, ACE data shows a strong disparate performance, with female entities recognized at a rate over 11 points worse than male entities.",
"This difference is significant at p < 0 .",
"001 under a permutation test (randomly shuffling the gender labels assigned to entities to generate a non-parametric null distribution, with 100,000 permutations).",
"If we remove the obvious entities from the gold data that begin with Mrs. and Miss (the honorifics that are rarely attested in ACE) along with those that begin with Mr. , we still see a sizable disparity in performance, suggesting that this result is more pervasive than the simple absence of those honorifics from the training data.",
"We present in this work a new dataset of nested entity annotations for literature; such data allows us to measure the performance of existing NER systems when evaluated on literary data, train new models optimized for literature as a domain, and explore the stylistic differences in entity attention that help define literature as a genre.",
"In addition to helping advance the state-of-the-art in NLP for literary texts, we provide this dataset to advance modeling for entity recognition generally; as Sgaard (2013) argues, the robustness of performance improvements for methods in NLP is best estimated by performance across a range of domains; we would expect a robust model that shows improvement on news entities in ACE and proteins in GENIA to show improvements on recognizing characters and settings in literature as well.",
"All data is freely available for public use under a Creative Commons Sharealike license and is available at: https://github.com/dbamman/ litbank ; code to support this work can be found at: https://github.com/dbamman/ NAACL2019-literary-entities .",
"We thank the anonymous reviewers, Matt Sims and Jon Gillick for their valuable feedback.",
"The research reported in this article was supported by an Amazon Research Award, a grant from the Digital Humanities at Berkeley initiative and by resources provided by NVIDIA."
] | [
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"other",
"other",
"other"
] |
[
"In Neural Machine Translation (and, more generally, conditional language modeling), the generation of a target token is influenced by two types of context: the source and the prefix of the target sequence.",
"While many attempts to understand the internal workings of NMT models have been made, none of them explicitly evaluates relative source and target contributions to a generation decision.",
"We argue that this relative contribution can be evaluated by adopting a variant of Layerwise Relevance Propagation (LRP).",
"Its underlying conservation principle' makes relevance propagation unique: differently from other methods, it evaluates not an abstract quantity reflecting token importance, but the proportion of each token's influence.",
"We extend LRP to the Transformer and conduct an analysis of NMT models which explicitly evaluates the source and target relative contributions to the generation process.",
"We analyze changes in these contributions when conditioning on different types of prefixes, when varying the training objective or the amount of training data, and during the training process.",
"We find that models trained with more data tend to rely on source information more and to have more sharp token contributions; the training process is non-monotonic with several stages of different nature.",
"1 1 Introduction With the success of neural approaches to natural language processing, analysis of NLP models has become an important and active topic of research.",
"In NMT, approaches to analysis include probing for linguistic structure (Belinkov et al., 2017; Conneau et al., 2018), evaluating via contrastive translation pairs (Sennrich, 2017; Burlot and Yvon, 2017; Rios Gonzales et al., 2017; Tang 1 We release the code at https://github.com/ lena-voita/the-story-of-heads . et al., 2018), inspecting model components, such as attention (Ghader and Monz, 2017; Voita et al., 2018; Tang et al., 2018; Raganato and Tiedemann, 2018; Voita et al., 2019) or neurons (Dalvi et al., 2019; Bau et al., 2019), among others.",
"Unfortunately, although a lot of work on model analysis has been done, a question of how the NMT predictions are formed remains largely open.",
"Namely, the generation of a target token is defined by two types of context, source and target, but there is no method which explicitly evaluates the relative contribution of source and target to a given prediction.",
"The ability to measure this relative contribution is important for model understanding since previous work showed that NMT models often fail to effectively control information flow from source and target contexts.",
"For example, adding context gates to dynamically control the influence of source and target leads to improvement for both RNN (Tu et al., 2017; Wang et al., 2018) and Trans-fomer (Li et al., 2020) models.",
"A more popular example is a model's tendency to generate hallucinations (fluent but inadequate translations); it is usually attributed to the inappropriately strong influence of target context.",
"Several works observed that, when hallucinating, a model fails to properly use source: it produces a deficient attention matrix, where almost all the probability mass is concentrated on uninformative source tokens (EOS and punctuation) (Lee et al., 2018; Berard et al., 2019).",
"We argue that a natural way to estimate how the source and target contexts contribute to generation is to apply Layerwise Relevance Propagation (LRP) (Bach et al., 2015) to NMT models.",
"LRP redistributes the information used for a prediction between all input elements keeping the total contribution constant.",
"This conservation principle' makes relevance propagation unique: differently from other methods estimating influence of individual tokens (Alvarez-Melis and Jaakkola, 2017; He et al., 2019a; Ma et al., 2018), LRP evaluates not an abstract quantity reflecting a token importance, but the proportion of each token's influence.",
"We extend one of the LRP variants to the Transformer and conduct the first analysis of NMT models which explicitly evaluates the source and target relative contributions to the generation process.",
"We analyze changes in these contributions when conditioning on different types of prefixes (refer-ence, generated by a model or random translations), when varying training objective or the amount of training data, and during the training process.",
"We show that models suffering from exposure bias are more prone to over-relying on target history (and hence to hallucinating) than the ones where the exposure bias is mitigated.",
"When comparing models trained with different amount of data, we find that extra training data teaches a model to rely on source information more heavily and to be more confident in the choice of important tokens.",
"When analyzing the training process, we find that changes in training are non-monotonic and form several distinct stages (e.g., stages changing direction from decreasing influence of source to increasing).",
"Our key contributions are as follows: we show how to use LRP to evaluate the relative contribution of source and target to NMT predictions; we analyze how the contribution of source and target changes when conditioning on different types of prefixes: reference, generated by a model or random translations; by looking at the contributions when conditioning on random prefixes, we observe that models suffering from exposure bias are more prone to over-relying on target history (and hence to hallucinating); we find that",
"(i) with more data, models rely on source information more and have more sharp token contributions,",
"(ii) the training process is non-monotonic with several distinct stages.",
"Layer-wise relevance propagation is a framework which decomposes the prediction of a deep neural network computed over an instance, e.g. an image or sentence, into relevance scores for single input dimensions of the sample such as subpixels of an image or neurons of input token embeddings.",
"The original LRP version was developed for computer vision models (Bach et al., 2015) and is not directly applicable to the Transformer (e.g., to the attention layers).",
"In this section, we explain the general idea behind LRP, specify which of the existing LRP variants we use, and show how to extend LRP to the NMT Transformer model.",
"2 2.1 General Idea: Conservation Principle In its general form, LRP assumes that the model can be decomposed into several layers of computation.",
"The first layer are the inputs (for example, the pixels of an image or tokens of a sentence), the last layer is the real-valued prediction output of the model f .",
"The l -th layer is modeled as a vector x ( l ) = ( x ( l ) i ) V ( l ) i =1 with dimensionality V ( l ) .",
"Layerwise relevance propagation assumes that we have a relevance score R ( l +1) i for each dimension x ( l +1) i of the vector x at layer l + 1 .",
"The idea is to find a relevance score R ( l ) i for each dimension x ( l ) i of the previous layer l such that the following holds: f = ... = (cid:88) i R ( l +1) i = (cid:88) i R ( l ) i = ... = (cid:88) i R (1) i .",
"This equation represents a conservation principle , which LRP exploits to back-propagate the prediction.",
"Intuitively, this means that the total contribution of neurons at each layer is constant.",
"Assume that we know the relevance R ( l +1) j of a neuron j at network layer l +1 for the prediction f ( x ) .",
"Then we would like to decompose this relevance into messages R ( l,l +1) i j sent from the neuron j at layer l + 1 to each of its input neurons i at layer l .",
"For the conservation principle to hold, these messages R ( l,l +1) i j have to satisfy the constraint: R ( l +1) j = (cid:88) i R ( l,l +1) i j .",
"Then we can define the relevance of a neuron i at layer l by summing all messages from neurons at layer ( l + 1) : R ( l ) i = (cid:88) R ( l,l +1) i j .",
"(3) Equations (2) and (3) define the propagation of relevance from layer l +1 to layer l .",
"The only thing that is missing is specific formulas for computing the 2 Previous work applying one of the LRP variants to NMT (Ding et al., 2017; Voita et al., 2019) do not describe extensions beyond the original LRP rules (Bach et al., 2015).",
"Several versions of LRP satisfying equation (4) (and, therefore, the conservation principle) have been introduced: LRP , LRP and LRP(Bach et al., 2015; Binder et al., 2016; Montavon et al., 2019).",
"We use LRP (Bach et al., 2015; Binder et al., 2016), which defines relevances at each step in such a way that they are positive.",
"with non-linear activation functions, namely z ij = x ( l ) i w ij , z j = (cid:88) i z ij + b i , x ( l +1) j = g ( z j ) , where w ij is a weight connecting the neuron x ( l ) i to neuron x ( l +1) j , b j is a bias term, and g is a nonlinear activation function.",
"Let z + j = (cid:88) i z + ij + b + j , z j = (cid:88) i z ij + b j , where (cid:3) + = max(0 , (cid:3) ) and (cid:3) = min(0 , (cid:3) ) .",
"Then the -rule (Bach et al., 2015; Binder et al., 2016) is given by the equation R ( l,l +1) i j = R ( l +1) j (cid:32) z + ij z + j + z ij z j (cid:33) , (5) where + = 1 .",
"Note that all terms in the brackets are always positive: negative signs of z j and z ij cancel out when evaluating the ratio.",
"This propagation method allows to control manually the importance of positive and negative evidence by choosing different and .",
"For example, , = 1 2 treats positive and negative contributions as equally important, while = 1 , = 0 considers only positive contributions.",
"In our experiments, both versions lead to the same observations.",
"These layers include linear, convolutional and max-pooling operations.",
"Additionally, pointwise monotonic activation functions g j (e.g., ReLU) are ignored by LRP (Bach et al., 2015).",
"Propagating relevance through attention layers.",
"For the structures that do not fit the form (6), the weighting v ij can be obtained by performing a first order Taylor expansion of a neuron x ( l +1) j (Bach et al., 2015; Binder et al., 2016).",
"For attention layers in the Transformer, we extend the approach by Binder et al. (2016).",
"Namely, let x ( l +1) j = f ( x ( l ) ) , f ( x ) = f ( x 1 , . . . , x n ) .",
"Then by Taylor expansion at some point x = ( x 1 , . . . , x n ) , we get f ( x ) f ( x ( l ) ) + (cid:88) i j f x i ( x ( l ) ) ( x i x ( l ) i ) , x ( l +1) j = f ( x ( l ) ) f ( x )+ (cid:88) i j f x i ( x ( l ) ) ( x ( l ) i x i ) .",
"Elements of the sum can be assigned to incoming neurons, and the zero-order term can be redistributed equally between them.",
"This leads to the following decomposition: z ij = 1 nf ( x ) + f x i ( x ( l ) ) ( x ( l ) i x i ) .",
"We use the zero vector in place of x .",
"Equation (7), along with the standard redistribution rules (5), defines relevance propagation for complex non-linear layers.",
"In the Transformer, we apply equation (7) to the softmax operations in the attention layers; all other operations inside the attention layers are linear functions, and the rule (5) can be used.",
"Given a source sequence x = ( x 1 , . . . , x S ) and a target sequence y = ( y 1 , . . . , y T ) , standard autoregressive NMT models (or, in a more broad sense, conditional language models) are trained to predict words in the target sequence, word by word.",
"Formally, at each generation step such models predict p ( y t | x 1: S , y 1: t 1 ) relying on both source tokens x 1: S and already generated target tokens y 1: t 1 .",
"Using LRP, we evaluate relative contribution of all tokens, source and target, to the current prediction.",
"Propagating through decoder and encoder.",
"At first glance, it can be unclear how to apply a layerwise method to a not completely layered architecture (such as encoder-decoder).",
"This, however, is rather straightforward and is done in two steps: 1. total relevance is propagated through the decoder.",
"from the final encoder layer, part of the relevance leaks' to the encoder; this happens at each decoder layer; 2. relevance leaked to the encoder is propagated through the encoder layers.",
"The total contribution of neurons in each decoder layer is not preserved (part of the relevance leaks to the encoder), but the total contribution of all tokens across the source and the target prefix remains equal to the model prediction.",
"We evaluate relevance of input neurons to the top-1 logit predicted by a model.",
"Then token relevance (or its contribution) is the sum of relevances of its neurons.",
"Notation.",
"Without loss of generality, we can assume that the total relevance for each prediction equals 1. 3 Let us denote by R t ( x i ) and R t ( y j ) the contribution of source token x i and target token y j to the prediction at generation step t , respectively.",
"Then source and target contributions are defined as R t ( source ) = (cid:80) i R t ( x i ) , R t ( target ) = t 1 (cid:80) j =1 R t ( y j ) .",
"Note that t R t ( source )+ R t ( target )=1; R 1 ( source ) = 1 , R 1 ( target ) = 0 , and j t R t ( y j )=0 .",
"Model.",
"We follow the setup of Transformer base model (Vaswani et al., 2017) with the standard training setting.",
"More details on hyperparameters and the optimizer can be found in the appendix.",
"Data.",
"We use random subsets of the WMT14 En-Fr dataset of different size: 1m, 2.5m, 5m, 10m, 20m, 30m sentence pairs.",
"In Sections 4 and 7, we report results for the model trained on the 1m subset.",
"In Section 6, we show how the results depend on the amount of training data.",
"Evaluating LRP.",
"The -LRP we use requires choosing values for and , + = 1 .",
"We tried treating positive and negative contributions as equally important ( = = 12 ), or considering only positive contributions ( = 1 , = 0 ).",
"The observed patterns in behavior were the same for these two versions.",
"In the main text, we use = 1 ; in the appendix, we provide results for = = 12 .",
"3 More formally, if we evaluate relevance for top-1 logit predicted by a model, then the total relevance is equal to the value of this logit.",
"However, the conservation principle allows us to assume that this logit is equal to 1 and to consider relative contributions.",
"Reporting results.",
"All presented results are averaged over an evaluation dataset of 1000 sentence pairs.",
"In each evaluation dataset, all examples have the same number of tokens in the source, as well as in the target (e.g., 20 source and 23 target tokens; the exact number for each experiment is clear from the results).",
"4 4 Getting Acquainted In this section, we explain general patterns in model behavior and illustrate the usage of LRP by evaluating different statistics within a single model.",
"Later, we will show how these results change when varying the amount of training data (Section 6) and during model training (Section 7).",
"Here we evaluate changes in the source contribution during generation, and in contributions of source tokens at different positions to entire output.",
"Source target(k).",
"For each generation step t , we evaluate total contribution of source R t ( source ) .",
"Note that this is equivalent to evaluating total contribution of prefix since R t ( prefix ) = 1 R t ( source ) (Section 2.3).",
"Results are shown in Figure",
"1(a).",
"5 We see that, during the generation process, the influence of source decreases (or, equivalently, the influence of the prefix increases).",
"This is expected: with a longer prefix, the model has less uncertainty in deciding which source tokens to use, but needs to control more for fluency.",
"There is also a large drop of source influence for the last token: apparently, to 4 Note that we have to fix the number of tokens in the source and target to get reliable comparisons.",
"We choose sentences of length 20 and 23 because these are among the most frequent sentence lengths in the dataset.",
"We also looked at sentences with 16, 25, 29 tokens observed patterns were the same.",
"5 Since the first token is always generated solely relying on the source, we plot starting from the second token.",
"Source(k) target.",
"Now we want to understand if there is a tendency to use source tokens at certain positions more than tokens at the others.",
"For each source token position k , we evaluate its total contribution to the whole target sequence.",
"To eliminate the effect of decreasing source influence during generation, at each step t we normalize source contributions R t ( x k ) over the total contribution of source at this step R t ( source ) .",
"Formally, for the k -th token we evaluate T (cid:80) t =1 R t ( x k ) / R t ( source ) .",
"For convenience, we multiply the result by ST : this makes the average total contribution of each token equal to 1. Figure",
"1(b) shows that, on average, source tokens at earlier positions influence translations more than tokens at later ones.",
"This may be because the alignment between English and French languages is roughly monotonic.",
"We leave for future work investigating the changes in this behavior for language pairs with more complex alignment (e.g., English-Japanese).",
"Now let us look at how sharp' contributions of source or target tokens are at different generation steps.",
"For each step t , we evaluate entropy of (normalized) source or target contributions: { R t ( x i ) / R t ( source ) } Si =1 or { R t ( y j ) / R t ( target ) } t 1 j =1 .",
"Entropy of source contributions.",
"Figure",
"2(a) shows that during generation, entropy increases until approximately 2 / 3 of the translation is generated, then decreases when generating the remaining part.",
"Interestingly, for the last punctuation mark and the EOS token, entropy of source contributions is very high: the decision to complete the sentence",
"Entropy of target contributions.",
"Figure",
"2(b) shows that entropy of target contributions is higher for longer prefixes.",
"This means that the model does use longer contexts in a non-trivial way.",
"Let us now look at how model behavior changes when feeding different types of prefixes: prefixes of reference translations, translations generated by the model, and random sentences in the target language.",
"6 As in previous experiments, we evaluate relevance for top-1 logit predicted by the model.",
"Reference vs model prefixes.",
"When feeding model-generated prefixes, the model uses source more (Figure",
"3(a)) and has more focused source contributions (lower entropy in Figure",
"3(b)) than when generating the reference.",
"This may be because model-generated translations are eas-ier' than references.",
"For example, beam search translations contain fewer rare tokens (Burlot and Yvon, 2018; Ott et al., 2018), are simpler syntactically (Burlot and Yvon, 2018) and, according to the fuzzy reordering score (Talbot et al., 2011), model translations have significantly less reordering compared to the real parallel sentences (Zhou et al., 2020).",
"As we see from our experiments, these simpler model-generated prefixes allow for the model 6 Random prefixes come from the same evaluation set, but with shuffled target sentences.",
"to rely on the source more and to be more confident when choosing relevant source tokens.",
"Reference vs random prefixes.",
"Results for random sentence prefixes are given in Figures 3c, 3d.",
"The reaction to random prefixes helps us study the self-recovery ability of NMT models.",
"Previous work has found that models can fall into a hallucination mode where the decoder ignores context from the encoder and samples from its language mode (Koehn and Knowles, 2017; Lee et al., 2018).",
"In contrast, He et al. (2019b) found that a language model is able to recover from artificially distorted history input and generate reasonable samples.",
"Our results show evidence for both.",
"At the beginning of the generation process, the model tends to rely more on the source context when given a random prefix compared to the reference prefix, indicating a self-recovery mode.",
"However, when the prefix becomes longer, the model choice shifts towards ignoring the source and relying more on the target: Figure 3c shows a large drop of source influence for later positions.",
"Figure 3d also shows that with a random prefix, the entropy of source contributions is high and is roughly constant.",
"The results in the previous section agree with some observations made in previous work studying self-recovery and hallucinations.",
"In this section, we illustrate more explicitly how our methodology can be used to shed light on the effects of exposure bias and training objectives.",
"Wang and Sennrich (2020) empirically link the hallucination mode to exposure bias (Ranzato et al., 2016), i.e. the mismatch between the gold history seen at training time, and the (potentially erroneous) model-generated prefixes at test time.",
"The authors hypothesize that exposure bias leads to an over-reliance on target history, and show that Minimum Risk Training (MRT), which does not suffer from exposure bias, reduces hallucinations.",
"However, they did not directly measure this overreliance on target history.",
"Our method is able to directly test whether there is indeed an over-reliance on the target history with MLE-trained models, and more robust inclusion of source context with MRT.",
"We also consider a simpler heuristic, word dropout, which we hypothesize to have a similar effect.",
"Minimum Risk Training (Shen et al., 2016) is a sentence-level objective that inherently avoids exposure bias.",
"It minimises the expected loss (risk') with respect to the posterior distribution: R ( ) = (cid:88) ( x,y ) (cid:88) y Y ( x ) P ( y | x, )( y, y ) , where Y ( x ) is a set of candidate translations for x , ( y, y ) is the discrepancy between the model prediction y and the gold translation y (e.g., a negative smoothed sentence-level BLEU).",
"More details on the method can be found in Shen et al. (2016) or Edunov et al. (2018); training details for our models are in the appendix.",
"Word Dropout is a simple data augmentation technique.",
"During training, it replaces some of the tokens with a special token (e.g., UNK) or a random token (in our experiments, we replace 10% of the tokens with random).",
"When used on the target side, it may serve as the simplest way to alleviate exposure bias: it exposes a model to something other than gold prefixes.",
"This is not true when used on the source side, but for analysis, we consider both variants.",
"We consider two types of prefixes: model-generated and random.",
"Random prefixes are our main interest here.",
"We feed prefixes that are flu-ent but unrelated to the source and look whether a model is likely to fall into a language modeling regime, i.e., to what extent it ignores the source.",
"For model-generated prefixes, we do not expect to see large differences in contributions: this mode is easy' for the model and the source contributions are high (see Section 4.3).",
"The results are shown in Figures 4 and 5.",
"Model-generated prefixes.",
"MRT causes more prominent changes in contributions (Figure 4).",
"see the largest difference in the beginning and the end of the generation process, which may be expected when comparing models trained with token-level and sequence-level objectives.",
"The direction of change, i.e. decreasing influence of source, is rather unexpected; we leave a detailed investigation of this behavior to future work.",
"For word dropout, changes in the amount of contributions are less noticeable; we see, however, that target-side word dropout makes the model more confident in the choice of relevant source tokens (Figure 4b).",
"Random prefixes.",
"We see that, among all models, the MRT model has the highest influence of source (Figure 5a) and the most focused source contributions (Figure 5b).",
"This agrees with our expectations: by construction, MRT removes exposure bias completely.",
"Therefore, it is confused by random prefixes less than other models.",
"Additionally, this also links to Wang and Sennrich (2020) who showed that MRT reduces hallucinations.",
"When using word dropout, both its variants also increase the influence of source, but to a much lesser extent (Figure 5a).",
"As expected, since target-side word dropout slightly reduces exposure bias (in contrast to source-side word dropout), it leads to a larger increase of source influence.",
"Experiments in this section highlight that the methodology we propose can be applied to study exposure bias, robustness, and hallucinations, both in machine translation and more broadly for other language generation tasks.",
"In this work, however, we want to illustrate more broadly the potential of this approach.",
"In the following, we will compare models trained with varying amounts of data and will look into the training process.",
"In this section, we show how the results from Section 4 change when increasing the amount of train-Figure",
"train-Figure 6:",
"(a) source contribution,",
"(b) entropy of source contributions.",
"The arrows show the direction of change when increasing data amount.",
"(For clarity, in",
"(a) the last two positions (punct. and EOS) are not shown).",
"ing data.",
"The observed patterns are the same when evaluating on datasets with reference translations or the ones generated by the corresponding model (in each case, all sentences in the evaluation dataset have the same length).",
"In the main text, we show figures for references.",
"More data = higher source contribution.",
"Figure",
"6(a) shows the source contribution at each generation step.",
"We can see that, generally, models trained with more data rely on source more heavily.",
"Surprisingly, this increase is not spread evenly across positions: at approximately 80% of the target length, models trained with more data use source more, but at the last positions, they switch to more actively using the prefix.",
"More data = more focused contributions.",
"Figure",
"6(b) shows that at each generation step, entropy of source contributions decreases with more data.",
"This means that with more training data, the model becomes more confident in the choice of important tokens.",
"In the appendix, we show that this is also the case for target contributions.",
"Now we turn to analyzing the training process of an NMT model.",
"Specifically, we look at the changes in how the predictions are formed: changes in the amount of source/target contributions and in the entropy of these contributions.",
"Our findings are summarized in Figure 7.",
"In the following, we explain them in more detail.",
"In Section 7.1, we draw connections between our training stages (shown in Figure 7) and the ones found in previous work focused on validating the lottery ticket hypothesis.",
"Contributions converge early.",
"First, we evaluate how fast the contributions converge, i.e., how quickly a model understands which tokens are the most important for prediction.",
"For Figure 7: Training timeline.",
"this, at each generation step t we evaluate the KL divergence in token influence distributions ( R t ( x 1 ) , . . . , R t ( x S ) , R t ( y 1 ) , . . . , R t ( y t 1 )) from the final converged model to the model in training.",
"Figure",
"8(a) shows that contributions converge early.",
"After approximately 12k batches, the model is very close to its final state in the choice of tokens to rely on for a prediction.",
"Changes in training are not monotonic.",
"Figures 8(b-d) show how the amount of source contribution and the entropy of source and target contributions change in training.",
"We see that all three figures have the same distinct stages (shown with vertical lines).",
"First, source influence decreases, and both source and target contributions become more focused.",
"In this stage, most of the change happens (Figure",
"8(a)).",
"In the second stage, the model also undergoes substantial change, but all processes change their direction: source influence increases and the model learns to rely on broader context (entropy is increasing).",
"Finally, in the third stage, the direction of changes remains the same, but very little is going on the model slowly converges.",
"These three stages correspond to the first three stages shown in Figure 7; at this point, the model trained on 1m sentence pairs converges.",
"With more data (e.g., 20m sentence pairs), we further observed the next stage (the last one in Figure 7), where the entropy of both source and target contributions is decreasing again.",
"However, this last stage is much slower than the third, and the final state does not differ much from the end of the third stage.",
"Early positions change more.",
"Figures 9(a-b) show how source contributions and their entropy changes for each target position.",
"We see that earlier positions are the ones that change most actively: at these positions, we see the largest decrease at the first stage and the largest following increase at the subsequent stages.",
"If we look at how accuracy for each position changes in training (Figure 10), we see that at the end of the first stage, early tokens have the highest accuracy.",
"7 This is not surprising: one could expect early positions to train faster because they are observed more frequently in training.",
"Previously such intuition motivated the usage of sentence length as one of the criteria for curriculum learning (e.g., Kocmi and Bojar (2017)).",
"Interestingly, our stages in Figure 7 agree with the ones found by Frankle et al. (2020) for ResNet-20 trained on CIFAR-10 when investigating, among other things, the lottery ticket hypothesis (Frankle and Carbin, 2019).",
"Their stages were defined based on the changes in gradient magnitude, in the weight space, in the performance, and in the effectiveness of rewinding in search of the winning' subnetwork (for more details on the lottery ticket hypothesis 7 Accuracy is the proportion of cases where the correct token is the most probable choice.",
"and the rewinding, see the work by Frankle et al. (2019)).",
"Comparing the stages by Frankle et al. (2020) with ours, we see that (1) their relative sizes in the corresponding timelines match well, (2) the rewinding starts to be effective at the third stage; for our model, this is when the contributions have almost converged.",
"In future work, it would be interesting to further investigate this relation.",
"To estimate the influence of source to an NMT prediction, Ma et al. (2018) trained an NMT model with an auxiliary second decoder where the encoder context vector was masked.",
"Then the source influence was measured as the KL divergence between predictions of the two decoders.",
"However, the ability of an auxiliary decoder to generate similar distribution is not equivalent to the main model not using source.",
"More recently, as a measure of individual token importance, He et al. (2019a) used Integrated Gradients (Sundararajan et al., 2017).",
"In machine translation, LRP was previously used for visualization (Ding et al., 2017) and to find the most important attention heads in the Transformer's encoder (Voita et al., 2019).",
"Similar to our work, Voita et al. (2019) evaluated LRP on average over a dataset (and not for a single prediction) to extract patterns in model behaviour.",
"Both works used the more popular -LRP, while for our analysis, the LRP was more suitable (Section 2).",
"For language modeling, Calvillo and Crocker (2018) use LRP to evaluate relevance of neurons in RNNs for a small synthetic setting.",
"We show how to use LRP to evaluate the relative contributions of source and target to NMT predictions.",
"We illustrate the potential of this approach by analyzing changes in these contributions when conditioning on different types of prefixes (refer-ences, model predictions or random translations), when varying training objectives or the amount of training data, and during the training process.",
"Some of our findings are: (1) models trained with more data rely on source information more and have more sharp token contributions; (2) the training process is non-monotonic with several distinct stages.",
"These stages agree with the ones found in previous work focused on validating the lottery ticket hypothesis, which suggests future investigation of this connection.",
"Additionally, we show that models suffering from exposure bias are more prone to over-relying on target history (and hence to hallucinating) than the ones where the exposure bias is mitigated.",
"In future work, our methodology can be used to measure the effects of different and novel training regimes on the balance of source and target contributions.",
"We would like to thank the anonymous reviewers for their comments.",
"The work is partially supported by the European Research Council (Titov, ERC StG BroadSem 678254), Dutch NWO (Titov, VIDI 639.022.518) and EU Horizon 2020 (GoURMET, no. 825299).",
"Lena is supported by the Facebook PhD Fellowship.",
"Rico Sennrich acknowledges support of the Swiss National Science Foundation (MUTAMUR; no. 176727)."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"result",
"result",
"other",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"objective",
"objective",
"objective",
"abstain",
"result",
"objective",
"other",
"other",
"other",
"other"
] |
[
"Neural architectures are the current state of the art in Word Sense Disambiguation (WSD).",
"However, they make limited use of the vast amount of relational information encoded in Lexical Knowledge Bases (LKB).",
"We present Enhanced WSD Integrating Synset Embeddings and Relations (EWISER), a neural supervised architecture that is able to tap into this wealth of knowledge by embedding information from the LKB graph within the neural architecture, and to exploit pretrained synset embeddings, enabling the network to predict synsets that are not in the training set.",
"As a result, we set a new state of the art on almost all the evaluation settings considered, also breaking through, for the first time, the 80% ceiling on the concatenation of all the standard all-words English WSD evaluation benchmarks.",
"On multilingual all-words WSD, we report state-of-the-art results by training on nothing but English.",
"There is a growing body of research dealing with the integration of prior knowledge into neural networks for Natural Language Processing (NLP) tasks, be it through pretraining on self-supervised tasks such as language modeling (Peters et al., 2018; Devlin et al., 2019), or through the incorporation of information from knowledge bases (Peters et al., 2019; Logan et al., 2019).",
"In Word Sense Disambiguation (WSD), i.e., the task of associating a word in context with the most appropriate meaning from a finite set of possible choices (Navigli, 2009), the gap between supervision and knowledge (Nav-igli, 2018) has been overcome by several efforts directed at learning effective vector representations (Loureiro and Jorge, 2019; Scarlini et al., 2020) in the same space as contextualized embeddings, and exploring the usage of definitional knowledge in supervised sequence learning neural architectures (Luo et al., 2018; Kumar et al., 2019; Huang et al., 2019).",
"However, the Lexical Knowledge Bases (LKBs) from which such information is retrieved, such as WordNet (Miller, 1995) and BabelNet (Navigli and Ponzetto, 2012), also provide a great wealth of relational knowledge in structured form (i.e., hypernymy, meronymy, similarity, etc.), which is often neglected due to the non-trivial integration of data of this kind into neural architectures.",
"Even though such information can, instead, be exploited by knowledge-based WSD algorithms (Agirre and Soroa, 2009; Moro et al., 2014), rivaling supervised pre-contextualized embedding approaches (Maru et al., 2019), the performances still lag behind (Huang et al., 2019; Vial et al., 2019).",
"Building on Extended WSD Integrating Sense Embeddings (EWISE) (Kumar et al., 2019), a neural WSD system incorporating prior knowledge through synset embeddings, we present Enhanced WSD Integrating Synset Embeddings and Relations (EWISER), a hybrid knowledge-based and supervised approach to WSD that integrates explicit relational information from the WordNet LKB.",
"Our approach offers the following contributions: 1. We introduce the novel structured logits mechanism, which enables the exploitation of concept relatedness as determined by LKB edges.",
"In our method, pre-softmax scores are a weighted combination of synset-specific scores, and can be computed via dot product with a sparse adjacency matrix.",
"2. We generalise the sense vector dot product technique from EWISE, showing that off-the-shelf pretrained embeddings can be used.",
"3. We show that the structured logits mechanism and the use of sense embeddings are orthogonal and can be exploited jointly.",
"Our approach is simple and extensible, does not require fine tuning of contextualized embeddings, and has a very modest parameter budget apart from synset embeddings.",
"EWISER achieves a new state of the art in all-words English WSD.",
"Moreover, we obtain state-of-the-art performances on the cross-lingual all-words WSD evaluation, without using non-English training data.",
"Supervised WSD Supervised systems have to rely on expensive hand-labeled data to achieve good results (Pasini, 2020).",
"The best approaches currently rely on neural networks.",
"The model presented by Raganato et al. (2017) formulates the task as a token classification problem, with an LSTM with attention classifier producing a probability distribution over both words and senses.",
"Subsequent work has shown that better results can be obtained by only having scores for senses or synsets (Vial et al., 2019).",
"Shallower, simpler networks can achieve even better performances (Uslu et al., 2018).",
"Contextualized vectors can be exploited in token tagging architectures (Vial et al., 2019; Bevilacqua and Navigli, 2019; Hadiwinoto et al., 2019).",
"However, purely supervised systems are dependent on the data they are trained on, therefore when some sense is underrepresented in the training corpus it is not easy for them to predict it.",
"LKBs in Supervised WSD More closely related to the core of our contribution, LKB information, such as natural language definitions of word meaning, can be exploited in neural token tagging architectures.",
"For example, in GlossBERT (Huang et al., 2019) a pretrained BERT encoder is fed both the context sentence and the gloss, and is trained to predict whether the gloss correctly describes the use of the target word.",
"Successful results have been obtained by encoding glosses in dense vectors (Luo et al., 2018).",
"In EWISE (Kumar et al., 2019), WSD is performed in a two-step process: first, gloss embeddings are produced through a training procedure that also takes into account the WordNet's graph structure; then, the gloss embeddings are scored via dot product with a contextual vector computed with an LSTM model, which is trained through regular categorical cross-entropy.",
"Our work builds on top of EWISE in that it generalizes its sense vector dot product approach, but features a novel mechanism that injects relational knowledge into the architecture through a simple additional sparse dot product operation.",
"Moreover, we show that better performances can be obtained by training the output embedding matrix, and that different sense/synset vectors can be used to initialize the output embeddings.",
"Note that our approach is different from that of Vial et al. (2019), in that we do not conflate senses together through the use of WordNet hypernymy; rather, we mantain all the original meaning distinctions, and exploit the logit scores over the full vocabulary in a second, distinct step.",
"WSD can be treated as a simple token classification problem, similar to POS tagging or Named Entity Recognition.",
"As such, abstracting away from all the intricacies of any particular supervised model, we need to produce a vector representation h R d of a target word in a given context, and use it to yield a probability distribution over all its possible labels, i.e., its senses or synsets.",
"The simplest way to do this is to learn a weight matrix O R d | (cid:86) | , where (cid:86) is the output vocabulary 1 , and compute a vector of unnormalized scores z as the product of h T and O .",
"Having multiple instances to classify packed into the matrix H , we can compute all the scores at the same time by a single dot product followed by a sum over columns with a bias vector: Z = HO + b (1) Finally, Z is transformed into a probability distribution through a standard softmax activation function.",
"Typically, O is randomly initialized, and just trained end-to-end with the rest of the architecture (Raganato et al., 2017; Vial et al., 2019; Bevilacqua and Navigli, 2019).",
"During training the categorical cross-entropy loss is computed for each instance Z i .",
"At inference time, the model predicts the synset s with the highest probability among the set S ( w i ) (cid:86) of possible synsets for word w i : s i = argmax s S ( w i ) Z i,s (2) where, for each w i , S ( w i ) depends on both the lemma and its part-of-speech, and is determined by the WordNet inventory.",
"1 We use synsets as output vocabulary.",
"We now describe a simple neural WSD architecture to be used as the core on top of which we will integrate the EWISER additions.",
"For each word to disambiguate, our network takes as input the sum of the outputs of the last 4 layers of BERT Large (cased) and uses a 2-layer feedforward to compute the logit scores Z : B = B 4 + B 3 + B 2 + B 1 H 0 = BatchNorm ( B ) H 1 = swish ( H 0 W + b ) Z = H 1 O (3) where W , b are parameters of the models, and B 4 to B 1 are BERT hidden states 2 .",
"We employ the swish activation function (Ramachandran et al., 2018), which has shown very promising results in NLP (Eger et al., 2018).",
"Note that, while our architecture is very simple, it would be straightforward to incorporate powerful additions such as a sequence encoder like an LSTM or a Transformer (Vaswani et al., 2017) classifier.",
"While this might indeed produce better performances, improvements of this kind are not directly pertinent to our contribution.",
"The matrix multiplication in Equation 1 is wasteful during both training and inference, as it produces scores over the entire vocabulary (cid:86) , even though the number of possible synsets is much smaller than the cardinality of (cid:86) .",
"Since the model is equally penalized by the cross-entropy loss when it gives a high score to a synset either related or unrelated to the correct one, there is little incentive to learn similar vectors for related synsets.",
"Moreover, computing logits over the whole vocabulary does not bring any benefit in inference, as each score is computed independently, without taking into account connections between output classes.",
"We address this issue by devising an architecture, i.e., EWISER, that can inject into the network relatedness knowledge as encoded in an arbitrary graph, and use it in training as well as in inference.",
"As LKBs are structured into graphs, we want to be able to exploit, when computing the probability",
"distribution vector over (cid:86) for a target word, the explicit information of an arbitrary weighted graph G = (cid:104) V, E, w (cid:105) , where w : E R , and the vertices V = (cid:86) i.e., the nodes are synsets.",
"Instead of using the vector z for prediction, we compute another vector q where for each component, i.e..",
"for each synset s , the score synset q s is a function of both the hidden score z s for s , and the hidden scores z s (cid:48) for all synsets s (cid:48) such that there is an edge (cid:104) s (cid:48) , s (cid:105) E .",
"In order to do this, we calculate q s as z s plus the sum of the products of z (cid:48) s and the weight of the edge (cid:104) s (cid:48) , s (cid:105) .",
"As a result, q s is a weighted combination of the scores for all the output vocabulary.",
"In Figure 1 we show this process visually.",
"The most natural way to encode the graph G is with the adjacency matrix A , in which A s 1 s 2 = w ( (cid:104) s 1 , s 2 (cid:105) ) .",
"If A s 1 s 2 = 0 there is no edge between the two synsets.",
"The new logits matrix Q can be obtained efficiently by simply computing the dot product between the hidden logits Z and the transposed adjacency matrix AT , summing Z to the results.",
"Z = HO + b Q = ZAT + Z (5) Finally, we apply the softmax function to Q to get the probabilities.",
"In our case, we build the graph and adjacency matrix A from the relations between synsets or senses in WordNet.",
"As WordNet relations are not weighted, for every synset s we set A s (cid:48) ,s to 1 /N , where N is the number of incoming connections.",
"In this way we avoid imbalanced predictions towards synsets with more incoming connections.",
"We experiment with including different relations in A .",
"Our base configuration includes similarity , verb group , and derivationally related 3 edges.",
"As for hypernymy and its inverse, hyponymy , we experiment with different possible ways of including them in A :",
"(i) including only hypernymy ( hyper );",
"(ii) only hyponymy ( hypo );",
"(iii) both hypernymy 3 We connect two synsets with a derivationally related edge if at least one pair of senses therein is connected via a derivationally related edge.",
"and hyponymy ( hyper+hypo );",
"(iv) the transitive closure over hypernymy (the set of relations that are obtained by following hypernymy paths) ( hyper* );",
"(v) the transitive closure over hypernymy and hyponymy ( hyper+hypo* ); Informally, hypernymy and hyponymy correspond to different kinds of reasoning, which might be characterized as, respectively, inductive (if it is an electronic device, then it might be a mouse) and deductive (if it is a mouse, then it is an electronic device).",
"The closures are a way to flatten the hierarchy, thus enabling multi-hop reasoning by making the q s score dependent on the z scores for synsets whose path distance to s is greater than 1 in the original graph.",
"Fine-tuning the adjacency matrix If weights in A are frozen, every connected synset gives an equal contribution to the final score q s .",
"However, it is also reasonable to assume that not all synsets are equally relevant.",
"For example, the score for inanimate object should be less relevant than that for device for predicting the hardware meaning of mouse .",
"Thus, we experiment on fine-tuning A by only updating non-zero weights.",
"While O can be seen as just the final linear map in the network, it is also reasonable to think about it as a counterpart of an embedding matrix.",
"Whereas in the intermediate layers of the neural network there is no one-to-one mapping between values of the matrix and input or output classes, in O there is a distinct column for each of the elements in (cid:86) .",
"As a matter of fact, the logit of synset s ( z s ) is just the scalar product between h and OT s , i.e., the column in O associated with s .",
"So, just as with word embeddings, O can be seen as a collection for vector representations that have one-to-one mappings to output classes.",
"Thus, it is possible to use synset embeddings to provide a better initialization for O than random.",
"This idea has already been exploited by EWISE (Kumar et al., 2019), in which logit scores over (cid:86) are computed by dot product between the hidden vector h and the gloss embedding vector g ( s ) as follows: z s = h T g ( s ) + b T g ( s ) (6) where b is a learned bias vector.",
"Note that if we pack the synset gloss vector g ( s ) for every s (cid:86) into the O matrix, this looks almost identical to the canonical linear layer in Eq.",
"1, with the only difference being the fact that the bias is now the result of the dot product between b and O , rather than being directly parametrized as a vector R | (cid:86) | .",
"In EWISE, the sense embeddings are learned independently from the WSD system and kept frozen during training.",
"It is worth exploring whether better results can be achieved by allowing further refining of the weights during training.",
"We expect initialization and freezing (which we refer to as, respectively, O -init and O -freeze) to have different effects depending on whether the gold synset is found in the training set.",
"If weights are initialized and then updated during training, the columns in O corresponding to unattested synsets will only receive a nega-tive signal from the cross-entropy loss; conversely, attested synsets can be further refined and predicted more accurately.",
"If weights are frozen, the architecture will have to accommodate to the pretrained synset representations, meaning that, especially if there is no learned bias, it will be easier to predict unseen classes.",
"No fine-tuning may, however, result in diminished performance, as the pre-trained synset representations are not tailored to WSD.",
"An additional possibility to achieve better transfer between the information in the embeddings and the WSD system is to use a freeze-then-thaw scheme, similar to the chain-thaw method of Howard and Ruder (2018).",
"The approach entails training an O -freeze model, restoring the best checkpoint, and then doing further training with O thawed, i.e., with trainable weights.",
"We assess the performance of EWISER in all-words English WSD, against both a simple but competitive baseline, i.e., the simple feedforward network taking BERT hidden states as input described in Section 3.2, and state-of-art approaches.",
"We first experiment separately on the integration of explicit relational information through structured logits (Section 4.1), and the integration of synset embeddings through the initialization of O (Section 4.2).",
"Then, building on the results of these experiments, we evaluate the full EWISER architecture (Section 4.3).",
"Finally, we assess our approach on cross-lingual WSD (Section 4.4), training on English and evaluating on French, German, Italian and Spanish.",
"As explained in Section 3.3.2, in EWISER, relational knowledge is integrated through a dot product between the logits matrix Z and the transposed adjacency matrix AT .",
"We perform experiments with different configurations that vary according to which edges are included in A .",
"We experiment with the edge sets which are listed in Section 3.3.3.",
"For each configuration we evaluate two different training runs, one in which A is frozen ( A -freeze ), and the other where edge weights are trained ( A -train ).",
"We contrast the per-Model Arch.",
"We train the baseline and the configurations under comparison on SemCor (Miller et al., 1994) for 20 epochs, with a batch size of 4000 tokens.",
"We do not employ sentences as context.",
"Rather, we split documents in chunks of at most 100 tokens.",
"The hidden size of the 2 -layer feedforward is 512 , with a dropout value of 0 .",
"2 .",
"The optimizer is Adam (Kingma and Ba, 2015), which we employ with a learning rate of 10 4 .",
"Following Bevilacqua and Navigli (2019), we select as development set (to select the best epoch) the SemEval-2015 dataset (Moro and Navigli, 2015).",
"As customary, we report the results on the concatenation ( ALL ) of all the evaluation datasets from Senseval-2 (Edmonds and Cotton, 2001), Senseval-3 (Snyder and Palmer, 2004), SemEval-2007 (Pradhan et al., 2007), SemEval-2013 (Navigli et al., 2013), and the aforementioned SemEval-2015.",
"In addition, we report performances on ALL with all instances from the development set removed ( No15 ), and on the subset of No15 whose gold synsets do not appear in SemCor ( No15 ).",
"We report in Table 1 the results of the experiments on the addition of structured logits to the baseline architecture.",
"As can be seen, the use of hypernyms brings the biggest gain to performances, with the strongest improvement against the baseline reported with simple hypernymy and fine-tuning of A : 1 .",
"7 points on ALL and 1 .",
"6 on No15.",
"The closures, i.e., hyper* and hyper+hypo*, do not seem to be very beneficial, achieving slightly worse results than the simple counterpart.",
"Much of the improvement seems to come from the increased performance of the unseen split No15 where the gold is not in SemCor, with an absolute improvement of 7 .",
"6 points with hypernymy edges and no fine-tuning, and of 7 points with hypernymy edges and fine-tuning.",
"Fine-tuning A makes for better results than keeping the weights of the adjacency matrix fixed on both ALL and No15, but results in slight-to-moderate decreases on No15 , as the network is able to adjust the weights in order to bring down the q scores for unseen synsets.",
"As in EWISE, in EWISER logits are computed by a dot product between a matrix of hidden scores and output synset embeddings.",
"However, we do not train our own synset embeddings: rather, we employ off-the-shelf vectors.",
"In this section we evaluate the performance of different options both in the choice of the embeddings and in how they are integrated into the network.",
"We contrast the performance with our baseline, in which the O matrix is randomly initialized and the embeddings are trained.",
"We experiment with different options for the initialization of O :",
"Deconf 300 d We use the 300-dimensional vectors released by Pilehvar and Collier (2016), which are built from Word2Vec Google news word embeddings.",
"LMMS 2048 d We use the 2048-dimensional vectors produced by Loureiro and Jorge (2019), built as the concatenation of BERT Large cased states' centroids for instances in SemCor with the synset gloss vector, computed from BERT Large states as well.",
"We normalize the vectors to unit length.",
"Since LMMS vectors are quite big, we reduce the number of dimensions to 512 with truncated SVD.",
"SensEmBERT+LMMS 2048 d SensEmBERT (Scarlini et al., 2020) enhances LMMS by exploiting BabelNet and Wikipedia.",
"SensEmBERT only includes nouns, but its vectors are in the same space as LMMS, so we use the former in combination with verbs, adjectives and adverbs from the latter.",
"We employ the same preprocessing as with LMMS.",
"For each sense embedding system, we report results with four different training schemes: plain initialization ( O -init ); initialization and freezing ( O -freeze ); restore the best O -freeze, then thaw the weights of O ( O -thaw ); the same as for O -thaw, but reducing the learning rate to 10 5 ( O -thaw* ).",
"In all cases, synset embeddings are computed as the centroid of the senses contained in the synset.",
"We train our baseline and O -init models for 20 epochs.",
"The O -freeze model, which is much slower to converge, is trained for a maximum of 80 epochs.",
"O -thaw and O -thaw* are trained for 10 epochs.",
"The data on which we train and report the performances are the same as in Section 4.1.2.",
"We report in Table 2 the results of the evaluation of the use of synset embeddings for the initialization of the O output embeddings matrix.",
"In general, the approach enables much better F1 scores compared to the baseline, but is very dependent on the quality of the embeddings, and on whether they incorporate supervision from SemCor.",
"When using Deconf, which uses the WordNet graph to deconflate word-level Word2Vec vectors, with no use of training corpora, the O -freeze strategy produces the best result on No15 , i.e., 72 .",
"2 , with an absolute increase of 20 points over the baseline.",
"However, O -freeze with Deconf also achieves the worst result on both ALL and No15, indicating that some form of biasing towards the most frequent synsets, which is an effect of corpus supervision, is required for the global evaluation.",
"Fine-tuning O enables the model to obtain a decent S G G + E System ALL No15 No15 S2 S3 S7 S13 S15 N V A R (cid:88) (cid:88) -Kumar et al. (2019) 71.8 70.9* -73.8 71.1 67.3 69.4 74.5 74.0 60.2 78.0 82.1 (cid:88) (cid:88) -Loureiro and Jorge (2019) 75.4 75.2* -76.3 75.6 68.1 75.1 77.0 --(cid:88) -Hadiwinoto et al. (2019) 73.7* 73.2* -75.5 73.6 68.1 71.1 76.2 --(cid:88) (cid:88) -Huang et al. (2019) 77.0 (cid:63) 76.2* -77.7 75.2 72.5 76.1 80.4 --(cid:88) (cid:88) -Scarlini et al. (2020) Sup.",
"F1 score, with the exception of O -thaw*, where the training run was underfitting.",
"With LMMS, higher results are obtained, especially when freezing the weights.",
"SensEmBERT with the LMMS backoff achieves the best results on both ALL and No15, with O -thaw* reaching at least 76 .",
"6 on ALL and No15.",
"Probably due to the fact that SensEmBERT relies less on the supervision from SemCor, very strong results are obtained on No15 as well, with a margin of over 12 points above the baseline.",
"As for the training scheme adopted, the best results are obtained from the freeze-then-thaw strategy with learning rate reduction ( O -thaw*) and from the simple freezing of O .",
"Thawing consistently raises the accuracy on ALL and No15, but lowers it on No15 , meaning that the fine-tuning of O shifts the balance of the trade-off between performances on seen and unseen synsets to the benefit of the former.",
"O -init still improves over the baseline, but is less effective than its alternatives.",
"Bringing everything together, we now evaluate the joint exploitation of the O initialization and structured logits in EWISER.",
"Building on the results of the previous experiments, we limit the number of model variants by only including the configurations that separately yielded the best results, namely:",
"(i) the use of hypernyms (EWISER hyper ) or hypernyms plus hyponyms (EWISER hyper + hypo ) in the graph encoded in A , training the adjacency matrix, and",
"(ii) the combination of SensEmBERT and LMMS for the output embeddings, trained according to the O thaw* scheme, i.e., the freeze-then-thaw approach, with the learning rate set to 10 5 .",
"In order to make the results of EWISER comparable to those of the state-of-the-art approaches to WSD, we report results when training not only on SemCor ( S ), but also on the union of SemCor and untagged WordNet glosses ( G ), and on the union of SemCor, tagged WordNet glosses ( G + ), and WordNet examples ( E ) as well.",
"When training on glosses, we prepend the lemma of the main sense and a semicolon to the raw gloss, and treat the added word as a tagged instance.",
"We evaluate the model on the datasets mentioned in Section 4.1.2.",
"In Table 3 we report the results of the unified evaluation.",
"In addition to our systems, we include in the comparison the best systems from the literature, grouping the two sets together in two internally comparable blocks:",
"(i) systems trained on SemCor, possibly making use of LKB information such as untagged glosses or the WordNet graph;",
"(ii) systems that also make use of tagged glosses and examples;",
"(iii) the best performing knowledge-based systems.",
"In almost every setting compared, EWISER outperforms the previous state of the art.",
"Among systems in the first block (S/G) EWISER hyper + hypo trained on S+G obtains the best results on all the datasets except for SemEval-2015, with a margin over the two best performing systems, i.e., GlossBERT and the ensemble of 8 models of Vial et al. (2019), of, respectively, 1 .",
"3 and 1 .",
"6 points on ALL, and of 2 .",
"0 and 1 .",
"7 on No15, which does not include our dev set.",
"Even if they do not train on untagged glosses, both EWISER hyper and EWISER hyper + hypo show comparable performances to GlossBERT on ALL, and better on No15 without fine-tuning BERT, and with much less compute power required.",
"The results on No15 , where EWISER hyper + hypo with glosses achieves an F1 of 69 .",
"1 , almost 10 points more than when not using them, show that definitional knowledge is beneficial for the zero-shot setting.",
"Adding tagged glosses and WordNet examples further boosts performances, with the best configuration, EWISER hyper , breaking through the 80 points ceiling on ALL, an estimated upper bound on human inter-annotator agreement that is often quoted as the glass ceiling for WSD performance (Navigli, 2009).",
"The only model we can compare with, i.e., the one of Vial et al. (2019), is outperformed on every dataset except for SemEval-2015.",
"On ALL and No15, however, we outscore the competitor by a margin of 1 .",
"1 and 1 .",
"4 points, establishing a new state of the art in English all-words WSD.",
"The bigger training set improves performances on No15 , though the gap is not quite closed.",
"Not surprisingly, even the best knowledge-based systems do not offer competitive performances, since they cannot take advantage of training corpus supervision.",
"To see whether the strong performances of EWISER carry over to the multilingual setting, we retrain the best global configuration, i.e., EWISER hyper trained on SemCor, WordNet's tagged glosses and usage examples, with BERT multilingual cased.",
"We compare our system against",
"(i) the state of the art in multilingual WSD, i.e. SensEmBERT, which can, however, only disambiguate nouns;",
"(ii) the best performing all-PoS system, i.e. SyntagRank (Scozzafava et al., 2020), a knowledge-based system;",
"(iii) the feedforward baseline.",
"We report results on the French, German, S13 S15 DE ES FR IT ES IT Scozzafava et al. (2020) 76.4 74.1 70.3 72.1 63.4 69.0 Scarlini et al. (2020) 79.2* 73.4* 77.8* 69.8* -Ours (baseline) 81.7 76.6 80.8 77.2 67.3 70.6 Ours (EWISER) 80.9 78.8 83.6 77.7 69.5 71.8 Table 4: Evaluation of the joint use of structured logits and O -thaw* on cross-lingual WSD.",
"Italian and Spanish all-words evaluation datasets from SemEval-2013, which contain only nouns, and the Italian and Spanish datasets from SemEval-2015, which contain all PoS.",
"We use the revised version of the evaluation datasets 4 , which is updated to be consistent with the 4.0.1 release of the BabelNet graph.",
"As a result, we can test on a larger number of instances than previously possible.",
"We show the results in Table 4.",
"As can be seen, we outperform SensEmBERT in the four datasets from SemEval-2013, sometimes by a large margin, i.e., by almost 8 points on the Italian dataset.",
"On SemEval-2015 we outperform SyntagRank by 6 .",
"1 points on the Spanish dataset and by 2 .",
"8 points on Italian one.",
"We also show noticeable improvements over the baseline in 5 out of 6 benchmarks.",
"The evaluation demonstrates that the EWISER approach is robust in the cross-lingual setting as well, outperforming competitors across the board and setting a new state of the art.",
"Moreover, the results provide the empirical grounds for believing that, in addition to the results achieved in the languages featured in the evaluation datasets, comparable fig-ures could also be attained for other languages, at least for several European ones.",
"In this section we provide a qualitative analysis of our approach.",
"Specifically, we are interested in the capability of the model to predict unseen synsets, thanks to the prior knowledge that is encoded in both the output embeddings O and the adjancency matrix A .",
"Consider the following sentences: (1)",
"a. Corporate debt defaults predicted to increase.",
"b. Though people are free to change the default , they usually don't.",
"In Table 5 we report the predictions for the target default in sentences (1a) and (1b) of our best sys-4 github.com/SapienzaNLP/mwsd-datasets .",
"tem trained on SemCor only, i.e., EWISER hyper .",
"In both cases, the correct synsets, respectively, default.n.02/nonpayment.n.02 and default option.n.01 , are not in the training set.",
"However, the model is still able to give the correct answer.",
"In the first case, the embedding intialization is enough to predict nonpayment.n.02 (with default.n.02 having the second highest score), as its score in z is already the highest among possible predictions.",
"In the latter, it is the contribution from the synset pointing to default option.n.01 , i.e., option.n.02 , that enables the network to make the correct prediction.",
"However, we must note that the model still over-relies on corpus supervision.",
"Because of this, even though our best overall model, i.e., EWISER hyper trained on SemCor, tagged glosses and examples, is able to distinguish and predict correctly the two well-attested mathematical meanings of root as equation solution and root as the number x such that y = x 2 in sentences (2a) and (2b) below, it is not able to correctly detect the tooth sense of root (2c), which never occurs in SemCor: (2)",
"a. The n roots of a polynomial of degree n depend continuously on the coefficients.",
"b. The root of 4 is 2.",
"c. There's no need to be worried if your dentist prescribes a root canal procedure.",
"Thus, while the EWISER model is indeed very effective, with the best configuration outdoing the upper bound on inter-annotator agreement, we are still far from having solved the task.",
"We presented EWISER, a new neural WSD architecture that, by embedding information from the WordNet graph within the neural architecture, can also make use of the relational information that is usually only exploited by knowledge-based systems.",
"Thanks to the joint exploitation of the WordNet graph and to the use of pretrained synset embeddings, EWISER is able to predict meanings which are not found in the training set, thus mitigating the knowledge acquisition bottleneck.",
"On almost all the evaluation settings, our system beats the previous state of the art.",
"Most notably, our model is the first to break through the 80 F1 ceiling on the overall evaluation, the estimated upper bound on the task.",
"On the multilingual setting, even with no training data besides the English corpora, EWISER sets the new state of the art.",
"We leave it as future work to explore ways to raise accuracy on unseen synsets without harming performances on frequent synsets.",
"We release the code used in the experiments, as well as pretrained models at github.com/SapienzaNLP/ewiser .",
"The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487 under the European Union's Horizon 2020 research and innovation programme.",
"This work was supported in part by the MIUR under the grant Dipartimenti di eccellenza 2018-2022 of the Department of Computer Science of the Sapienza University of Rome."
] | [
"abstain",
"abstain",
"method",
"objective",
"result",
"other",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"other",
"other"
] |
[
"Conversational context information, higher-level knowledge that spans across sentences, can help to recognize a long conversation.",
"However, existing speech recognition models are typically built at a sentence level, and thus it may not capture important conversational context information.",
"The recent progress in end-to-end speech recognition enables integrating context with other available information (e.g., acoustic, linguistic resources) and directly recognizing words from speech.",
"In this work, we present a direct acoustic-to-word, end-to-end speech recognition model capable of utilizing the conversational context to better process long conversations.",
"We evaluate our proposed approach on the Switchboard conversational speech corpus and show that our system outperforms a standard end-to-end speech recognition system.",
"Many real-world speech recognition applications, including teleconferencing, and AI assistants, require recognizing and understand long conversations.",
"In a long conversation, there exists the tendency of semantically related words or phrases reoccur across sentences, or there exists topical coherence.",
"Thus, such conversational context information, higher-level knowledge that spans across sentences, provides important information that can improve speech recognition.",
"However, the long conversations typically split into short sentence-level audios to make building speech recognition models computationally feasible in current state-of-the-art recognition systems (Xiong et al., 2017; Saon et al., 2017).",
"Over the years, there have been many studies have attempted to inject a longer context information into language models.",
"Based on a recurrent neural network (RNNs) language models (Mikolov et al., 2010), (Mikolov and Zweig, 2012; Wang and Cho, 2015; Ji et al., 2015; Liu and Lane, 2017; Xiong et al., 2018), proposed using a context vector that would encode the longer context information as an additional network input.",
"However, all of these models have been developed on text data, and therefore, it must still be integrated with a conventional acoustic model which is built separately without a longer context information, for speech recognition on long conversations.",
"Recently, new approaches to speech recognition models integrate all available information (e.g. acoustic, linguistic resources) in a so-called end-to-end manner proposed in (Graves et al., 2006; Graves and Jaitly, 2014; Hannun et al., 2014; Miao et al., 2015; Bahdanau et al., 2014; Chorowski et al., 2014, 2015; Chan et al., 2015; Kim et al., 2017).",
"In these approaches, a single neural network is trained to recognize graphemes or even words from speech directly.",
"Especially, the model using semantically meaningful units, such as words or sub-word (Sennrich et al., 2015), rather than graphemes have been showing promising results (Audhkhasi et al., 2017b; Li et al., 2018; Soltau et al., 2016; Zenkel et al., 2017; Palaskar and Metze, 2018; Sanabria and Metze, 2018; Rao et al., 2017; Zeyer et al., 2018).",
"In this work, motivated by such property of the end-to-end speech recognition approaches, we propose to integrate conversational context information within a direct acoustic-to-word, end-to-end speech recognition to better process long conversations.",
"Thus far, the research in speech recognition systems has focused on recognizing sentences and to the best of our knowledge, there have been no studies of word-based models incorporating conversational context information.",
"There has been recent work attempted to use the conversational context information from the preceding graphemes (Kim and Metze, 2018), however, it is limited to encode semantically meaningful context representation.",
"Another recent work attempted to use a context information (Pundak et al., 2018), however, their method requires a list of phrases at inference (i.e. personalized contact list).",
"We evaluate our proposed approach on the Switchboard conversational speech corpus (Godfrey and Hol-liman, 1993; Godfrey et al., 1992), and show that our model outperforms the sentence-level end-to-end speech recognition model.",
"We perform end-to-end speech recognition using a joint CTC/Attention-based approach (Kim et al., 2017; Watanabe et al., 2017).",
"The neural network is trained by both CTC (Graves et al., 2006) and Attention-based sequence-to-sequence (seq2seq) objectives (Bahdanau et al., 2014) to combine the strength of the two.",
"With CTC, it preserves left-right order between input and output and with attention-based seq2seq, it learns the language model jointly without relying on the conditional independence assumption.",
"As an output, we use word-level symbols which generated from the bite-pair encoding (BPE) algorithm (Sennrich et al., 2015).",
"This method creates the target units based on the frequency of occurrence in training sets.",
"Similar to (Zeyer et al., 2018; Palaskar and Metze, 2018; Sanabria and Metze, 2018), we use BPE-10k which contains roughly 10k units (9,838), including 7,119 words and 2719 sub-words.",
"In order to use conversational context information within the end-to-end speech recognition framework, we extend the decoder sub-network to predict the output additionally conditioning on conversational context.",
"To do so, we encode the preceding sentence into a single vector, a conversational context vector, then inject to decoder network as an additional input at every output step.",
"Let we have K sentences in a conversation.",
"For k -th sentence, s k , we have T k -length input acoustic feature ( x k ) and U k -length output words.",
"Our proposed decoder generates the probability distribution over words ( y ku ), conditioned on 1) high-level representation ( h k ) of input ( x k ) generated from encoder, and 2) all the words seen previously ( y k 1: u 1 ), and 3) previous decoder state ( d ku 1 ) 4) Figure 1: The architecture of our end-to-end speech recognition model with conversational context information.",
"additionally conditioning on conversational context vector ( c k 1 ), which represents the information of the preceding sentence ( k 1 ): h k = Encoder ( x k ) (1) y ku Decoder ( h k , y k 1: u 1 , d ku 1 , c k 1 ) (2) We represent the context vector, c k 1 , from the preceding sentence in two different ways:",
"(a) mean of word embedding, and",
"(b) attentional word embedding.",
"We first generate one-hot word vectors, and then we simply take the mean over word vectors to obtain a single vector in method",
"(a), or we use attention mechanism over word vectors to obtain the weight over the words and then perform the weighted-sum.",
"The parameter of the attention mechanism is optimized towards minimizing the conversation ID classification error similar to (Kim and Metze, 2018).",
"The context vector is merged with a decoder state at every output step as follows: d ku 1 = tanh( W d ku 1 + V c k 1 + b ) (3) y ku softmax ( LSTM ( d ku 1 , h ku , y k 1: u 1 ))) (4) where W, V, b are trainable parameters.",
"In order to learn and use the conversational-context during training and decoding, we serialize the sentences based on their onset times and their conversations rather than the random shuffling of data.",
"We shuffle data at the conversation level and create mini-batches that contain only one sentence of each conversation.",
"We investigated the performance of the proposed model on the Switchboard LDC corpus (97S62) which has a 300 hours training set.",
"We split the Switchboard data into two groups, then used 285 hours of data (192 thousand sentences) for model training and 5 hours of data (4 thousand sentences) for hyper-parameter tuning.",
"The evaluation was carried out on the HUB5 Eval 2000 LDC corpora (LDC2002S09, LDC2002T43), which have 3.8 hours of data (4.4 thousand sentences), and we show separate results for the Callhome English (CH) and Switchboard (SWB) evaluation sets.",
"We denote train nodup, train dev, SWB, and CH as our training, development, and two evaluation datasets for CH and SWB, respectively.",
"There are 2,402 conversations in training sets and 20 conversations in CH, and 20 conversations in SWB.",
"We sampled all audio data at 16kHz, and extracted 80-dimensional log-mel filterbank coef-ficients with 3-dimensional pitch features, from 25 ms frames with a 10ms frame shift.",
"We used 83-dimensional feature vectors to input to the network in total.",
"We used 9,840 distinct labels: 9,838 word-level BPE units, start-of-speech/end-of-speech, and blank tokens.",
"Note that no pronunciation lexicon was used in any of the experiments.",
"We used joint CTC/Attention end-to-end speech recognition architecture (Kim et al., 2017; Watanabe et al., 2017) with ESPnet toolkit (Watanabe et al., 2018).",
"We used a CNN-BLSTM encoder as suggested in (Zhang et al., 2017; Hori et al., 2017).",
"We followed the same six-layer CNN architecture as the prior study, except we used one input channel instead of three since we did not use delta or delta delta features.",
"Input speech features were downsampled to (1/4 x 1/4) along with the time-frequency axis.",
"Then, the 6-layer BLSTM with 320 cells was followed by CNN.",
"We used a location-based attention mechanism (Chorowski et al., 2015), where 10 centered convolution fil-ters of width 100 were used to extract the convolutional features.",
"The decoder network of both our proposed models and the baseline models was a 2-layer LSTM with 300 cells.",
"Our proposed models additionally require linear projection layer in order to encode the conversational context vector and merge with decoder states.",
"We also built an external RNN-based language model (RNNLM) on the same BPE-10k sets on the same Switchboard transcriptions.",
"The RNNLM network architecture was a two-layer LSTM with 650 cells.",
"This network was used only for decoding.",
"The AdaDelta algorithm (Zeiler, 2012) with gradient clipping (Pascanu et al., 2013) was used for optimization.",
"We used = 0 .",
"5 for joint CTC/Attention training.",
"We bootstrap the training our proposed conversational end-to-end models from the baseline end-to-end models.",
"When we decode with RNNLM, we used joint decoder which combines the output label scores from the AttentionDecoder, CTC, and RNNLM by using shallow fusion (Hori et al., 2017): y = argmax { log p att ( y | x ) + log p att ( y | x ) + log p rnnlm ( y ) } (5) The scaling factor of CTC, and RNNLM scores were = 0 .",
"3 , and = 0 .",
"3 , respectively.",
"We used a beam search algorithm similar to (Sutskever et al., 2014) with the beam size 10 to reduce the computation cost.",
"We adjusted the score by adding a length penalty, since the model has a small bias for shorter utterances.",
"The final score s ( y | x ) is normalized with a length penalty 0 .",
"5 .",
"The models were implemented by using the Py-Torch deep learning library (Paszke et al., 2017), and ESPnet toolkit (Kim et al., 2017; Watanabe et al., 2017, 2018).",
"We evaluated both the end-to-end speech recognition model which was built on sentence-level data and our proposed end-to-end speech recognition model which leveraged conversational context information.",
"Table 1 shows the WER of our baseline, proposed models, and several other published results those were only trained on 300 hours Switchboard training data.",
"As shown in Table 1, we obtained a performance gain over our baseline by using the Table 1: Comparison of word error rates (WER) on Switchboard 300h with standard end-to-end speech recognition models and our proposed end-to-end speech recognition models with conversational context.",
"conversational context information.",
"Our proposed model",
"(a) mean shows 4.1% and 2.4% relative improvement over our baseline on SWB and CH evaluation set, respectively.",
"Our proposed model",
"(b) att shows 3.5% and 1.7% relative improvement over our baseline on SWB and CH evaluation set, respectively.",
"We also found that we can obtain further accuracy improvement by pre-training the decoder part only with transcription.",
"With this pre-training technique, the",
"(b) att shows 5.9% and 2.7% relative improvement.",
"Unlike the previous work (Renduchintala et al., 2018), we did not use any additional encoder for the text data.",
"We also build the language model with or without the conversational context information.",
"Table 2 shows the perplexity on a held-out set of our baseline LM and our conversational LM.",
"We observed that incorporating the conversational context improves performance showing that 9.6% and 11.7% relative improvement on SWBD only and SWBD + Fisher .",
"Note that the Fisher (LDC2004T19) parts (Cieri et al., 2004) of transcriptions is only used in these experiments.",
"We performed analyses in order to verify the conversational vector helps to improve recognition Figure 2: The architecture of our end-to-end speech recognition model with conversational context information.",
"accuracy.",
"We generate the context vector from an oracle preceding sentence and a random sentence, in addition to our predicted sentence.",
"As described in Figure 2, the model using the oracle context performed best and the model using the random context was even worse than the baseline.",
"Our model outperformed over the baseline and the model using the random context, we can conclude that the benefit from our proposed method is coming from the conversational context information.",
"We proposed an acoustic-to-word model capable of utilizing the conversational context to better process long conversations.",
"A key aspect of our model is that the whole system can be trained with conversational context information in an end-to-end framework.",
"Our model was shown to outperform previous end-to-end speech recognition models trained on isolated utterances by incorporating preceding conversational context representations.",
"We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.",
"This work also used the Bridges system, which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC).",
"This research was supported by a fellowship from the Center for Machine Learning and Health (CMLH) at Carnegie Mellon University."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"other",
"other",
"other"
] |
[
"Zhiwen Xie 1 , Guangyou Zhou 2 , Jin Liu 1 , Jimmy Xiangji Huang 3 1 School of Computer Science, Wuhan University 2 School of Computer Science, Central China Normal University 3 School of Information Technology, York University [email protected] , [email protected] [email protected] , [email protected]",
"Abstract The goal of Knowledge graph embedding (KGE) is to learn how to represent the low-dimensional vectors for entities and relations based on the observed triples.",
"The conventional shallow models are limited to their expressiveness.",
"ConvE (Dettmers et al., 2018) takes advantage of CNN and improves the expressive power with parameter efficient operators by increasing the interactions between head and relation embeddings.",
"However, there is no structural information in the embedding space of ConvE, and the performance is still limited by the number of interactions.",
"The recent KBGAT (Nathani et al., 2019) provides another way to learn embeddings by adaptively utilizing structural information.",
"In this paper, we take the benefits of ConvE and KBGAT together and propose a Re lation-aware Inception network with joint local-global structural information for knowledge graph E mbedding (ReInceptionE).",
"Specifically, we first explore the Inception network to learn query embedding, which aims to further increase the interactions between head and relation embeddings.",
"Then, we propose to use a relation-aware attention mechanism to enrich the query embedding with the local neighborhood and global entity information.",
"Experimental results on both WN18RR and FB15k-237 datasets demonstrate that ReInceptionE achieves competitive performance compared with state-of-the-art methods.",
"Knowledge graphs (KGs) are at the core of most state-of-the-art natural language processing solutions and have been spotlighted in many real-world applications, including question answering (Hao et al., 2017), dialogue generation (He et al., 2017; Madotto et al., 2018) and machine reading comprehension (Yang and Mitchell, 2017).",
"Typically, KGs Corresponding author.",
"are directed graphs whose nodes denote the entities and edges represent the different relations between entities.",
"The structured knowledge in KGs is organized in the form of triples ( h, r, t ) , where h and t stand for the head and tail entities respectively, and r represents the relation from h to t .",
"Although large-scale KGs (e.g., Freebase (Bollacker et al., 2008), DBpedia (Lehmann et al., 2015)) have already contained millions or even billions of triples, they are still far from complete since the emerging new knowledge appears.",
"Knowledge graph embedding (KGE) is an effective solution to solve the incompletion problem.",
"KGE aims to learn the low-dimensional vectors (embeddings) for entities and relations based on the observed triples in KGs.",
"Conventional models including TransE (Bordes et al., 2013) and its numerous extensions (e.g., TransD (Ji et al., 2015), TransR (Lin et al., 2015), DistMul (Yang et al., 2015), ComplEx (Trouillon et al., 2016),",
"etc.) have been proposed.",
"These shallow models are limited to their expressiveness (Dettmers et al., 2018).",
"Recently, CNN-based methods have been proposed to capture the expressive features with parameter efficient operators.",
"ConvE (Dettmers et al., 2018) takes advantage of CNN and uses convolution filters on 2D reshapings of the head entity and relation embeddings.",
"Through this, ConvE can increase the interactions between head and relation embeddings.",
"Empirical results have proved that increasing the number of interactions is beneficial to the KGE task, but ConvE is still limited by the number of interactions (Jiang et al., 2019; Vashishth et al., 2020).",
"Furthermore, ConvE does not consider the structural information.",
"In contrast, graph-based methods are effective to aggregate neighborhood information to enrich the entity/relation representation (Schlichtkrull et al., 2018; Bansal et al., 2019; Nathani et al., 2019).",
"Among them, KB-Figure 1: An example of relation-aware local and global information (left) and the general framework of our proposed ReInceptionE (right).",
"GAT (Nathani et al., 2019) achieves state-of-the-art performance on various benchmark datasets via using graph attention networks (GAT) (Velickovic et al., 2018).",
"KBGAT learns embeddings for every entity by taking all possible relations into account, which requires multiple hops of reasoning.",
"In contrast, it can be beneficial to learn embeddings from a query-relevant subgraph of the local neighborhood and global entities.",
"As an example shown in Figure 1, given a query ( Jack London , nationality , ? ) for Jack London , we can gather the relation-aware local neighbor ( place lived , Okaland ).",
"The local neighbor allows us to project Jack London into the Okaland region of the embedding space, which can lead to a high score for predicting the target America , as Okaland and America are close in embedding space.",
"Besides, we also note that a specific relation can be acted as the bridge to link the related entities.",
"Considering the relation nationality , the related head entities { Kaneto Shiozawa , Shammi Kapoor , Will Smith , } and tail entities { America , Canada , Japan , } tend to be a set of person names and countries .",
"These related entities act as a strong signal to judge whether a triple is valid or not.",
"Based on the above observations, we take the benefits of ConvE and KBGAT together and propose a Re lation-aware Inception network with joint local-global structural information for knowledge graph E mbedding, and we name it ReInceptionE .",
"In ReInceptionE, we first adapt Inception network (Szegedy et al., 2015, 2016) a high performing convolutional neural network with carefully designed filters, to increase the interactions using multiple convolution filters with different scales, while at the same time to keep parameter efficient.",
"Then, we construct a local neighborhood graph and a global entity graph by sharing the head and relation respectively for a given query.",
"With the constructed graphs, we apply a relation-aware attention mechanism to aggregate the local neighborhood features and gather the global entity information to enrich the head/relation representation.",
"Finally, we aggregate the joint local-global structural information using a fully connected layer to predict the missing links.",
"In summary, we make the following three contributions: (1) It is the first to explore Inception network to learn query embedding which aims to further increase the interactions between head and relation embeddings; (2) We propose to use a relation-aware attention mechanism to enrich the query embedding with the local neighborhood and global entity information; (3) We conduct a series of experiments to evaluate the performance of the proposed method.",
"Experimental results demonstrate that our method obtains competitive performance in comparison to these state-of-the-art models on both WN18RR and FB15k-237.",
"The rest of this paper is structured as follows.",
"In this section, we first describe the background and definition in Subsection 2.1, and Inception-based query encoder in Subsection 2.2.",
"Then, we introduce the relation-aware local attention and global attention in Subsection 2.3 and 2.4, respectively.",
"Finally, we describe the joint using of them in Subsection 2.5.",
"Definition 3.1 Knowledge Graph G : A knowledge graph G = { ( h, r, t ) | ( h, r, t ) E RE} denotes a collection of triples, where E and R indicate entities and relations, respectively, h, t E represent the head entity and tail entity, and r R denotes the specific relation linking from the head entity h to tail entity t .",
"Definition 3.2 Knowledge Graph Embedding : Knowledge graph embedding aims to learn embeddings of entities and relations with the valid triples in G , and then predict the missing head entity h given query (? , r, t ) or tail entity t given query ( h, r, ?) with the learned entity and relation embeddings.",
"The framework of the proposed ReInceptionE is shown in Figure 1 (right).",
"ReIncetionE consists of four modules: (1) Inception-based query encoder (InceptionE), which is used to transform the input query q = ( h, r, ?) into a k -dimensional vector v q ; (2) relation-aware local attention and (3) relation-aware global attention are used to capture the local neighborhood information and the global entity information; and (4) joint relation-aware attention is used to aggregate the different structural information using a fully connected layer.",
"Finally, we compute the score for the given triple ( h, r, t ) based on the query embedding and the tail entity embedding.",
"ConvE (Dettmers et al., 2018) is the first model to apply CNN for KGE, which uses 2D convolution operation to model the head and relation in a query.",
"However, ConvE is limited by the number of interactions between the head and relation embeddings (Jiang et al., 2019; Vashishth et al., 2020).",
"In this paper, we propose to employ the Inception network Figure 2: The structures of ConvE (left) and the proposed Inception-based query encoder (right).",
"(Szegedy et al., 2015, 2016), a high performing convolutional neural network with carefully designed filters, to increase the interactions by taking the head and relation as two channels of the input.",
"Figure 2 shows the differences between InceptionE (right) and ConvE (left).",
"Obviously, ConvE cannot capture full interactions between the head and relation embeddings since the convolution operations in ConvE only slides on the entity or relation 2D matrices independently.",
"On the contrary, InceptionE can increase the interactions between the head and relation embeddings using multiple convolution filters with different scales, while at the same time keep parameter efficient.",
"As shown in Figure 2, given a query q = ( h, r, ?) , we first reshape the head and relation embeddings as 2D matrices denoted as v h and v r .",
"Then, the 2D embeddings are viewed as two channels of the input for the Inception network.",
"Thus, the entries at the same dimension of v h and v r are aligned over the channel dimension, which enables the convolution operations to increase the interactions between the head and relation embeddings.",
"Specifically, We first use 1 1 convolutions to capture the direct interactions at the same dimension, which can be formulated as: v 1 1 = Relu ([ v h || v r ] 1 1 ) (1) where Relu (Glorot et al., 2011) is a non-linear activation function, || denotes the concatenation operation, denotes the convolutional operation and 1 1 is the parameter of convolution filters with 1 1 size, v 1 1 denotes the interaction features of the first 1 1 convolutional layer.",
"Then, filters with different sizes, such as 2 2 and 3 3 , are applied to capture high-level interaction features in various scales.",
"Thus, we can get interaction features of the 2 2 and 3 3 convolutional layers, denoted by v 2 2 and v 3 3 , respectively.",
"As suggested in (Szegedy et al., 2016), we use two 3 3 convolutions instead of a 5 5 convolution to capture interaction features in larger spatial filters, which is able to reduce the number of parameters.",
"The two 3 3 convolutions are denoted as: v 2(3 3) = Relu ( Relu ( v 2(3 3) 1 1 13 3 ) 23 3 ) (2) where v 2(3 3) 1 1 is the input interaction features, 1 3 3 and 2 3 3 are parameters of the two 3 3 convolution layers.",
"Finally, the output interaction features with different scales and levels are concatenated and a fully connected layer is applied to obtain the embedding of the given query.",
"Formally, we define the Inception-based query encoder model as: v q = Inception ( v h , v r ) = Relu ( vec ([ v 1 1 || v 2 2 || v 3 3 || v 2(3 3) ]) W ) (3) where W is the parameter of the fully connected layer.",
"KBGAT learns embedding for every entity by taking all possible relations into account, and the embedding learning is impaired by the irrelevant neighbors.",
"In contrast, it can be beneficial to learn embedding from a query-relevant neighborhood graph.",
"In this subsection, we first construct a relation-aware neighborhood graph and then apply an attention mechanism to aggregate local graph structure information.",
"For the query q = ( h, r, ?) , we denote its neighbors as N q = { n i = ( e i , r i ) | ( e i , r i , h ) G} .",
"Note that, for each triple ( h, r, t ) , we create an inverse triple ( t, r 1 , h ) , which has also been used in (Lacroix et al., 2018; Dettmers et al., 2018).",
"Thus, query (? , r, t ) can be converted to ( t, r 1 , ?) .",
"And the neighbors { ( r j , e j ) | ( h, r j , e j ) G} for head entity h can be converted to a format of { ( e j , r 1 j ) | ( h, r j , e j ) G} .",
"Thus, N q contains both the outgoing and incoming neighbors for a query q = ( h, r, ?) .",
"Each neighbor n i = ( e i , r i ) N q is also a query with a head entity e i and a relation r i .",
"Thus, each entity and relation in neighbor n i = ( e i , r i ) can be encoded using the Inception-based query encoder: v n i = Inception ( v e i , v r i ) (4) where v e i and v r i are the 2D embedding vectors of entity e i and relation r i .",
"In practice, different neighbors may have different impacts for a given query.",
"It is useful to determine the importance of each neighbor for a specific query.",
"As an example in Figure 1, for the query ( Jack London , nationality , ? ), it is reasonable to focus on the the neighbors related to the relation nationality , such as ( Jack London , place lived , Oakland ).",
"To this end, we use relation-aware attention mechanism to assign different importance for each neighbor and compute the relevant score for each neighbor using a non-linear activation layer: s i = LeakyRelu ( W 1 [ W 2 v q || W 3 v n i ]) (5) where W 1 , W 2 and W 3 are parameters to be trained and LeakyRelu (Maas et al., 2013) is the activation function.",
"We then normalize the relevant scores for different neighbors using a softmax function to make it comparable across the neighbors, which is denoted as: i = exp( s i ) (cid:80) n j N q exp( s j ) (6) Finally, we aggregate the neighborhood information according to their attention scores and apply a non-linear function to obtain the neighborhood vector.",
"To keep more information of the original query embedding, we also apply a residual operation: v n = Relu (cid:88) n i N q i W 3 v n i + W 2 v q (7) For simplification, we denote the above relation-aware attention operations as: v n = ReAtt ( V n , v q ) (8) where V n = { v n i | n i N q } is a set of local neighobrhood vectors.",
"The number of relation-aware local neighbors for each node (entity) varies from one to another, making the neighbor graph very sparse.",
"The sparse nature would affect the accuracy of the embedding.",
"In fact, a specific relation can be acted as the bridge to link the related entities.",
"In this subsection, we construct a relation-aware head graph and tail graph by gathering all entities for relation r in the given query q = ( h, r, ?) .",
"Intuitively, all head entities for relation r share some common type information.",
"And the tail entities for relation r contain some implicit information about the type of the target entity t .",
"For example in Figure 1, given the relation nationality , all heads { Kaneto Shiozawa , Shammi Kapoor , Will Smith , , } and tails { America , Canada , Japan , , } are the names of a person and a country, sharing the similar entity types.",
"These relation-aware global heads and tails can provide some useful information for the KGE task.",
"Thus, we construct relation-aware global head and tail graphs according to the head and tail entities of the relation.",
"Let H r = { e i | ( e i , r, e j ) G} and T r = { e j | ( e i , r, e j ) G} denote a set of head and tail entities for relation r , respectively.",
"For each head entity h ri H r , we first represent it as an embedding vector v h ri .",
"Then, we use relation-aware attention mechanism to capture the relevant information from all the relation-aware head entities, which is denoted as: v rh = ReAtt ( V rh , v q ) (9) where V rh = { v h ri | h ri H r } is a set of entity vectors for relation-aware global entities.",
"Similarly, we use relation-aware attention mechanism to capture global tail informations, which is computed as: v rt = ReAtt ( V rt , v q ) (10) where V rt = { v t ri | t ri T r } is a set of entity embeddings for relation-aware global tails.",
"Once obtained the relation-aware local neighborhood information v n and global head and tail vectors v ht and v rt , we concatenate these vectors and merge them by using a linear feed-forward layer:",
"where W 4 and b are the parameters of the feed-forward layer.",
"Finally, we compute the score for each triple ( h, r, t ) by applying a dot product of the query embedding v (cid:48) q and the tail embedding v t : f ( h, r, t ) = v (cid:48) Tq v t (12) To optimize the parameters in our model, we compute the probability of the tail t using a softmax function: p ( t | h, r ) = exp( f ( h, r, t )) (cid:80) ( h,r,t (cid:48) ) G (cid:48) { ( h,r,t ) } exp( f ( h, r, t (cid:48) )) (13) where is a smoothing parameter, and G (cid:48) is a set of invalid triples created by randomly replacing the tail t with an invalid entity t (cid:48) .",
"where ( h i , r i , t i ) G is a valid triple, and |E| is the number of valid triples in G .",
"Datasets : We conduct experiments for KGE on two widely used public benchmark datasets : WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova et al., 2015).",
"WN18RR is a subset of WN18 (Bordes et al., 2013) while FB15k-237 is a subset of FB15k (Bordes et al., 2013).",
"Since WN18 and FB15k contain a large number of inverse relations, making the triples in the test set can be obtained simply by inverting triples in the training set.",
"To address the above problem, both WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova et al., 2015) are generated by removing the inverse relations from WN18 and FB15k.",
"In recent two Models WN18RR FB15k-237 MR MRR Hits@10 MR MRR Hits@10 TransE (Bordes et al., 2013)* 2300 0.243 0.532 323 0.279 0.441 DistMult (Yang et al., 2015)* 5110 0.430 0.490 512 0.281 0.446 ComplEx (Trouillon et al., 2016)* 5261 0.440 0.510 546 0.278 0.450 R-GCN+ (Schlichtkrull et al., 2018) --0.249 0.417 CACL (Oh et al., 2018) 3154 0.472 0.543 235 0.349 0.487 ConvE (Dettmers et al., 2018) 4187 0.430 0.520 244 0.325 0.501 NKGE (Wang et al., 2018) 4170 0.450 0.526 237 0.330 0.510 TransMS (Yang et al., 2019) 6523 -0.460 249 -0.445 AnyBURL (Meilicke et al., 2019) -0.470 0.552 -0.310 0.486 SACN (Shang et al., 2019) -0.470 0.540 -0.350 0.540 A2N (Bansal et al., 2019) -0.450 0.510 -0.317 0.486 GRank (Ebisu and Ichise, 2019) -0.470 0.539 -0.322 0.489 ConvR (Jiang et al., 2019) -0.475 0.537 -0.350 0.528 MuRE (Balazevic et al., 2019b) -0.475 0.554 -0.336 0.521 RotatE (Sun et al., 2019) 3340 0.476 0.571 177 0.338 0.533 QuatE (Zhang et al., 2019) 3472 0.481 0.564 176 0.311 0.495 InteractE (Vashishth et al., 2020) 5202 0.463 0.528 172 0.354 0.535 ConvKB (Nguyen et al., 2018) b 3433 0.249 0.524 309 0.243 0.421 CapsE (Nguyen et al., 2019) b 718 0.415 0.559 403 0.150 0.356 KBGAT (Nathani et al., 2019) b 1921 0.412 0.554 270 0.157 0.331 ReInceptionE (ours) 1894 0.483 0.582 173 0.349 0.528 ConvKB (Nguyen et al., 2018) a 2554 0.248 0.525 257 0.396 0.517 CapsE (Nguyen et al., 2019) a 719 0.415 0.560 303 0.523 0.593 KBGAT (Nathani et al., 2019) a 1940 0.440 0.581 210 0.518 0.626 Table 2: Link prediction results on WN18RR and FB15k-237 test sets.",
"years, WN18RR and FB15k-237 have become the most popular datasets for the KGE task.",
"Table 1 shows the summary statistics of the datasets.",
"Implementations : For a test triple ( h, r, t ) , the purpose of KGE task is to predict missing links, e.g. predict tail entity t given head entity h and relation r or predict head entity h given tail entity t and relation r .",
"To evaluate our method, three metrics are used, including Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hit@10 (e.g. the accuracy in top 10 predictions).",
"Please note that lower MR, higher MRR and Hits@10 indicate better performance.",
"We follow the Filtered setting protocol (Bordes et al., 2013) to evaluate our model, i.e., ranking all the entities excluding the set of other true entities that appeared in training, validation and test sets.",
"We initialize the embedding of entity and relation in our ReInceptionE model using the pre-trained embeddings with 100-dimension used in (Nguyen et al., 2019).",
"We use Adam (Kingma and Ba, 2015) to optimize the model.",
"The parameters of our model are selected via grid search according to the MRR on the validation set.",
"We select the dropout rate from { 0 .",
"1 , 0 .",
"2 , 0 .",
"4 , 0 .",
"5 } , the learning rate from { 0 .",
"001 , 0 .",
"0005 , 0 .",
"0002 , 0 .",
"0001 } , the L 2 norm of parameters from { 1 e 3 , 1 e 5 , 1 e 8 } , the batch size from { 32 , 64 , 128 , 256 , 512 } and the smoothing parameter in Equation 13 from { 1 , 5 , 10 } .",
"Finally, the learning rate is set to 0.0002 for WN18RR and 0.0001 for FB15k-237.",
"The L 2 norm of parameters is set to 1 e 5 .",
"The batch size is set to 256.",
"The dropout rate is set to 0.4 for WN18RR and 0.2 for FB15k-237.",
"The smoothing parameter in Equation 13 is set to = 5 .",
"The number of filters for each convolution operation in the Inception module is set to 32.",
"We Models WN18RR FB15k-237 MR MRR Hits@10 MR MRR Hits@10 ConvE 4187 0.430 0.520 244 0.325 0.501 KBGAT 1921 0.412 0.554 270 0.157 0.331 InceptionE 2317 0.451 0.563 215 0.334 0.518 ReInceptionE w/o N 1942 0.449 0.573 185 0.348 0.525 ReInceptionE w/o E 1809 0.412 0.569 186 0.343 0.522 ReInceptionE 1894 0.483 0.582 173 0.349 0.528 Table 3: Impact of different modules contributes the KGE task.",
"observe that MRR performance increases slowly, starting to stagnate around 200 epochs.",
"Finally, we train the model up to 200 epoches in the following experiments.",
"The source codes are available at https://github.com/JuneTse/ReInceptionE .",
"We compare our results with various state-of-the-art methods.",
"Experimental results are summarized in Table 2.",
"For all KGE models, a key step is to create the invalid triples to construct the negative samples.",
"Most recently, Sun et al. (2020) investigated the inappropriate evaluation problem happened in ConvKB (Nguyen et al., 2018), CapsE (Nguyen et al., 2019) and KBGAT (Nathani et al., 2019).",
"In fact, this issue comes from the unusual score distribution, e.g., the score function for some invalid triples gets the same values as the valid triples.",
"Sun et al. (2020) also found that KBGAT removed the invalid triples when they appeared in the test set during negative sampling, suffering from the leakage of test triples.",
"Therefore, we take the results (marked with the superscript b ) from (Sun et al., 2020) for ConvKB, CapsE and KBGAT.",
"Besides, we also list the results reported in the original papers (marked with the superscript a ).",
"From Table 2, we can see that our proposed ReInceptionE obtains competitive results compared with the state-of-the-art methods.",
"On WN18RR dataset, the ReInceptionE achieves the best results using Hits@10 and MRR, and the second-best results using MR. On FB15k-237 dataset, the ReInceptionE obtains the second-best results using MR, and comparable results using MRR and Hits@10.",
"Our proposed ReInceptionE is closely related to ConvE (Dettmers et al., 2018) and KBGAT (Nathani et al., 2019).",
"Compared with ConvE, ReInceptionE achieves large performance gains on both WN18RR and FB15k-237 (ConvE vs. ReIn-ceptionE).",
"The reason is that instead of simply concatenating the head and relation embeddings, ReInceptionE takes head and relation as two channels of the input and applies the Inception network to capture the rich interactions, which is able to learn expressive features by using filters with various scales.",
"Unlike KBGAT, the ReInceptionE takes the (entity, relation) pair as a query and utilizes the relation-aware attention mechanism to gather the most relevant local neighbors and global entity information for the given query.",
"The results again verify the effectiveness of the relation-aware local and global information for KGE.",
"Some other methods have been proposed to address the KGE task, such as pLogicNet (Ou and Tang, 2019), RPJE (Niu et al., 2020), CoKE (Wang et al., 2019), TuckER (Balazevic et al., 2019a), D4-GUmbel (Xu and Li, 2019) and HAKE (Zhang et al., 2020).",
"pLogicNet (Ou and Tang, 2019) and RPJE (Niu et al., 2020) leverage logic rules to improve the performance.",
"CoKE (Wang et al., 2019) uses Transformer (Vaswani et al., 2017) to encode contextualized representations.",
"HAKE (Zhang et al., 2020) embeds entities in the polar coordinate system to learn semantic hierarchies.",
"D4-Gumbel (Xu and Li, 2019) uses the dihedral group to model relation composition.",
"TuckER (Balazevic et al., 2019a) uses Tucker decomposition to learn tensor factorization for KGE.",
"These methods take a series of different ways to model the KGE task.",
"For example, logic rules play an important role to determine whether a triple is valid or not, we suspect that the performance of our proposed ReInceptionE can be further improved when taking the logic rules into account.",
"We will leave the comparison and deep analysis in the future work.",
"We describe the experimental results in Table 3 to investigate the impact of different modules in ReInceptionE.",
"In Table 3, InceptionE is the baseline Models Predicting head Predicting tail 1-1 1-N N-1 N-N 1-1 1-N N-1 N-N WN18RR ConvE 0.975 0.414 0.110 0.950 0.975 0.153 0.303 0.949 InceptionE 0.976 0.587 0.128 0.957 0.952 0.231 0.482 0.957 ReInceptionE 0.976 0.586 0.152 0.961 0.976 0.272 0.494 0.958 FB15k-237 ConvE 0.303 0.590 0.137 0.400 0.272 0.088 0.845 0.545 InceptionE 0.573 0.624 0.175 0.452 0.557 0.124 0.865 0.557 ReInceptionE 0.609 0.651 0.185 0.473 0.594 0.149 0.872 0.603 Table 4: Link prediction results for each relation category on the WN18RR and FB15k-237 test sets using Hits@10.",
"model without using relation-aware local neighbors and global entities.",
"ReInception w/o N is the model without using relation-aware local neighbor information while ReInception w/o E is the model without using relation-aware global entity information.",
"Besides, we also take two closely related models ConvE and KBGAT for fair comparison.",
"From Table 3, we can see that our baseline InceptionE outperforms the closely related CNN-based model ConvE.",
"Compared with ConvE, InceptionE is more powerful because it can capture the rich interaction features by using filters with various scales.",
"And the ReInceptionE, which incorporates relation-aware local neighborhood and global entity information, outperforms the related graph-based model KBGAT.",
"Table 3 also shows that the ReInceptionE outperforms InceptionE, ReInception w/o N and ReInception w/o E by a large margin on both datasets, which reconfirms our observations that relation-aware local neighbors and global entities can play different contributions for KGE.",
"In this subsection, we present the experimental results on different relation types on WN18RR and FB15k-237 using Hits@10.",
"We choose the closely related model ConvE, as well as InceptionE as the baselines.",
"Following (Bordes et al., 2013), we classify the relations into four groups: one-to-one (1-1), one-to-many (1-N), many-to-one (N-1) and many-to-many (N-N), based on the average number of tails per head and the average number of heads per tail.",
"Table 4 shows the link prediction results for each relation category.",
"From Table 4, we find that InceptionE achieves better performance than ConvE for all relation types, indicating that increasing the number of interactions between head and relation embeddings is indeed beneficial to KGE task.",
"Furthermore, our proposed ReInceptionE signifi-cantly outperforms ConvE and InceptionE for all relation types.",
"In particular, ReInceptionE obtains larger improvements for complex relations, such as one-to-many, many-to-one and many-to-many.",
"This again verifies our observations that increasing the interactions and taking the local-global structural information allows the model to capture more complex relations.",
"In order to further analyze how relation-aware neighbors contribute to KGE task, we give two examples in Table 5.",
"For the query ( Jack London , nationality , ? ), ReInceptionE assigns the highest attention scores for neighbors ( place lived , Oakland ), since Oakland and America are close to each other in embedding space because of other relations between them.",
"And the top predictions for the query are a set of entities with the type of country.",
"For the second example ( Jerry Lewls , languages , ? ), ReInceptionE assigns the very high score for neighbor ( place of birth , Newark ).",
"This can allow us to project ( place of birth , Newark ) into the Jerry Lewis region of the embedding space, which can lead to a high score for predicting the target English Language .",
"These examples give clear evidence of how our proposed ReInceptionE benefits the KGE task.",
"In this paper, we propose a novel relation-aware Inception network for knowledge graph embedding, called ReInceptionE.",
"ReInceptionE takes the benefits of ConvE and KBGAT together.",
"The proposed method first employs Inception network to learn the query embedding, with the aim of increasing the interaction between head and relation embeddings, while at the same time to keep the parameter efficient.",
"Then, we gather the relation-aware local neighborhood and global entity information with an attention mechanism and enrich the query embedding with the joint local-global structural information.",
"Empirical studies demonstrate that our proposed method obtains comparative performance compared with the state-of-the-art performance on two widely used benchmark datasets WN18RR and FB15k-237.",
"This work was supported by the National Natural Science Foundation of China under Grants 61972290 and 61972173, and also supported by the National Key R&D Program of China under Grant 2018YFC1604000.",
"This research was also supported in part by the research grant from Natural Sciences and Engineering Research Council (NSERC) of Canada and York Research Chairs (YRC) program.",
"We thank anonymous reviewers for their thorough review comments on this paper."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"method",
"other",
"objective",
"method",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"objective",
"other",
"other",
"other"
] |
[
"Recent Transformer-based architectures, e.g., BERT, provide impressive results in many Natural Language Processing tasks.",
"However, most of the adopted benchmarks are made of (sometimes hundreds of) thousands of examples.",
"In many real scenarios, obtaining high-quality annotated data is expensive and time-consuming; in contrast, unlabeled examples characterizing the target task can be, in general, easily collected.",
"One promising method to enable semi-supervised learning has been proposed in image processing, based on Semi-Supervised Generative Adversarial Networks.",
"In this paper, we propose GAN-BERT that extends the fine-tuning of BERT-like architectures with unlabeled data in a generative adversarial setting.",
"Experimental results show that the requirement for annotated examples can be drastically reduced (up to only 50 100 annotated examples), still obtaining good performances in several sentence classification tasks.",
"In recent years, Deep Learning methods have become very popular in Natural Language Processing (NLP), e.g., they reach high performances by relying on very simple input representations (for example, in (Kim, 2014; Goldberg, 2016; Kim et al., 2016)).",
"In particular, Transformer-based architectures, e.g., BERT (Devlin et al., 2019), provide representations of their inputs as a result of a pre-training stage.",
"These are, in fact, trained over large scale corpora and then effectively fine-tuned over a targeted task achieving state-of-the-art results in different and heterogeneous NLP tasks.",
"These achievements are obtained when thousands of annotated examples exist for the final tasks.",
"As experimented in this work, the quality of BERT fine-tuned over less than 200 annotated instances shows significant drops, especially in classification tasks involving many categories.",
"Unfortunately, obtaining annotated data is a time-consuming and costly process.",
"A viable solution is adopting semi-supervised methods, such as in (Weston et al., 2008; Chapelle et al., 2010; Yang et al., 2016; Kipf and Welling, 2016) to improve the generalization capability when few annotated data is available, while the acquisition of unlabeled sources is possible.",
"One effective semi-supervised method is implemented within Semi-Supervised Generative Adversarial Networks (SS-GANs).",
"Usually, in GANs (Goodfellow et al., 2014) a generator is trained to produce samples resembling some data distribution.",
"This training process adversarially depends on a discriminator, which is instead trained to distinguish samples of the generator from the real instances.",
"SS-GANs (Salimans et al., 2016) are an extension to GANs where the discriminator also assigns a category to each example while discriminating whether it was automatically generated or not.",
"In SS-GANs, the labeled material is thus used to train the discriminator, while the unlabeled examples (as well as the ones automatically generated) improve its inner representations.",
"In image processing, SS-GANs have been shown to be effective: exposed to few dozens of labeled examples (but thousands of unlabeled ones), they obtain performances competitive with fully supervised settings.",
"In this paper, we extend the BERT training with unlabeled data in a generative adversarial setting.",
"In particular, we enrich the BERT fine-tuning process with an SS-GAN perspective, in the so-called GAN-BERT 1 model.",
"That is, a generator produces fake examples resembling the data distribution, while BERT is used as a discriminator.",
"In this way, we exploit both the capability of BERT to produce high-quality representations of input texts and to adopt unlabeled material to help the network in 1 The code is available at https://github.com/ crux82/ganbert .",
"generalizing its representations for the final tasks.",
"At the best of our knowledge, using SS-GANs in NLP has been investigated only by (Croce et al., 2019) with the so-called Kernel-based GAN.",
"In that work, authors extend a Kernel-based Deep Architecture (KDA, (Croce et al., 2017)) with an SS-GAN perspective.",
"Sentences are projected into low-dimensional embeddings, which approximate the implicit space generated by using a Semantic Tree Kernel function.",
"However, it only marginally investigated how the GAN perspective could extend deep architecture for NLP tasks.",
"In particular, a KGAN operates in a pre-computed embedding space by approximating a kernel function (Annesi et al., 2014).",
"While the SS-GAN improves the quality of the Multi-layered Perceptron used in the KDA, it does not affect the input representation space, which is statically derived by the kernel space approximation.",
"In the present work, all the parameters of the network are instead considered during the training process, in line with the SS-GAN approaches.",
"We empirically demonstrate that the SS-GAN schema applied over BERT, i.e., GAN-BERT , reduces the requirement for annotated examples: even with less than 200 annotated examples it is possible to obtain results comparable with a fully supervised setting.",
"In any case, the adopted semi-supervised schema always improves the result obtained by BERT.",
"In the rest of this paper, section 2 provides an introduction to SS-GANs.",
"In sections 3 and 4, GAN-BERT and the experimental evaluations are presented.",
"In section 5 conclusions are derived.",
"SS-GANs (Salimans et al., 2016) enable semi-supervised learning in a GAN framework.",
"A discriminator is trained over a ( k + 1) -class objective: true examples are classified in one of the target (1 , ..., k ) classes, while the generated samples are classified into the k + 1 class.",
"More formally, let D and G denote the discriminator and generator, and p d and p G denote the real data distribution and the generated examples, respectively.",
"In order to train a semi-supervised k -class classifier, the objective of D is extended as follows.",
"Let us define p m ( y = y | x, y = k + 1) the probability provided by the model m that a generic example x is associated with the fake class and p m ( y = y | x, y (1 , ..., k )) that x is considered real, thus belonging to one of the target classes.",
"The loss function of D is defined as: LD = LD sup.",
"+ LD unsup.",
"where: LD sup.",
"= E x,y p d log[ p m ( y = y | x, y (1 , ..., k ))] LD unsup.",
"= E x p d log[1 p m ( y = y | x, y = k +1)] E x G log [ p m ( y = y | x, y = k + 1)] LD sup.",
"measures the error in assigning the wrong class to a real example among the original k categories.",
"LD unsup.",
"measures the error in incorrectly recognizing a real (unlabeled) example as fake and not recognizing a fake example.",
"At the same time, G is expected to generate examples that are similar to the ones sampled from the real distribution p d .",
"As suggested in (Salimans et al., 2016), G should generate data approximating the statistics of real data as much as possible.",
"In other words, the average example generated in a batch by G should be similar to the real prototypical one.",
"Formally, let's f ( x ) denote the activation on an intermediate layer of D .",
"The feature matching loss of G is then defined as: LG featurematching = (cid:107) E x pd f ( x ) E x G f ( x ) (cid:107) 2 2 that is, the generator should produce examples whose intermediate representations provided in input to D are very similar to the real ones.",
"The G loss also considers the error induced by fake examples correctly identified by D , i.e., LG unsup.",
"= E x G log[1 p m ( y = y | x,y = k +1)] The G loss is LG = LG featurematching + LG unsup.",
".",
"While SS-GANs are usually used with image inputs, we will show that they can be adopted in combination with BERT (Devlin et al., 2019) over inputs encoding linguistic information.",
"Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) belongs to the family of the so-called transfer learning methods, where a model is first pre-trained on general tasks and then fine-tuned on the final target tasks.",
"In Computer Vision, transfer learning has been shown beneficial in many different tasks, i.e., pre-training a neural network model on a known task, followed by a fine-tuning stage on a (different) target task (see, for example, (Girshick et al., 2013)).",
"BERT is a very deep model that is pre-trained over large corpora of raw texts and then is fine-tuned on target annotated data.",
"The building block of BERT is the Transformer (Vaswani et al., 2017), an attention-based mechanism that learns contextual relations between words (or sub-words, i.e., word pieces, (Schuster and Nakajima, 2012)) in a text.",
"BERT provides contextualized embeddings of the words composing a sentence as well as a sentence embedding capturing sentence-level semantics: the pre-training of BERT is designed to capture such information by relying on very large corpora.",
"After the pre-training, BERT allows encoding",
"(i) the words of a sentence,",
"(ii) the entire sentence, and",
"(iii) sentence pairs in dedicated embeddings.",
"These can be used in input to further layers to solve sentence classification, sequence labeling or relational learning tasks: this is achieved by adding task-specific layers and by fine-tuning the entire architecture on annotated data.",
"In this work, we extend BERT by using SS-GANs for the fine-tuning stage.",
"We take an already pre-trained BERT model and adapt the fine-tuning by adding two components:",
"i) task-specific layers, as in the usual BERT fine-tuning;",
"ii) SS-GAN layers to enable semi-supervised learning.",
"Without loss of generality, let us assume we are facing a sentence classification task over k categories.",
"Given an input sentence s = ( t 1 , ..., t n ) BERT produces in output n + 2 vector representations in R d , i.e., ( h CLS , h t 1 , ..., h t n , h SEP ) .",
"As suggested in (De-vlin et al., 2019), we adopt the h CLS representation as a sentence embedding for the target tasks.",
"As shown in figure 1, we add on top of BERT the SS-GAN architecture by introducing",
"i) a discriminator D for classifying examples, and",
"ii) a generator G acting adversarially.",
"In particular, G is a Multi Layer Perceptron (MLP) that takes in input a 100-dimensional noise vector drawn from N ( , 2 ) and produces in output a vector h fake R d .",
"The discriminator is another MLP that receives in input a vector h R d ; h can be either h fake produced by the generator or h CLS for unlabeled or labeled examples from the real distribution.",
"The last layer of D is a softmax-activated layer, whose output is a k + 1 vector of logits, as discussed in section 2.",
"During the forward step, when real instances are sampled (i.e., h = h CLS ), D should classify them in one of the k categories; when h = h fake , it should classify each example in the k + 1 category.",
"As discussed in section 2, the training process tries F k classes noise is real?",
"During back-propagation, the unlabeled examples contribute only to LD unsup.",
", i.e., they are considered in the loss computation only if they are erroneously classified into the k +1 category.",
"In all other cases, their contribution to the loss is masked out.",
"The labeled examples thus contribute to the supervised loss LD sup.",
".",
"Finally, the examples generated by G contribute to both LD and LG , i.e., D is penalized when not finding examples generated by G and vice-versa.",
"When updating D , we also change the BERT weights in order to fine-tune its inner representations, so considering both the labeled and the unlabeled data 2 .",
"After training, G is discarded while retaining the rest of the original BERT model for inference.",
"This means that there is no additional cost at inference time with respect to the standard BERT model.",
"In the following, we will refer to this architecture as GAN-BERT .",
"In this section, we assess the impact of GAN-BERT over sentence classification tasks characterized by different training conditions, i.e., number of examples and number of categories.",
"We report measures of our approach to support the development of deep learning models when exposed to few labeled examples over the following tasks: Topic Classification over the 20 News Group ( 20N ) dataset (Lang, 1995), Question Classification ( QC ) on the UIUC dataset (Li and Roth, 2006), Sentiment Analysis over the SST-5 dataset (Socher et al., 2013).",
"We 2 From a computational perspective, the additional cost of G is negligible in terms of network parameters: it is an MLP which takes in input random vectors of 100 dimensions and produces in output vectors in the same 768-dimensional space of BERT.",
"In other words, it is characterized by about 100 thousand parameters that are much less than in BERT base, i.e., 110 million parameters.",
"will also report the performances over a sentence pair task, i.e., over the MNLI dataset (Williams et al., 2018).",
"For each task, we report the performances with the metric commonly used for that specific dataset, i.e., accuracy for SST-5 and QC , while F1 is used for 20N and MNLI datasets.",
"As a comparison, we report the performances of the BERT-base model fine-tuned as described in (De-vlin et al., 2019) on the available training material.",
"We used BERT-base as the starting point also for the training of our approach.",
"GAN-BERT is implemented in Tensorflow by extending the original BERT implementation 3 .",
"In more detail, G is implemented as an MLP with one hidden layer activated by a leaky-relu function.",
"G inputs consist of noise vectors drawn from a normal distribution N (0 , 1) .",
"The noise vectors pass through the MLP and finally result in 768 -dimensional vectors, that are used as fake examples in our architecture.",
"D is, also, an MLP with one hidden layer activated by a leaky-relu function followed by a softmax layer for the final prediction.",
"For both G and D we used dropout= 0 .",
"1 after the hidden layer.",
"We repeated the training of each model with an increasing set of annotated material ( L ), starting by sampling only 0 .",
"01% or 1% of the training set, in order to measure the performances 3 https://github.com/google-research/ bert starting with very few labeled examples (about 50 70 instances).",
"GAN-BERT is also provided with a set of unlabeled examples U coming from the unused annotated material for each training set sample ( | U | = 100 | L | , when available).",
"We replicated the labeled examples of a factor log ( | U | / | L | ) : this guarantees the presence of some labeled instances in each batch to avoid divergences due to the unsupervised component of the adversarial training.",
"All the reported results are averaged over 5 different shuffles of the training material.",
"The 20N classification results are shown in figure 2a.",
"The training and testing datasets are made of 11 , 314 and 7 , 531 documents classified in 20 categories 4 , respectively.",
"The plot shows F1 scores of the models: when 1% of data is used (i.e., about 110 examples) BERT almost diverges while GAN-BERT achieves more than 40% of F1.",
"This trend is confirmed until 40% of labeled documents are used (i.e., about 5 , 000 examples).",
"In the QC task we observe similar outcomes.",
"The training dataset is made of about 5 , 400 question.",
"In the coarse-grained setting (figure 2b) 6 classes are involved; in the fine-grained scenario (figure 2c) the number of classes is 50 .",
"In both cases, BERT diverges when only 1% of labeled questions are used, i.e., about 50 questions.",
"It starts to com-4 We used the train/test split available within scikit-learn.",
"pensate when using about 20% of the data in the coarse setting (about 1 , 000 labeled examples).",
"In the fine-grained scenario, our approach is performing better until 50% of the labeled examples.",
"It seems that, when a large number of categories is involved, i.e., the classification task is more complex, the semi-supervised setting is even more beneficial.",
"The results are confirmed in sentiment analysis over the SST-5 dataset (figure 2d), i.e., sentence classification involving 5 polarity categories.",
"Also in this setting, we observe that GAN-BERT is beneficial when few examples are available.",
"This is demonstrated by the difference in accuracy at 1% of the data (about 85 labeled examples), where BERT accuracy is 22 .",
"2% while GAN-BERT reaches 30 .",
"4% in accuracy.",
"This trend is confirmed until about 20% of labeled examples (about 1 , 700 ), where BERT achieves comparable results.",
"Finally, we report the performances on Natural Language Inference on the MNLI dataset.",
"We observe (in figures 2e and 2f) a systematic improvement starting from 0 .",
"01% labeled examples (about 40 instances): GAN-BERT provides about 6 10 additional points in F1 with respect to BERT ( 18 . 09% vs. 29 . 19% and 18 . 01% vs. 31 . 64% , for mismatched and matched settings, re-spectively).",
"This trend is confirmed until 0 .",
"5% of annotated material (about 2 , 000 annotated ex-amples): GAN-BERT reaches 62 .",
"67% and 60 .",
"45% while BERT reaches 48 .",
"35% and 42 .",
"41% , for mismatched and matched, respectively.",
"Using more annotated data results in very similar performances with a slight advantage in using GAN-BERT .",
"Even if acquiring unlabeled examples for sentence pairs is not trivial, these results give a hint about the potential benefits on similar tasks (e.g., question-answer classification).",
"In this paper, we extended the limits of Transformed-based architectures (i.e., BERT) in poor training conditions.",
"Experiments confirm that fine-tuning such architectures with few labeled examples lead to unstable models whose performances are not acceptable.",
"We suggest here to adopt adversarial training to enable semi-supervised learning Transformer-based architectures.",
"The evaluations show that the proposed variant of BERT, namely GAN-BERT , systematically improves the robustness of such architectures, while not introducing additional costs to the inference.",
"In fact, the generator network is only used in training, while at inference time only the discriminator is necessary.",
"This first investigation paves the way to several extensions including adopting other architectures, such as GPT-2 (Radford et al., 2019) or DistilBERT (Sanh et al., 2019) or other tasks, e.g., Sequence Labeling or Question Answering.",
"Moreover, we will investigate the potential impact of the adversarial training directly in the BERT pre-training.",
"From a linguistic perspective, it is worth investigating what the generator encodes in the produced representations.",
"We would like to thank Carlo Gaibisso, Bruno Luigi Martino and Francis Farrelly of the Istituto di Analisi dei Sistemi ed Informatica Antonio Ruberti (IASI) for supporting the early experimentations through access to dedicated computing resources made available by the Artificial Intelligence & High-Performance Computing laboratory."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other"
] |
[
"We explore the problem of audio captioning 1 : generating natural language description for any kind of audio in the wild, which has been surprisingly unexplored in previous research.",
"We contribute a large-scale dataset of 46K audio clips with human-written text pairs collected via crowdsourcing on the AudioSet dataset (Gemmeke et al., 2017).",
"Our thorough empirical studies not only show that our collected captions are indeed loyal to the audio inputs but also discover what forms of audio representation and captioning models are effective for audio captioning.",
"From extensive experiments, we also propose two novel components that are integrable with any attention-based captioning model to help improve audio captioning performance: the top-down multi-scale encoder and aligned semantic attention.",
"Captioning , the task of translating a multimedia input source into natural language, has been substantially studied over the past few years.",
"The vast majority of the journey has been through the visual senses ranging from static images to videos.",
"Yet, the exploration into the auditory sense has been circumscribed to human speech transcription (Panayotov et al., 2015; Nagrani et al., 2017), leaving the basic natural form of sound in an uncharted territory of the captioning research.",
"Recently, sound event detection has gained much attention such as DCASE challenges (Mesaros et al., 2017) along with the release of a large scale AudioSet dataset (Gemmeke et al., 2017).",
"However, sound classification ( e.g . predicting multiple labels for a given sound) and event detection ( e.g . localizing the sound of interest in a clip) may not be sufficient for a full understanding of the sound.",
"Instead, a natural sen-1 For a live demo and details, https://audiocaps.github.io [Audio Classification] rumble | vehicle | speech | car | outside [Video Captioning] A bus passing by with some people walking by in the afternoon.",
"[Audio Captioning] A muffled rumble with man and woman talking in the background while a siren blares in the distance.",
"tence offers a greater freedom to express a sound, because it allows to characterize objects along with their states, properties, actions and interactions.",
"For example, suppose that suddenly sirens are ringing in the downtown area.",
"As a natural reaction, people may notice the presence of an emergency vehicle, even though they are unable to see any flashing lights nor feel the rush of wind from a passing vehicle.",
"Instead of simply tagging this sound as ambulance or siren , it is more informative to describe which direction the sound is coming from or whether the source of the sound is moving closer or further away, as shown in Figure 1. To that end, we address the audio captioning problem for audios in the wild, which has not been studied yet, to the best of our knowledge.",
"This work focuses on one of the most important bases toward this research direction, contributing a large-scale dataset .",
"The overarching sources of in-the-wild sounds are grounded on the AudioSet (Gemmeke et al., 2017), so far the largest collection of sound events collected from Youtube videos.",
"We newly collect human-written sentences for a subset of AudioSet audio clips via crowdsourcing on Amazon Mechanical Turk (sec-tion 3).",
"We also develop two simple yet effective techniques to generate captions through the joint use of multi-level pretrained features and better attention mechanism named aligned-semantic attention (section 4).",
"Lastly, we perform experiments contrasting between video-based captions and audio-focused captions by employing a variety of features and captioning models (section 5).",
"The contributions of this work are as follows.",
"1. To the best of our knowledge, this work is the first attempt to address the audio captioning task for sound in the wild.",
"We contribute its first large-scale dataset named AudioCaps , which consists of 46K pairs of audio clips and text description.",
"2. We perform thorough empirical studies not only to show that our collected captions are indeed true to the audio inputs and but also to discover what forms of audio representations and captioning models are effective.",
"For example, we observe that the embeddings from large-scale pretrained VGGish (Her-shey et al., 2017) are powerful in describing the audio input, and both temporal and semantic attention are helpful to enhance captioning performance.",
"3. From extensive experiments, we propose two simple yet effective technical components that further improve audio captioning performance: the top-down multi-scale encoder that enables the joint use of multi-level features and aligned semantic attention that advances the consistency between semantic attention and spatial/temporal attention.",
"Speech recognition and separation .",
"One of the most eminent tasks for audio understanding may be speech recognition, the task of recognizing and translating human spoken language into text with less emphasis on background sound that may coexist.",
"A multitude of datasets exist for such task e.g .",
"Speech Commands dataset (Warden, 2018), Common Voice dataset (Mozilla, 2017), Librispeech (Panayotov et al., 2015), LS Speech (Ito, 2017).",
"As one of similar lineage, automatic speech separation forks an input audio signal into several individual speech sources (Hershey et al., 2016; Ephrat et al., 2018).",
"To most of these tasks, in the wild sound is deemed as background noise to be removed as an obstructer of speech recognition.",
"On the other hand, our work puts the spotlight on these neglected sounds and express them through natural language.",
"Audio classification and sound event detection .",
"This line of tasks emphasizes categorizing a sound into a set of predefined classes.",
"There exist a number of datasets to aid in achieving this goal, including DCASE series (Stowell et al., 2015; Mesaros et al., 2016, 2017), UrbanSound8k (Salamon et al., 2014), ESC (Piczak, 2015).",
"AudioSet (Gemmeke et al., 2017) is an audio event dataset collected from Youtube that is unsurpassed in terms of coverage and size, structured with an ontology containing 527 classes.",
"Another predominant large-scale dataset is Freesound (Fon-seca et al., 2017).",
"It consists of audio samples from freesound.org recordings based on the preceding AudioSet ontology.",
"In contrast to audio classification, which uniquely map the audio to a set of labels, our task generates a descriptive sentence.",
"Hence, it needs to not only detect salient sounds of classes but also explores their states, properties, actions or interactions.",
"Captioning tasks and datasets .",
"The vast majority of captioning tasks and datasets focus on the visual domain.",
"Image captioning generates text description of an image, and numerous datasets are proposed, such as Flickr 8k (Rashtchian et al., 2010), Flickr 30k (Young et al., 2014), MS COCO (Lin et al., 2014), DenseCap (Johnson et al., 2016) and Conceptual Captions (Sharma et al., 2018).",
"Akin to the image captioning is video captioning, for which there are many datasets too, including MSVD (Guadarrama et al., 2013), MSR-VTT (Xu et al., 2016), LSMDC (Rohrbach et al., 2017) and ActivityNet Captions (Krishna et al.,",
"2017).Com-pared to previous captioning tasks and datasets, our work confines the problem by focusing on in the wild audio inputs.",
"Recently, there have been some efforts to solve video captioning with audio input (Hori et al., 2017, 2018; Wang et al., 2018).",
"However, the audio input merely serves as auxiliary features for video captioning, and as a result, it only marginally improves the performance ( e.g .",
"BLEU-4 score: 39.6 (video only) vs. 40.3 (video + MFCC) (Wang et al., 2018)).",
"These results are partly culpable to dataset collection, where the annotators mostly rely on the video input.",
"On the contrary, our collection induces the annotators to mainly abide to audio, hence, increasing the dependency of written text on the audio input as can be shown in our survey analysis in Figure 5.",
"Our AudioCaps dataset entails 46K audio caption pairs.",
"Table 1 outlines its key statistics.",
"The audio sources are rooted in AudioSet (Gemmeke et al., 2017), a large-scale audio event dataset, from which we draft the AudioCaps , as discussed below.",
"We present more details of data collection and statistics in the Appendix.",
"It is important to select qualified audio clips as the first step of dataset collection.",
"The chosen categories of clips must be well-rounded in coverage of naturally occurring audios, be relevant to practical applications and appear with high frequency.",
"To that end, we tailor the AudioSet dataset (Gemmeke et al., 2017) that comprises 1,789,621 human-labeled 10 second YouTube excerpts with an ontology of 527 audio event categories.",
"However, an immediate collection of captions from these audios pose several difficulties:",
"(i) too many audio clips,",
"(ii) inconsistent level of abstraction among the classes,",
"(iii) distribution bias of some labels and",
"(iv) noisy labels that are only noticeable from visual cues.",
"We circumvent these issues through a controlled sampling process as described below.",
"Among 527 audio event categories of AudioSet, we first exclude all the labels whose number of clips are less than 1,000 to promote a balanced distribution within the dataset.",
"We also remove all 151 labels in the music super-category, because they are often indiscernible even for a human.",
"For example, a human with no expertise can hardly discriminate the sound of Guitar from Banjo .",
"Thus, we set aside the musical territory for future exploration.",
"We further discard categories if they do not satisfy the following two constraints.",
"The word labels should be identifiable solely from sound",
"(i) without requiring visuals ( e.g . remove the category inside small room ) and",
"(ii) without requiring any expertise ( e.g . remove power windows and electric windows because their distinction may be possible only for car experts).",
"Fi-Split # clips # captions # words/caption # labels/clip Train 38,118 38,118 8.79 (8) 4.25 (4) Val 500 2,500 10.12 (9) 4.06 (3) Test 979 4,895 10.43 (9) 4.03 (3) Total 39,597 45,513 9.03 (9) 4.22 (4) Table 1: Some statistics of AudioCaps dataset.",
"nally, we select 75 word labels derived from 7 augmented super-categories as avoiding the sharp skewness in the word labels ( e.g . 48.5% clips include speech label).",
"We limit the number of instances per category to 2,000 by sampling with preference to audio clips associated with more word labels to prioritize the audios with diverse content.",
"The final number of audio clips is about 115K, from which we obtain captions for 46K as the first version.",
"The collected captions should be precise, spe-cific, diverse, expressive, large-scale and correlated with the paired audios with minimal visual presumptions.",
"Such complex nature of our requirements necessitates employing crowdworkers through Amazon Mechanical Turk (AMT).",
"Some qualification measures are set for the crowdworkers, such as they should hold a +95% HIT approval rate and the total number of approved HITs that are greater than 1,000 and be located at one of [AU, CA, GB, NZ, US].",
"In total, 108 caption writing workers and 3 caption reviewing workers participate and are compensated at 10 cents per clip.",
"Annotation Interface .",
"Figure 2 shows our annotation interface, which is designed to minimize the visual presumption while maintaining diversity.",
"Each task page consists of an audio clip of about 10 seconds, word hints and video hints.",
"The word hints are the word labels that are provided by AudioSet for the clip and are employed A train is approaching with a low rumble and rhythmic click and squeal Below officers creep toward the entrance the door and points a gun",
"as hints to the crowdworkers.",
"Even to humans, recognizing the true identity of a sound can be ambiguous, and thus the word hints act as a precursor to accurately guide the crowdworkers during the description process, while staying aloof from visual bias.",
"Another benefit is that the diversity of the word labels may also enrich the expressiveness of the description.",
"Also derived from AudioSet, the video hints are provided as a stronger hint for sounds that are too difficult even to the human ear or for clips associated with some erroneous or missing word hints (weak labels).",
"We advise the workers to use them as a last resort measure.",
"Some instructions 2 are also provided to demarcate crowdworkers' descriptions as follows.",
"(i) Do not include the words for visuals in the video that are not present in the sound.",
"(ii) Ignore speech semantics.",
"(iii) When applicable, be detailed and expressive.",
"(iv) Do not be imaginative and be literal and present with the descriptions.",
"Quality Control .",
"We use a qualification test to discern many crowdworkers who frequently violate the given instructions ( e.g . transcribing instead of describing, just enumerating provided word hints or writing visual captions).",
"Interested crowdworkers must participate in the test and submit a response, which the authors manually check and approve if they are eligible.",
"We employ three additional workers to verify the data in accordance to our guidelines.",
"In order to maintain high approval rates, we periodically blacklist malicious crowdworkers while granting reasonable incentives to benevolent workers.",
"We exclude the period symbol from all the captions, convert numbers to words using num2words 3 and correct grammar errors by languagetool 4 .",
"We then tokenize words with spacy 5 .",
"Finally, we build a dictionary V with a size of 4506 by choosing all the unique tokens.",
"Figure 3 qualitatively compares some caption examples between our AudioCaps and two captioning datasets with audio: LSMDC (Rohrbach et al., 2017) and MSR-VTT (Xu et al., 2016).",
"Since both LSMDC and MSR-VTT focus more on describing videos than audios, their captions are characterized by visually grounded vocabularies (blue).",
"On the other hand, the captions of AudioCaps accompany sound-based vocabularies (red).",
"We present a hierarchical captioning model that can attend to the fine details of the audio.",
"The backbone of our model is an LSTM (Hochreiter and Schmidhuber, 1997) that we fortify with two novel components which are easily integrable with any attention-based captioning model.",
"The topdown multi-scale encoder enables the contextual use of multi-level features, and the aligned semantic attention enhances the consistency between semantic attention and temporal attention (see Figure 4).",
"Our experiments in section 5.3 show that these two techniques lead to non-trivial performance improvement.",
"The input to our model are mel-frequency cepstral coefficient (MFCC) audio features (Davis and Mermelstein, 1980) and the output is a sequence of words { y m } Mm =1 , each of which is a symbol from the dictionary.",
"For text representation, we use fastText (Bojanowski et al., 2016) trained on the Common Crawl corpus to initialize the word embedding matrix W emb , which is fine-tuned with the model during training.",
"We represent word sequences ( e.g . attribute words for semantic attention and output words for answer captions) in a distributional space as { d n } Nn =1 with d n = W emb w n where w n is a one-hot vector for n -th word in the word sequence and d n R 300 .",
"Unlike speech data, sound in the wild is not always continuous.",
"It can be often brief, noisy, occluded, in-the-distance and randomly sparsed throughout the audio.",
"Hence, the lower-level features can be useful to capture such characteristics of natural sound, although they may lack the semantics of the higher-level features.",
"Thus, the joint use of these two levels of features can be mutually beneficial.",
"The top-down multi-scale encoder takes as input the two-level audio embedding { f t } Tt =1 , { c t } Tt =1 and generates the fused encoding vector, where T is the sequence length of the audio.",
"For input, we use the features from the two layers of the pretrained VGGish network (Hershey et al., 2017): the fc2 vector { f t } Tt =1 as a high-level semantic feature, and the conv4 vector { c t } Tt =1 as a mid-level feature.",
"The first level of hierarchy encodes high-level features { f t } Tt =1 using a bi-directional LSTM.",
"We regard the last hidden state as the global audio embedding h ctxt RI : h a 1 t = biLSTM ( f t , h a 1 t 1 , h a 1 t +1 ) , (1) h ctxt = W c [ h a 1 T ; h a 11 ] + b c , (2) where W c RI D 1 and b c RI are parameters, I is the dimension of input to the next layer and D 1 is the dimension of the first layer hidden states.",
"We then reshape and encode mid-level features { c t } Tt =1 R 512 using another bi-directional LSTM.",
"In order to inject the global semantics, we perform an element-wise addition of h ctxt to the mid-level feature along the time axis, and feed them into the bi-directional LSTM one at a time, \"# $# a cat meows and cry VGGish cry baby cat c4 fc2 Aligned Semantic Attention Decoder Semantic Encoder TopDown Encoder f $ f \" & $ & \" '()( * $ * + $, -, attention flow temporal attention Figure 4: The audio captioning model with top-down multi-scale encoder and aligned semantic attention .",
"In many captioning models (You et al., 2016; Yu et al., 2017; Laokulrat et al., 2018; Long et al., 2018), semantic attention has been independently used from temporal/spatial attention.",
"However, it can be troublesome because there may exist some discrepancies between the two attentions i.e .",
"they do not attend to the same part of the input.",
"For instance, given an audio of a cat meowing and a baby crying, temporal attention may attend to the crying baby while semantic attention attends to the word cat .",
"We propose a simple yet effective approach that implicitly forces both semantic and tempo-ral/spatial attention to be correctly aligned to one another to maximize the mutual consistency.",
"For semantic attention, we extract a set of N attribute words for each audio: following You et al. (2016), we retrieve the nearest training audio from the subset of AudioSet and transfer its labels as attribute words.",
"We encode each attribute word vector using a bi-directional LSTM (named semantic encoder ): h wn = biLSTM ( d n , h wn 1 , h wn +1 ) , (4) where d n is the input text representation of the attribute word sequence.",
"We then align these semantic word features h wn to the temporal axis of the audio features h a 2 t via the attention flow layer (Seo et al., 2017).",
"For notational simplicity, we omit the bidirectional arrow in the following.",
"Attention flow layer .",
"We first compute the similarity matrix, S RT N between each pair of audio and word features using the score function ( h a 2 t , h wn ) R : ( h a 2 t , h wn ) = W [ h a 2 t ; h wn ; h a 2 t h wn ] , (5) S tn = ( h a 2 t , h wn ) , (6) where is element-wise multiplication.",
"We then use S to obtain the attentions and the attended vectors in two directions: word-to-audio { h wt } Tt =1 RD 2 and audio-to-word h a 2 RD 2 : a t = softmax ( S t : ) , h wt = (cid:88) n a tn h wn , (7) b = softmax (max row ( S )) , h a 2 = (cid:88) t b t h a 2 t , (8) where a t RN , b RT .",
"Lastly, we concatenate them into { h flowt } Tt =1 R 4 D 2 , while keeping the temporal axis intact: h flowt = [ h a 2 t ; h wt ; h a 2 t h wt ; h a 2 t h a 2 ] .",
"(9) Temporal attention over attention flow .",
"We now have an embedding that aligns the semantic features of words with the time steps of audio features.",
"Subsequently, we apply temporal attention over it; the attention weight is calculated as in Lu-ong et al. (2015).",
"Specifically, we use the global method for each t in { h flowt } Tt =1 : m = align ( h decm , h flowt ) , (10) c m = (cid:88) t mt h flowt , (11) a m = tanh ( W dec [ c m ; h decm ]) , (12) where h decm RD o is the state of the decoder LSTM, c m R 4 D 2 is the context vector, m RT is the attention mask, and W dec RD o (4 D 2 + D o ) is a parameter.",
"Next, we obtain the output word probability: s m = softmax ( W o a m ) (13) where W o RV D o .",
"Finally, we select the output word as y m +1 = argmax s V ( s m ) .",
"We repeat this process until y m +1 reaches an EOS token.",
"The model is trained to maximize the log-likelihood assigned to the target labels via the softmax as done in most captioning models.",
"We perform several quantitative evaluations to provide more insights about our AudioCaps dataset.",
"Specifically, our experiments are designed to answer the following questions: 1. Are the collected captions indeed faithful to the audio inputs?",
"2. Which audio features are useful for audio captioning on our dataset?",
"3. What techniques can improve the performance of audio captioning?",
"We present further implementation details and more experimental results in the Appendix.",
"Some resulting audio-caption pairs can be found at https://audiocaps.github.io/supp.",
"Evaluation metrics .",
"Audio captioning can be quantitatively evaluated by the language similarity between the predicted sentences and the ground-truths (GTs) such as BLEU (Papineni et al., 2002), CIDEr (Vedantam et al., 2015), METEOR (Baner-jee and Lavie, 2005), ROUGE-L (Lin, 2004) and SPICE (Anderson et al., 2016).",
"In all metrics, higher scores indicate better performance.",
"Audio features .",
"Audios are resampled to 16kHz, and stereo is converted into mono by averaging both channels.",
"We zero-pad clips that are shorter than 10 seconds and extract three levels of audio features.",
"For the low-level audio feature, the lengthy raw audios are average-pooled by the WaveNet encoder as in Engel et al. (2017).",
"For the mid-level feature, mel-frequency cepstral co-efficients (MFCC) (Davis and Mermelstein, 1980) are extracted using librosa (McFee et al., 2015) with a window size of 1024, an overlap of 360 and the number of frames at 240, and encoded further with a bi-directional LSTM followed by a gated convolutional encoder (Xu et al., 2018).",
"Lastly, we use two high-level features: the 24th output layer of SoundNet 6 (Aytar et al., 2016) with a (10 1024) dimension and the final output embedding of VGGish 7 (Hershey et al., 2017) with a (10 128) dimension of (time embedding).",
"Video features .",
"To contrast with video captioning datasets, we also extract video features at the frame-level and at the sequence-level from YouTube clips.",
"For frame features, we use VGG16 (Simonyan and Zisserman, 2015) pretrained on the ILSVRC-2014 dataset (Rus-sakovsky et al., 2015).",
"For sequence features, we use C3D 8 (Tran et al., 2015) pretrained on the Sport1M dataset (Karpathy et al., 2014).",
"We extract subsequent frames with 50% overlap centered at each time step on the input clips for AudioSet videos, while proceeding with no overlap for MSR-VTT clips as in the original paper.",
"We sample videos at 25fps.",
"Retrieval methods .",
"As straightforward baselines, we test the 1-nearest search with audio features, denoted by 1NN-MFCC , 1NN-SoundNet and 1NN-VGGish .",
"For a query audio, we find its closest training audio using the (cid:96) 2 distance on the features and return its text as a prediction.",
"We mean-pool all the audio features over time, because it empirically leads to a strong performance.",
"LSTM methods .",
"As simple generative baselines, we test with the LSTM decoder, denoted by -LSTM postfix, where the encoded audio feature is set as the initial state of the LSTM.",
"For instance, WaveNet-LSTM is the model with the WaveNet encoder and the LSTM decoder.",
"We use a single-layer LSTM with dropout (Srivastava et al., 2014) and layer normalization (Ba et al., 2016).",
"Attention models .",
"We test two popular attention models developed in video captioning research:",
"(i) TempAtt (Luong et al., 2015; Yao et al., 2016) generates captions by selectively attending to audio features over time, and",
"(ii) SemAtt (You et al., 2016) creates text attending to attribute words as secondary information.",
"Our models .",
"We denote our top-down multi-scale encoder as the prefix TopDownand aligned semantic attention as AlignedAtt.",
"Upper-bounds .",
"Given that each test data has five human-generated captions, we perform cross validation on the five GT captions as an upper-bound of performance denoted as Human .",
"We regard one of five human annotations as model prediction and compute the performance metric with the other four as ground-truths.",
"After doing this on each of five, we then average the scores.",
"We discuss experimental results in response to the three questions regarding the AudioCaps dataset.",
"We first evaluate whether the collected audio-based captions are indeed loyal to the audio clips.",
"As one possible method to validate it, we perform comparative experiments with the video-oriented MSR-VTT dataset (Xu et al., 2016).",
"Note that MSR-VTT and AudioCaps both provide pairs of audio clips and its corresponding videos, allowing us to perform this comparative study.",
"We hypothesize that the captions from MSR-VTT would not coherently map to audio features, because they are written mainly based on the visual information.",
"In contrast, AudioCaps captions would be better aligned to audio features than visual features.",
"The results in Table 4 support our hypothesis.",
"In MSR-VTT, the video-based captioning model C3D-LSTM attains better scores than the preceding three audio-captioning models *-LSTM , while in AudioCaps the video-based model performs far worse than the audio models.",
"This may be due to our collection method of AudioCaps, which encourages turkers to submit the descriptions based on the audio rather than the visual.",
"Vocabulary comparison .",
"We also make comparisons between AudioCaps and MSR-VTT in terms of vocabulary usage in the captions.",
"We select the 1,800 most frequent vocabularies of verbs, adjectives and adverbs from each dataset, and run a user study in which three different workers are asked to categorize each sampled word into one of ( Audio, Visual, Both, Not Applicable ).",
"The category label per word is decided by a majority vote of three workers' opinions.",
"We use AMT once more to collect the unbiased opinions.",
"In or-Methods B-1 B-2 B-3 B-4 METEOR CIDEr ROUGE-L SPICE 1NN-MFCC 34.1 17.8 10.0 5.3 9.9 8.7 23.4 4.7 1NN-SoundNet (Aytar et al., 2016) 39.1 22.0 12.9 7.6 12.0 16.4 27.2 6.9 1NN-VGGish (Hershey et al., 2017) 44.2 26.5 15.8 9.0 15.1 25.2 31.2 9.2 WaveNet-LSTM (Engel et al., 2017) 48.9 31.5 20.2 13.0 13.8 29.6 35.5 9.0 MFCC-LSTM (Xu et al., 2018) 57.3 40.0 26.8 16.4 18.4 44.8 41.1 11.5 SoundNet-LSTM (Aytar et al., 2016) 54.0 38.0 26.4 17.6 16.5 43.2 39.2 10.8 VGGish-LSTM (Hershey et al., 2017) 58.7 42.3 29.8 20.4 18.7 50.4 42.6 13.0 TempAtt-WaveNet-LSTM (Luong et al., 2015) 50.7 34.3 22.9 14.8 14.8 28.2 36.4 8.6 TempAtt-MFCC-LSTM (Luong et al., 2015) 57.7 40.7 27.6 17.9 18.2 49.3 41.8 12.4 TempAtt-SoundNet-LSTM (Luong et al., 2015) 55.5 37.4 24.8 15.8 17.0 43.4 40.0 11.6 TempAtt-VGGish(FC2)-LSTM (Luong et al., 2015) 61.3 43.2 29.6 19.5 19.3 50.9 43.5 13.5 TempAtt-VGGish(C4)-LSTM (Luong et al., 2015) 61.8 44.5 30.7 20.4 19.4 55.3 44.0 13.2 TempAtt-VGGish(C3)-LSTM (Luong et al., 2015) 61.2 44.1 30.3 20.9 19.0 52.3 43.7 13.0 TopDown-VGGish(FC2,C4)-LSTM 62.9 45.1 31.5 21.4 19.9 57.7 44.8 14.3 TopDown-VGGish(FC2,C4,C3)-LSTM 60.9 43.7 30.7 20.8 20.0 55.8 43.7 13.6 TopDown-SemTempAtt(1NN) (You et al., 2016) 62.2 44.9 31.3 20.9 20.2 58.1 44.9 13.6 TopDown-AlignedAtt(1NN) 61.4 44.6 31.7 21.9 20.3 59.3 45.0 14.4 Human 65.4 48.9 37.3 29.1 28.8 91.3 49.6 21.6 Table 2: Captioning results of different methods on AudioCaps measured by language similarity metrics.",
"der to guarantee thoughtful submissions, we ask the workers to provide a description using the word.",
"We compensate $0.05 per word to English-speaking workers with a 95% approval rate.",
"Figure 5 shows that AudioCaps has more vocabularies tagged as Audio ( e.g . neighs, rustling ) by 18.9%p more than MSR-VTT.",
"Furthermore, 56.3% of the total vocabularies in AudioCaps are categorized as audio-related, that is, labeled as Audio or Both ( e.g . vibrating, applauds ).",
"Hence, this vocabulary comparison result reassures that AudioCaps is more audio-oriented than MSR-VTT.",
"are more suitable for captioning on AudioCaps.",
"The best results are obtained by VGGish-LSTM .",
"This may be because VGGish is pretrained on YouTube audio clips, similar to AudioCaps.",
"Although the topics of YouTube are extremely diverse, the domain proximity may help VGGish learn more utilizable features for AudioCaps.",
"SoundNet-LSTM shows inferior performance compared to VGGish-LSTM , one possible reason being because it is pretrained with Flickr videos, which are rather distant in domain from the source of our dataset, in terms of topic diversity and the amount of possible noise.",
"MFCC-LSTM does not perform as well as VGGish-LSTM , even with the similar convolutional recurrent encoder.",
"This result hints that pretraining with a proper dataset is essential for audio captioning.",
"A comparison between MFCC-LSTM and WaveNet-LSTM reveals that using MFCC is better than directly taking raw waveform as input.",
"The raw waveform is relatively long ( > 500 longer than MFCC); hence, it may pose a difficulty for RNN-based encoders to precisely represent the whole audio context.",
"Temporal attention consistently boosts the captioning performance of the LSTM decoder in all audio features, as shown in the models with TempAttprefix in Table 2. No-(Ours)",
"tably, a large performance gain is observed for TempAtt-MFCC-LSTM .",
"This may be because MFCC features are transformed to temporally longer features than SoundNet and VGGish features (240 > 10), and thus allow temporal attention to better aid the model and bypass the vanishing gradient problem.",
"The semantic attention is also favorable for captioning performance, as SemTempAtt(1NN)-VGGish-LSTM in Table 3 slightly outperforms TempAtt-VGGish(FC2)-LSTM in Table 2. That is, the additional use of semantic attention enhances the temporal attention model.",
"Obviously, when using GT labels instead of 1NN retrieved labels as attribute words, the performance increases much, hinting that better semantic attributes are more synergetic with the aligned attention.",
"The comparison between different layers (C4, C3, FC2) confirms the effectiveness of jointly using multi-level features.",
"The fused features by the top-down multi-scale encoder ( i.e . TopDown) prove the most beneficial as they outperform their counterparts in Table 2. However, a stack of (FC2,C4) layers performs the best, while the three layer stack is slightly inferior, presumably due to overfitting and weak information flow between the upper and lower levels of the stacks.",
"Finally, our best performing model is TopDown-AlignedAtt where both the topdown multi-scale encoder and aligned semantic attention are jointly used.",
"We postulate that the two techniques synergize well thanks to rich information provided by TopDown allowing for better attention alignment.",
"Figure 6 shows selected examples of audio captioning.",
"In each set, we show a video frame, GT and text descriptions generated by our method and baselines.",
"Many audio clips consist of sounds with multiple sources in sequence, for which baselines often omit some details or mistakenly order the event sequence, whereas our model is better at capturing the details in the correct order.",
"We addressed a new problem of audio captioning for sound in the wild.",
"Via Amazon Mechanical Turk, we contributed a large-scale dataset named AudioCaps , consisting of 46K pairs of audio clips and human-written text.",
"In our experiments, we showed that the collected captions were indeed faithful to the audio inputs as well as improve the captions by two newly proposed components: the top-down multi-scale encoder and aligned semantic attention.",
"There are several possible directions beyond this work.",
"First, we can further expand the scope of AudioCaps .",
"Second, our model is integrable with speech counterparts to achieve more complete auditory captioning tasks.",
"We would like to thank SNU Vision & Learning Lab members and Yunseok Jang for the helpful comments and discussions.",
"This work is supported by Kakao and Kakao Brain corporations and the international cooperation program by the NRF of Korea (NRF-2018K2A9A2A11080927).",
"Gunhee Kim is the corresponding author."
] | [
"objective",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"method",
"objective",
"objective",
"objective",
"objective",
"objective",
"result",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"abstain",
"objective",
"result",
"other",
"other",
"other"
] |
[
"Open Information Extraction ( OPENIE ) extracts meaningful structured tuples from freeform text.",
"Most previous work on OPENIE considers extracting data from one sentence at a time.",
"We describe NEURON , a system for extracting tuples from question-answer pairs.",
"Since real questions and answers often contain precisely the information that users care about, such information is particularly desirable to extend a knowledge base with.",
"NEURON addresses several challenges.",
"First, an answer text is often hard to understand without knowing the question, and second, relevant information can span multiple sentences.",
"To address these, NEURON formulates extraction as a multi-source sequence-to-sequence learning task, wherein it combines distributed representations of a question and an answer to generate knowledge facts.",
"We describe experiments on two real-world datasets that demonstrate that NEURON can find a significant number of new and interesting facts to extend a knowledge base compared to state-of-the-art OPENIE methods.",
"Open Information Extraction ( OPENIE ) (Banko et al., 2007) is the problem of extracting structured data from a text corpus, without knowing a priori which relations will be extracted.",
"It is one of the primary technologies used in building knowledge bases (KBs) that, in turn, power question answering (Berant et al., 2013).",
"The vast majority of previous work on OPENIE extracts structured information (e.g., triples) from individual sentences.",
"This paper addresses the problem of extracting structured data from conversational question-answer ( CQA ) data.",
"Often, CQA data contains precisely the knowledge that users care about.",
"As Part of the work was done while the author was at Megagon Labs.",
"such, this data offers a goal-directed method for extending existing knowledge bases.",
"Consider, for example, a KB about a hotel that is used to power its website and/or a conversational interface for hotel guests.",
"The KB provides information about the hotel's services: complimentary breakfast, free wifi, spa.",
"However, it may not include information about the menu/times for the breakfast, credentials for the wifi, or the cancellation policy for a spa appointment at the hotel.",
"Given the wide range of information that may be of interest to guests, it is not clear how to extend the KB in the most effective way.",
"However, the conversational logs, which many hotels keep, contain the actual questions from guests, and can therefore be used as a resource for extending the KB.",
"Following examples illustrate the kind of data we aim to extract: Example",
"Example",
"2. Q: What time does the pool open?",
"A: 6:00am daily.",
"Tuple: (cid:104) pool, open, 6:00am daily (cid:105) As can be seen from these examples, harvesting facts from CQA data presents significant challenges.",
"In particular, the system must interpret information collectively between the questions and answers.",
"In this case, it must realize that third floor' refers to the location of the gym' and that 6:00am refers to the opening time of the pool.",
"OPENIE systems that operate over individual sentences ignore the discourse and context in a QA pair.",
"Without knowing the question, they either fail to or incorrectly interpret the answer.",
"This paper describes NEURON , an end-to-end system for extracting information from CQA data.",
"We cast OPENIE from CQA as a multi-source sequence-to-sequence generation problem to explicitly model both the question and answer in a QA pair.",
"We propose a multi-encoder, constrained-decoder framework that uses two encoders to encode each of the question and answer to an internal representation.",
"The two representations are then used by a decoder to generate an output sequence corresponding to an extracted tuple.",
"For example, the output sequence of Example 2 is: (cid:104) arg 1 (cid:105) pool (cid:104) / arg 1 (cid:105)(cid:104) rel (cid:105) open (cid:104) / rel (cid:105)(cid:104) arg 2 (cid:105) 6:00am daily (cid:104) / arg 2 (cid:105) While encoder-decoder frameworks have been used extensively for machine translation and summarization, there are two key technical challenges in extending them for information extraction from CQA data.",
"First, it is vital for the translation model to learn constraints such as, arguments and relations are sub-spans from the input sequence, output sequence must have a valid syntax (e.g., (cid:104) arg 1 (cid:105) must precede (cid:104) rel (cid:105) ).",
"These and other constraints can be integrated as hard constraints in the decoder.",
"Second, the model must recognize auxiliary information that is irrelevant to the KB.",
"For example, in the hotel application, NEURON must learn to discard greetings in the data.",
"Since existing facts in the KB are representative of the domain of the KB, this prior knowledge can be incorporated as soft constraints in the decoder to rank various output sequences based on their relevance.",
"Our contributions are summarized below: We develop NEURON , a system for extracting information from CQA data.",
"NEURON is a novel multi-encoder constrained-decoder method that explicitly models both the question and the answer of a QA pair.",
"It incorporates vocabulary and syntax as hard constraints and prior knowledge as soft constraints in the decoder.",
"We conduct comprehensive experiments on two real-world CQA datasets.",
"Our experimental results show that the use of hard and soft constraints improves the extraction accuracy and NEURON achieves the highest accuracy in extracting tuples from QA pairs compared with state-of-the-art sentence-based models, with a relative improvement as high as 13.3%.",
"NEURON 's higher accuracy and ability to discover 15-25% tuples that are not extracted by state-of-the-art models make it suitable as a tuple extraction tool for KB extension.",
"We present a case study to demonstrate how a KB can be extended iteratively using tuples extracted using NEURON .",
"In each iteration, only relevant tuples are included in the KB.",
"In turn, the extended KB is used to improve relevance scoring for subsequent iterations.",
"In this work, we choose to model an OPENIE extraction from a question-answer (QA) pair as a tuple consisting of a single relation with two arguments, where the relation and arguments are contiguous spans from the QA pair.",
"Formally, let ( q, a ) be a QA pair, where question q = ( q 1 , q 2 , ..., q m ) and answer a = ( a 1 , a 2 , ..., a n ) are word sequences.",
"The output is a triple ( arg 1 , rel , arg 2 ) extracted from ( q, a ) .",
"The output triple can be naturally interpreted as a sequence y = ( y 1 , y 2 , ..., y o ) where y i is either a word or a placeholder tag ( (cid:104) arg 1 (cid:105) , (cid:104) rel (cid:105) , (cid:104) arg 2 (cid:105) ) that marks relevant portions of the triple.",
"In OPENIE , the extracted tuple should be asserted by the input QA pair.",
"Formulating this, therefore, requires the vocabulary of y to be restricted to the vocabulary of ( q, a ) and placeholder tags.",
"Following this definition, our aim is to directly model the conditional probability p ( y | q, a ) of mapping input sequences q and a into an output sequence: P ( y | q, a ) = o (cid:89) i =1 p ( y i | y 1 , . . . , y i 1 , q, a ) .",
"In our formulation, a triple is generated as a sequence: a head argument phrase arg 1 , followed by a relation phrase rel and a tail argument phrase arg 2 .",
"It is possible to consider different orderings in the output sequence (such as ( rel , arg 1 , arg 2 )).",
"However, the goal of OPENIE is to identify the relation phrase that holds between a pair of arguments.",
"Our representation is, thus, consistent with this definition as it models the relation phrase to depend on the head argument.",
"Overview of NEURON We propose to extract tuples using a variation of an encoder-decoder RNN architecture (Cho et al., 2014) operating on variable-length sequences of tokens.",
"Fig. 1 shows the architecture of NEURON .",
"It uses two encoders to encode question and answer sequences in a QA pair separately into fixed-length vector representations.",
"A decoder then decodes the vector representations into a variable-length sequence corresponding to the tuple.",
"The decoder is integrated Is the indoor open Question Encoder Constrained-Decoder vocab-mask combiner tag-mask x x arg1 indoor pool",
"with a set of hard constraints (e.g., output vocabulary) and soft constraints (e.g., relevance scoring) suited for the extraction task.",
"Given an input QA pair, two RNN encoders separately encode the question and answer.",
"The question encoder converts q into hidden representation h q = ( h q 1 , ..., h qm ) and the answer encoder converts a into h a = ( h a 1 , ..., h qn ) , where h qt = lstm ( q t , h qt 1 ) is a non-linear function represented by the long short-term memory (LSTM) cell.",
"The combiner combines the encoders' states and initializes the hidden states h for the decoder: h = tanh( W c [ h q h a ]) , where denotes concatenation.",
"The decoder stage uses the hidden states to generate the output y with another LSTM-based RNN.",
"The probability of each token is defined as: p ( y t ) = softmax (( s t c qt c at ) W y ) , (2) where s t denotes the decoder state, s 0 = h and s t = lstm (( y t 1 c qt c at ) W s , s t 1 ) .",
"The decoder is initialized by the last hidden state from the combiner.",
"It uses the previous output token at each step.",
"Both W y and W s are learned matrices.",
"Each decoder state is concatenated with context vectors derived from the hidden states of the encoders.",
"Context vector c t is the weighted sum of the encoder hidden states, i.e. c qt = (cid:80) mi =1 t i h qi , where t i corresponds to an attention weight.",
"The attention model (Bahdanau et al., 2015) helps the model learn to focus on specific parts of the input sequences, instead of solely relying on hidden vectors of the decoders' LSTM.",
"This is crucial for extraction from ( q, a ) pairs where input sequences tend to be long.",
"The decoder finds the best hypothesis (i.e., the best output sequence) for the given input representations.",
"Typically, the output sequence is generated, one unit at a time, using beam search .",
"At each time step, the decoder stores the topk scoring partial sequences, considers all possible single token extensions of them, and keeps k most-likely sequences based on model's probabilities (Eq. 1).",
"As soon as the (cid:104) /S (cid:105) symbol is appended, the sequence is removed from the beam and added to the set of complete sequences.",
"The most-likely complete sequence is finally generated.",
"Hard Constraints While such encoder-decoder models typically outperform conventional approaches (Cho et al., 2014; Zoph and Knight, 2016; Xiong et al., 2017) on a wide variety of tasks including machine translation and question answering, the accuracy and training efficiency has been shown to improve when the model is integrated with the constraints of the output domain (Xiao et al., 2016; Yin and Neubig, 2017).",
"Motivated by these, NEURON allows constraints relevant to information extraction to be incorporated in the model.",
"Specifically, we describe how the decoder can enforce vocabulary and structural constraints on the output.",
"Vocabulary constraints.",
"Since the arguments and relations in the extracted tuples typically correspond to the input QA pair, the decoder must constraint the space of next valid tokens when generating the output sequence.",
"NEURON uses a masking technique in the decoder to mask the probability of tokens (as in Eq. 2) that do not appear in the input ( q, a ) pair.",
"Specifically, it computes a binary mask vector v , where | v | is vocabulary size and v i = 1 if and only if i -th token appears in q or a .",
"The probability of each token is modified as: p ( y t ) = softmax (( s t c qt c at ) W y v ) , (3) S S : ( V T )\\ arg 1 B arg 1 I arg 1 I arg \u0000 1 B rel I rel I rel \u0000 B arg 1 I arg 1 I arg \u0000 1 / S rel : T arg 2 : T arg 1 : T w V : T \\/ arg 1 w V : T \\/ arg 1 w V : T \\/ rel w V : T \\/ rel w V : T \\/ arg 2 w V : T \\/ arg 2 ( input ):( tag mask ) / arg 1 : ( V T )\\ rel / rel : ( V T )\\ arg 2 / arg 2 : ( V T )\\/ S Figure 2: State diagram for tag masking rules.",
"Structural constraints.",
"For the output sequence to correspond to a valid tuple with nonempty arguments, the decoding process must conform to the underlying grammar of a tuple.",
"For instance, decoding should always begin in the (cid:104) S (cid:105) state, where only (cid:104) arg 1 (cid:105) can be generated.",
"In subsequent time steps, all other placeholders except (cid:104) /arg 1 (cid:105) should be restricted to ensure a nonempty argument.",
"Once (cid:104) /arg 1 (cid:105) is generated, (cid:104) rel (cid:105) must be generated in the next time step and so on.",
"The various states and grammar rules can be described as a finite state transducer as shown in Figure",
"2. Depending upon the state, NEURON generates a mask r based on this grammar and uses r to further modify the probabilities of the tokens as follows: p ( y t ) = softmax (( s t c qt c at ) W y v r ) .",
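One common way to realize such masking in code is shown below; the paper's notation multiplies the masks inside the softmax, whereas this sketch zeroes out probabilities after the softmax and renormalizes, which differs slightly in detail. The helper is our illustration, not NEURON's implementation.

```python
import torch.nn.functional as F

def constrained_token_probs(logits, vocab_mask, tag_mask):
    # logits:     (V,) decoder scores before normalization
    # vocab_mask: (V,) 1 iff the token occurs in the input (q, a) pair
    # tag_mask:   (V,) 1 iff the tuple grammar allows the token in this state
    mask = vocab_mask * tag_mask
    probs = F.softmax(logits, dim=0) * mask   # zero out disallowed tokens
    return probs / probs.sum()                # renormalize over allowed ones
```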
"Soft Constraints OPENIE systems are typically used to extract broad-coverage facts to extend existing KBs.",
"Facts already existing in the KB are representative of the domain of the KB.",
"It is, therefore, useful to incorporate this prior knowledge in the extraction itself.",
"NEURON is able to use prior knowledge (incorporated as soft constraints) in the decoder to understand the relevance of candidate extractions and adjust the ranking of various output sequences accordingly.",
"To see why such soft constraints can be useful, consider the example: Example",
"3. Q: Is the pool open?",
"A: I am sorry but our pool reopens at 7:00am.",
"Tuple: (cid:104) I, am, sorry (cid:105) ; (cid:104) pool, reopens at, 7:00am (cid:105) Both the tuple facts are correct given the input QA pair but only the second tuple contains useful information.",
"Filtering such irrelevant facts is difficult without additional evidence.",
"The multi-encoder and constrained-decoder in NEURON are jointly optimized to maximize the log probability of output sequence conditioned on the input sequences.",
"At inference, the decoder estimates the likelihood of various candidate sequences and generates the sequence with the highest likelihood.",
"As shown in Eq.",
"1, this likelihood is conditioned solely on the input ( q, a ) pair, thus increasing the possibility of obtaining facts that may be correct but irrelevant.",
"Instead, if a relevance scoring function were integrated at extraction time, the candidate output sequences could be re-ranked so that the predicted output sequence is likely to be both correct and relevant.",
"Learning a relevance scoring function can be modeled as a KB completion task, where missing facts have to be inferred from existing ones.",
"A promising approach is to learn vector representations of entities and relations in a KB by maximizing the total plausibility of existing facts in the KB (Wang et al., 2017).",
"For a new candidate output sequence, its plausibility can be predicted using the learned embeddings for the entities and relation in the sequence.",
"In NEURON , we learn the entity and relation embeddings using Knowledge Embedding (KE) methods such as TransE (Bordes et al., 2013) and HolE (Nickel et al., 2016).",
"Note that NEURON is flexible with how the relevance scoring function is learned or which KE method is chosen.",
"In this paper, we use TransE for evaluation.",
"TransE computes the plausibility score S of a tuple y = (cid:104) arg 1 , rel, arg 2 (cid:105) as: S ( y ) = || v arg 1 + v rel v arg 2 || , where v arg 1 , v rel , and v arg 2 are embedding vectors for arg 1 , rel , and arg 2 respectively.",
"Following (Jain et al., 2018), we compute the embedding vectors of out-of-vocabulary arguments (and relations) as the average of embedding vectors of known arguments (and relations).",
"We generate the most-likely output based on its conditional probability and plausibility score: y = argmax y (log P ( y | q, a ) + log S ( y )) .",
"To implement the tuple relevance scoring function, we employ the re-ranking approach, which is a common technique for sequence generation methods (Luong et al., 2015b).",
"Our re-ranking method first obtains candidates from a beam search decoder and then re-ranks the candidates based on the objective function (Eq. 5).",
"We evaluated the performance of NEURON on two CQA datasets.",
"In our analysis, we find that integrating hard and soft constraints in the decoder improved the extraction performance irrespective of the number of encoders used.",
"Also, 15-25% of the tuples extracted by NEURON were not extracted by state-of-the-art sentence-based methods.",
"ConciergeQA is a real-world internal corpus of 33,158 QA pairs collected via a multi-channel communication platform for guests and hotel staff.",
"Questions (answers) are always made by guests (staff).",
"An utterance has 36 tokens on average, and there are 25k unique tokens in the dataset.",
"A QA utterance has 2.8 sentences on average, with the question utterance having 1.02 sentences on average and answer utterance having 1.78 sentences on average.",
"AmazonQA (Wan and McAuley, 2016; McAuley and Yang, 2016) is a public dataset with 314,264 QA pairs about electronic products on ama-zon.com .",
"The dataset contains longer and more diverse utterances than the ConciergeQA dataset: utterances have an average of 45 tokens and the vocabulary has more than 50k unique tokens.",
"A QA utterance has 3.5 sentences on average.",
"The question utterances had 1.5 sentences on average and the answer having 2 sentences.",
"For training NEURON , we bootstrapped a large number of high-quality training examples using a state-of-the-art OPENIE system.",
"Such bootstrapping has been shown to be effective in information extraction tasks (Mausam et al., 2012; Saha et al., 2017).",
"The StanfordIE (Angeli et al., 2015) system is used to extract tuples from QA pairs for training examples.",
"To further obtain high-quality tuples, we filtered out tuples that occur too infrequently ( < 5 ) or too frequently ( > 100 ).",
"For each tuple in the set, we retrieved all QA pairs that contain all the content words of the tuple and included them in the training set.",
"This helps create a training set encapsulating the multiplicity of ways in which tuples are expressed across QA pairs.",
"We randomly sampled 100 QA pairs from our bootstrapping set and found 74 of them supported the corresponding tuples.",
"We find this quality of bootstrapped dataset satisfactory, since the seed tuples Instance type ConciergeQA AmazonQA Exclusively from question 13.9 % 13.8 % Exclusively from answer 25.8 % 17.6 % Ambiguous 36.9 % 29.8 % Jointly from Q-A 23.4 % 38.8 % Table 1: Various types of training instances.",
"Our bootstrapped dataset included training instances where a tuple matched",
"(a) tokens in the questions exclusively,",
"(b) tokens in the answers exclusively,",
"(c) tokens from both questions and answers.",
"Table 1 shows the distribution of the various types of training instances.",
"Less than 40% (30%) of ground truth tuples for ConciergeQA ( AmazonQA ) exclusively appear in the questions or answers.",
"Also, 22.1% (37.2%) of ground truth tuples for ConciergeQA ( AmazonQA ) are extracted from the combination of questions and answers.",
"These numbers support our motivation of extracting tuples from QA pairs.",
"We used standard techniques to construct training/dev/test splits so that QA pairs in the three sets are disjoint.",
"Table 2 shows the details of the various subsets.",
"We compared NEURON with two methods that can be trained for tuple extraction from QA pairs: BILSTM-CRF (Huang et al., 2015) and NEURALOPENIE (Cui et al., 2018).",
"BILSTM-CRF is a sequence tagging model that has achieved state-of-the-art accuracy on POS, chunking, NER and OPENIE (Stanovsky et al., 2018) tasks.",
"For OPENIE , the model predicts boundary labels (e.g., B-ARG1, I-ARG1, B-ARG2, O) for the various tokens in a QA pair.",
"NEURALOPENIE is an encoder-decoder model that generates a tuple sequence given an input sequence.",
"Since it uses a single encoder, we generate the input sequence by concatenating the question and answer in a QA pair.",
"We trained all the models using the same training data.",
"We examine the performance of different methods using three metrics: precision , recall , and relative coverage (RC).",
"Given a QA pair, each system returns a sequence.",
"We label the sequence correct if it matches one of the ground-truth tuples for the QA pair, incorrect otherwise.",
"We then measure precision of a method (i.e., # of correct predictions of the method / # of question-answer pairs) and recall (i.e., # of correct predictions of the method / # of correct predictions of any method) following (Stanovsky and Dagan, 2016).",
"To compare the coverage of sequences extracted by NEURON against the baseline method, we compute relative coverage of NEURON as the fraction of all correct predictions that were generated exclusively by NEURON .",
"Specifically, RC = | TPNEURON \\ TP baseline | | TPNEURON (cid:83) TP baseline | , where TP denotes the correct predictions.",
"4.4 Model Training and Optimization We implemented NEURON using OpenNMT-tf (Klein et al., 2017) , an open-source neural machine translation system that supports multi-source encoder-decoder models.",
"We implemented NEURALOPENIE using the same system.",
"We used the open-source implementation of BILSTM-CRF (Reimers and Gurevych, 2017).",
"For fair comparison, we used identical configurations for NEURON and NEURALOPENIE .",
"Each encoder used a 3-layer bidirectional LSTM and the decoder used a 3-layer bidirectional LSTM.",
"The models used 256-dimensional hidden states, 300-dimensional word embeddings, and a vocabulary size of 50k.",
"The word embeddings were initialized with pre-trained GloVe embeddings (glove.6B) (Penning-ton et al., 2014).",
"We used an initial learning rate of 1 and optimized the model with stochastic gradient descent.",
"We used a decay rate of 0.7, a dropout rate of 0.3 and a batch size of 64.",
"The models were trained for 1M steps for the ConciergeQA dataset and 100k steps for the AmazonQA dataset.",
"We used TESLA K80 16GB GPU for training the models.",
"We trained the KE models for relevance scoring using our bootstrapped training dataset.",
"For integrating the relevance scoring function, we experimented with different values for and found it not have a major impact within a range of 0.02 to 0.2.",
"We used a value of 0.05 in all the experiments.",
"The BILSTM-CRF model showed extremely low ( 2 15% ) precision values.",
"Very few of the tagged Method P R RCNEURALOPENIE (baseline) 0.769 0.580 -+ hard constraints 0.776 0.585 -+ hard and soft constraints 0.796 0.600 NEURON (our method) 0.791 0.597 0.224 + hard constraints 0.792 0.597 0.204 + hard and soft constraints 0.807 0.608 0.245 Table 3: Precision (P), Recall (R), and Relative Coverage (RC) results on ConciergeQA .",
"sequences ( 32 39% ) could be converted to a tuple.",
"Most tagged sequences had multiple relations and arguments, indicating that it is difficult to learn how to tag a sequence corresponding to a tuple.",
"The model only learns how to best predict tags for each token in the sequence, and does not take into account the long-range dependencies to previously predicted tags.",
"This is still an open problem and is outside the scope of this paper.",
"Tables 3 and 4 show the performance of NEURALOPENIE and NEURON on the two CQA datasets.",
"NEURON achieves higher precision on both the datasets.",
"This is because NEURALOPENIE uses a single encoder to interpret the question and answer in the same vector space, which leads to lower performance.",
"Furthermore, concatenating the question and answer makes the input sequence too long for the decoder to capture long-distance dependencies in history (Zhang et al., 2016; Toral and Sanchez-Cartagena, 2017).",
"Despite the attention mechanism, the model ignores past alignment information.",
"This makes it less effective than the dual-encoder model used in NEURON .",
"The tables also show that incorporating task-specific hard constraints helps further improve the overall precision and recall, regardless of the methods and the datasets.",
"Re-ranking the tuples based on the soft constraints derived from the existing KB further improves the performance of both methods in ConciergeQA and NEURALOPENIE in AmazonQA .",
"The existing KB also helps boost the likelihood of a correct candidate tuple sequence that was otherwise scored to be less likely.",
"Lastly, we found that NEURON has significant relative coverage; it discovered significant additional, unique tuples missed by NEURALOPENIE .",
"Table 4 shows a slight decrease in performance for NEURON after soft constraints are added.",
"This is likely caused by the lower quality KE model due to the larger vocabulary in AmazonQA .",
"In contrast, even with the lower quality KE model, NEU Method P R RCNEURALOPENIE (baseline) 0.557 0.594 -+ hard constraints 0.563 0.601 -+ hard and soft constraints 0.571 0.610 NEURON (our method) 0.610 0.652 0.139 + hard constraints 0.631 0.674 0.164 + hard and soft constraints 0.624 0.666 0.149 Table 4: Precision (P), Recall (R), and Relative Coverage (RC) results on AmazonQA dataset.",
"RALOPENIE improved slightly.",
"This is likely because the NEURALOPENIE model, at this stage, still had a larger margin for improvement.",
"We note however that learning the best KE model is not the focus of this work.",
"AmazonQA is a more challenging dataset than ConciergeQA : longer utterances (avg. 45 tokens vs. 36 tokens) and richer vocabulary ( > 50k unique tokens vs. < 25k unique tokens).",
"This is reflected in lower precision and recall values of both the systems on the AmazonQA dataset.",
"While the performance of end-to-end extraction systems depends on the complexity and diversity of the dataset, incorporating hard and soft constraints alleviates the problem to some extent.",
"End-to-end extraction systems tend to outperform rule-based systems on extraction from CQA datasets.",
"We observed that training data for ConciergeQA had a large number ( > 750k) dependency-based pattern rules, of which < 5% matched more than 5 QA pairs.",
"The set of rules is too large, diverse and sparse to train an accurate rule-based extractor.",
"Even though our training data was generated by bootstrapping from a rule-based extractor StanfordIE, we found only 51.5% (30.7%) of correct tuples from NEURON exactly matched the tuples from StanfordIE in ConciergeQA ( AmazonQA ).",
"This indicates that NEURON combined information from question and answer, otherwise not accessible to sentence-wise extractors.",
"As an evidence, we found 11.4% (6.1%) of tuples were extracted from answers, 16.8% (5.0%) from questions, while 79.6% (82.5%) combined information from questions and answers in ConciergeQA ( AmazonQA ).",
"Multiple Encoders: Our motivation to use different encoders for questions and answers is based on the assumption that they use different vocabulary and semantics.",
"We found that there were 8k (72k) unique words in questions, 18k (114k) unique words in answers, and the Jaccard co-Figure 3: Example embedding vectors from question and answer encoders.",
"Underlines denote similar embedding vectors in both the encoders.",
"efficient between two vocabulary sets was 0.25 (0.25) in ConciergeQA ( AmazonQA ), indicating that two sources use significantly different vocabulary.",
"Also, the same word can have different meanings depending on a speaker, and thus such words in the two sources should be embedded differently.",
"To visualize the embedding vectors of common words in ConciergeQA , we mapped them into 2D space using t -SNE (Maaten and Hinton, 2008).",
"Fig. 3 shows that subjective words that represents speakers attitude (e.g., ready, guests, time) had significantly different embeddings in the question and answer encoders.",
"In contrast, objective words such as menu, or activity (e.g., ba-con, cruise, weekday) had similar embeddings although the two encoders do not directly share the embedding parameters.",
"This indicates that multiple encoders not only capture the different meanings in questions and answers but also retain consistent meanings for words that keep the same meanings in the two sources.",
"Relevance Scoring: We compared with another NEURON model that uses HolE (Nickel et al., 2016) for relevance scoring.",
"Both the HolE and TransE models achieved the same precision of 80.7%, with HolE achieving slightly higher recall (+1.4%).",
"This suggests that incorporating relevance scoring in NEURON can robustly improve the extraction accuracy, regardless of the choice of the knowledge embedding method.",
"We also estimated the upper-bound precision by evaluating if the correct tuple was included in the top-500 candidates.",
"The upper-bound precision was 85.0% on ConciergeQA , indicating that there is still room for improvement on incorporating relevance scoring.",
"We examined a random sample of 100 errors shared by all the systems across the tested datasets.",
"Arguably, encoder-decoder models suffer when extracting tuples from long utterances (avg. of 54 tokens), contributing to 43% of the errors.",
"34% of the incorrectly extracted tuples used words that were shared across the two sources.",
"This indicates that the extractor makes errors when resolving ambiguity in tokens.",
"28% of the error cases used informal language that is generally difficult for any extractor to understand.",
"We show some examples (1 and 2 in Table 5) where NEURON successfully combined information across two sources and examples (3 and 4 in Table 5) where it failed.",
"We further examined three different scenarios:",
"a) errors are shared by both NEURON and NEURALOPENIE ,",
"b) errors are made exclusively by NEURON ,",
"c) errors are made exclusively by NEURALOPENIE .",
"For each scenario, we examined a random sample of 100 errors.",
"We categorize the different sources of errors and report the results in Table 6.",
"As shown, NEURON is superior on longer utterances compared to NEURALOPENIE (54 tokens vs. 49 tokens).",
"However, ambiguity in tokens in the two sources is a concern for NEURON because it has the flexibility to interpret the question and answer differently.",
"Not surprisingly, informal utterances are hard to translate for both the systems.",
"The extracted tuples from NEURON can be used to extend a KB for a specific domain.",
"However, automatically fusing the tuples with existing facts in Error Category N , B N , B N , B long utterances 43% 45% 40% avg.",
"(",
"the KB can have limited accuracy.",
"This can be due to noise in the source conversation, no prior knowledge of join rules and more.",
"One possible solution is to design a human-in-the-loop system that iteratively extracts tuples and filters them based on human feedback (Fig. 4).",
"In each iteration, a set of tuples is annotated by human annotators based on their relevance to the domain of the KB.",
"The tuples marked relevant are added to the KB and the relevance scoring function is updated for extracting more relevant tuples from the corpus in the next iteration.",
"We conducted a crowdsourced experiment 1 , simulating the first iteration of the procedure i.e., when no KE model is available.",
"We collected annotations on top5 tuples extracted by NEURON for 200 QA pairs in the ConciergeQA dataset.",
"For reliability, we hired five workers for each extraction.",
"The workers were asked to judge if a tuple is relevant to the hotel domain and represents concrete information to be added to a KB.",
"We found preci-sion@5 was 41.4%, and NEURON extracted at least one useful tuple for 83.0% of the 200 QA pairs.",
"Overall, the system added 243 unique tuples (out of 414 tuples extracted by NEURON ) to the KB.",
"We also collected annotations for the tuples extracted by NEURALOPENIE .",
"The precision@5 and recall@5 values were 41.3% and 79.0% respectively.",
"Although the precision values are quite similar, NEURON can extract correct tuples from more QA pairs than NEURALOPENIE .",
"While the precision can further be improved, the preliminary results support that NEURON is a good candidate for extraction in 1 https://www.figure-eight.com/ a human-in-the-loop system for KB extension.",
"We did not use any sophisticated methods for ranking tuples in our experiment.",
"Thus, a better ranking algorithm might lead to improved precision.",
"There is a long history of OPENIE systems for extracting tuples from plain text.",
"They are built on hand-crafted patterns over an intermediate representation of a sentence (e.g., POS tags (Yates et al., 2007; Fader et al., 2011), dependency trees (Bhutani et al., 2016; Mausam et al., 2012)).",
"Such rule-based systems require extensive engineering when the patterns become diverse and sparse.",
"Recently, OPENIE systems based on end-to-end frameworks, such as sequence tagging (Stanovsky et al., 2018) or sequence-to-sequence generation (Cui et al., 2018), have been shown to alleviate such engineering efforts.",
"However, all these systems focus on sentence-level extraction.",
"We are the first to address the problem of extracting tuples from question-answer pairs.",
"Our proposed system is based on an encoder-decoder architecture, which was first introduced by Cho et al. for machine translation.",
"Attention mechanisms (Bahdanau et al., 2015; Luong et al., 2015b) have been shown to be effective for mitigating the problem of poor translation performance on long sequences.",
"Their model can learn how much information to retrieve from specific parts of the input sequence at decoding time.",
"There is abundant research on generalizing such frameworks for multiple tasks, specially by employing multiple encoders.",
"Using multiple encoders has been shown to be useful in mutli-task learning (Luong et al., 2015a), multi-source translation (Zoph and Knight, 2016) and reading comprehension (Xiong et al., 2017).",
"We are the first to explore a multi-source encoder-decoder architecture for extracting tuples from CQA datasets.",
"Traditional encoder-decoder architectures are not tailored for information extraction and knowledge harvesting.",
"To make them suitable for information extraction, the sequence generation must be subjected to several constraints on the vocabulary, grammar etc.",
"Recently, grammar structures have been integrated into encoder-decoder models (Iyer et al., 2017; Zhang et al., 2017).",
"There are variations such as Pointer Networks (Vinyals et al., 2015) that yield a succession of pointers to tokens in the input sequence.",
"All these studies share a common idea with our paper, which is to enforce constraints at sequence generation time.",
"Since we focus on extraction from CQA datasets, our work is broadly related to the literature on relation extraction (Savenkov et al., 2015; Hixon et al., 2015; Wu et al., 2018) and ontology extraction (S and Kumar, 2018) from community generated question-answer datasets.",
"However, we differ in our underlying assumption that the relations and entities of interest are not known in advance.",
"Alternatively, a CQA dataset could be transformed into declarative sentences (Demszky et al., 2018) for a conventional OPENIE system.",
"However, such a two-stage approach is susceptible to error propagation.",
"We adopt an end-to-end solution that is applicable to generic CQA datasets.",
"We have presented NEURON , a system for extracting structured data from QA pairs for the purpose of enriching knowledge bases.",
"NEURON uses a multi-encoder, constrained-decoder framework to generate quality tuples from QA pairs.",
"NEURON achieves the highest precision and recall in extracting tuples from QA pairs compared with state-of-the-art sentence-based models, with a relative improvement as high as 13.3%.",
"It can discover 15-25% more tuples which makes it suitable as a tuple extraction tool for KB extension.",
"There are several directions for future research.",
"One interesting direction is to investigate whether NEURON can be extended to work on open-domain QA corpus , which may not be restricted to any specific domain.",
"We thank Tom Mitchell and the anonymous reviewers for their constructive feedback.",
"This work was supported in part by the UM Office of Research."
] | [
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"result",
"objective",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"method",
"objective",
"other",
"other",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"We propose a self-supervised method to solve Pronoun Disambiguation and Winograd Schema Challenge problems.",
"Our approach exploits the characteristic structure of training corpora related to so-called trigger words, which are responsible for flipping the answer in pronoun disambiguation.",
"We achieve such commonsense reasoning by constructing pairwise contrastive auxiliary predictions.",
"To this end, we leverage a mutual exclusive loss regularized by a contrastive margin.",
"Our architecture is based on the recently introduced transformer networks, BERT, that exhibits strong performance on many NLP benchmarks.",
"Empirical results show that our method alleviates the limitation of current supervised approaches for commonsense reasoning.",
"This study opens up avenues for exploiting inexpensive self-supervision to achieve performance gain in commonsense reasoning tasks.",
"1 1 Introduction Natural language representation learning (e.g., BERT (Devlin et al., 2018),",
"etc.) can capture rich semantics from text and consistently improve the performance of downstream natural language processing (NLP) tasks.",
"However, despite the recent progress, the task of commonsense reasoning is still far from being solved.",
"Among many factors, this can be attributed to the strong correlation between attainable accuracy and training corpora size and quality.",
"A particular case in point is the Winograd Schema Challenge (WSC) (Levesque et al., 2012).",
"Despite its seeming simplicity for humans, it is still not solved by current algorithms.",
"Below is a popular example of a question-answer pair from the binary-choice pronoun coreference problem (Lee et al., 2017) of WSC: 1 Code available at https://github.com/ SAP-samples/acl2020-commonsense/ Sentence-1: The trophy doesn't fit in the suitcase because it is too small.",
"Sentence-2: The trophy doesn't fit in suitcase because it is too big.",
"Answers: A) the trophy B) the suitcase For humans resolving the pronoun it to the suitcase is straightforward.",
"However, a system without the capacity of commonsense reasoning is unable to conceptualize the inherent relationship and, therefore, unable to distinguish the suitcase from the alternative the trophy.",
"Recently, the research community has experienced an abundance in methods proposing to utilize latest word embedding and language model (LM) technologies for commonsense reasoning (Kocijan et al., 2019; He et al., 2019; Ye et al., 2019; Ruan et al., 2019; Trinh and Le, 2018; Klein and Nabi, 2019).",
"The underlying assumption of these methods is that, since such models are learned on large text corpora (such as Wikipedia), they implicitly capture to a certain degree commonsense knowledge.",
"As a result, models permit reasoning about complex relationships between entities at inference time.",
"Most of the methods proposed a two-stage learning pipeline.",
"They are starting from an initial self-supervised model, commonsense-aware word embeddings are then obtained in a subsequent fine-tuning (ft) phase.",
"Fine-tuning enforces the learned embedding to solve the downstream WSC task only as a plain co-reference resolution task.",
"However, solving this task requires more than just employing a language model learned from large text corpora.",
"We hypothesize that the current self-supervised pre-training tasks (such as next sentence prediction , masked language model , etc.) used in the word embedding phase are too easy to enforce the model to capture commonsense.",
"Consequently, the supervised fine-tuning stage is not suf-ficient nor adequate for learning to reason commonsense.",
"This is particularly more severe when pretraining on commonsense-underrepresented corpora such as Wikipedia, where the authors often skip incorporating such information in the text, due to the assumed triviality.",
"In this case, the supervised fine-tuning does not seem to be enough to solve the task, and can only learn to artificially resolve the pronoun based on superficial cues such as dataset and language biases (Trichelair et al., 2018; Saba, 2018; Trichelair et al., 2019; Emami et al., 2019; Kavumba et al., 2019).",
"In this work, we propose to use minimal existing supervision for learning a commonsense-aware representation.",
"Specifically, we provide the model with a supervision level identical to the test time of the Winograd challenge.",
"For that, we introduce a self-supervised pre-training task, which only requires pair of sentences that differ in as few as one word (namely, trigger words).",
"It should be noted that the notion of trigger words is inherent to the concept of Winograd Schema questions.",
"Trigger words are responsible for switching the correct answer choice between the questions.",
"In the above example, the adjectives big and small act as such trigger words.",
"Given the context established by the trigger word, candidate answer A is either right in the first sentence and wrong in the second, or vice-versa.",
"As is evident from the example, trigger words give rise to the mutual-exclusive relationship of the training pairs.",
"The proposed approach targets to incorporate this pairwise relationship as the only supervisory signal during the training phase.",
"Training in such a contrastive self-supervised manner is inducing a commonsense-aware inductive bias.",
"This can be attributed to several factors.",
"Optimization enforces the classifier to be more rigorous in its decision as well as consistent across pairs while being discriminative.",
"Specifically, in the absence of strong individual sentence signals, the model seeks to combine weak signals across pairs.",
"This unsupervised task is much harder to learn compared to the supervised task, and resolving the respective associations requires a notion of commonsense knowledge.",
"Consequently, we postulate that training with contrastive self-supervised fashion allows for learning more in-depth word relationships that provide better generalization properties for commonsense reasoning.",
"For that, we propose to incorporate a Mutual Exclusive (MEx) loss (Sajjadi et al., 2016) during the representation learning phase by maximizing the mutual exclusive probability of the two plausible candidates.",
"Specifically, given a pair of training sentence, the pronoun to be resolved is masked out from the sentence, and the language model is used to predict such only one of the candidates can fill in the place of masked pronoun while ful-filling the mutual-exclusivity condition.",
"In this self-supervised task, the labels (i.e., correct candidates) do not have to be known a priori.",
"Thus it allows learning in an unsupervised manner by exploiting the fact that the data is provided in a pairwise fashion.",
"Our contributions are two-fold:",
"(i) we propose a novel self-supervised learning task for training commonsense-aware representation in a minimally supervised fashion.",
"(ii) we introduce a pair level mutual-exclusive loss to enforce commonsense knowledge during representation learning.",
"There is a wealth of literature on commonsense reasoning, but we only discuss here the ones most related to our work and refer the reader to the recent analysis paper by (Trichelair et al., 2019).",
"Traditional attempts on commonsense reasoning usually involve heavy utilization of annotated knowledge bases (KB), rule-based reasoning, or hand-crafted features (Bailey et al., 2015; Schuller, 2014; Sharma et al., 2015).",
"Only very recently and after the success of natural language representation learning, several works proposed to use supervised learning to discover commonsense relationships, achieving state-of-the-art in multiple benchmarks (see, e.g., (Kocijan et al., 2019; He et al., 2019; Ye et al., 2019; Ruan et al., 2019)).",
"As an example, (Kocijan et al., 2019) has proposed to exploit the labels for commonsense reasoning directly and showed that the performance of multiple language models on Winograd consistently and robustly improves when fine-tuned on a similar pronoun disambiguation problem dataset.",
"Despite the success of these methods, we posit that unsupervised learning is still more attractive for commonsense reasoning tasks, because curating a labeled dataset entailing all existing commonsense is likely to be an unattainable objective.",
"Very recently, unsupervised learning has also been applied successfully to improve commonsense reasoning in a few works (Trinh and 1.0 0.5 0.0 w1 w2 LM Loss MEX Loss Contrastive margin The man lifted the boy onto his shoulders.",
"Le, 2018; Klein and Nabi, 2019).",
"The most pioneering work in this space is probably by (Trinh and Le, 2018), where the authors proposed to use BERT as a (pseudo) language model to compute the likelihood of candidates replacing the pronoun, and the corresponding ratio giving rise to answer.",
"In another recent work, (Klein and Nabi, 2019) proposed a metric based on the maximum attention score for commonsense reasoning.",
"While these papers show that BERT can implicitly learn to establish complex relationships between entities, our results suggest that solving commonsense reasoning tasks require more than unsupervised models learned from massive text corpora.",
"We note that our model is different from all of the methods above.",
"A key difference is that they require fine-tuning, or explicit substitution or heuristic-based rules, whereas our method learns a commonsense-aware representation in self-supervised fashion.",
"The goal of the proposed approach is to exploit the mutual-exclusive nature of the training samples of commonsense reasoning corpora.",
"Given two sentences where the only difference between them is the trigger word(s), we postulate that the pairwise pronoun disambiguation is mutually exclusive.",
"We formulate this idea using a contrastive loss and use this to update the language model.",
"The proposed contrastive loss decomposes into two components: L ( f ) = L ( f ) MEx + L ( f ) CM (1) Here f is the language model parameterized by .",
"The first term, L MEx enforces the Mutual Exclusivity of the answers across pairs.",
"As such, it is a relaxation of the Exclusive-OR (XOR) operator w.r.t. candidates.",
"The second term, LCM constitutes the Contrastive Margin.",
"It enforces a margin between the candidate likelihoods from the language model.",
"Whereas L MEx operates across pairs, LCM considers the candidates of each pair.",
"Although both terms encourage the same property (mutual exclusivity of the answers), we empirically observed that adding CM increases stability.",
"It should be noted that the proposed approach does not make use of any class label information explicitly.",
"Rather, it solely exploits the structural information of the data.",
"In terms of the language model, we leverage BERT for Masked Token Prediction (Devlin et al., 2018).",
"This entails replacing the pronoun by a mask, i.e., [MASK] .",
"As a result, we yield probabilities for the candidates of each sentence.",
"Preliminaries: Given an associated pair of training sentences, i.e., ( s j , s j +1 ) , where the difference between the sentence pairs are the trigger words.",
"Let c i and c i +1 be the two answer candidates for the masked pronoun resolution task.",
"Then employing BERT for Masked Token Prediction (Devlin et al., 2018) provides p ( c i | s j ) and p ( c i +1 | s j ) , i.e., the likelihood of the first and the second candidate being true in sentence s j , respectively.",
"It should be noted, if a candidate consists of several tokens, the corresponding number of [MASK] tokens is used in the masked sentence.",
"The candidate probability then corresponds to the average of log-probabilities of each composing token.",
"Since a candidate cannot be the right answer for the first and second sentence in the pair, we yield a logical term that holds true for viable answers.",
"It is worth noting that the logical expression is not unique as many logical equivalents exist: ( c i, 1 c i +1 , 1 ) ( c i, 2 c i +1 , 2 ) ( c i, 1 c i, 2 ) (2) Here denotes the XOR operator and c i,j { 0 , 1 } denotes the binary state variable corresponding to candidate c i in sentence s j .",
"Mutual-Exclusive Loss: In order to be differentiable, the discrete logical term of Eq.",
"2 has to be converted into a soft version.",
"To this end, we replace the binary variables with their corresponding probabilities.",
"Similarly, the logical operators are replaced accordingly to accommodate for the probabilistic equivalent.",
"With a b = ( a b ) ( a b ) a logical decomposition of the XOR operator, we adopt the following replacement scheme:",
"(i) (cid:86) ki x i is replaced by (cid:81) ki x i ,",
"(ii) (cid:87) ki x i is replaced by (cid:80) ki x i ,",
"(iii) the not operation of a binary variable x i is replaced by 1 x i .",
"Thus, transforming all the logical terms of Eq.",
"2, we yield the following soft-loss equivalent: L MEx = N (cid:88) i = i +2 , p i, 1 p i +1 , 2 (1 p i, 2 p i +1 , 1 ) + p i, 2 p i +1 , 1 (1 p i, 1 p i +1 , 2 ) (3) Here p i,j = p ( c i | s j ) [0 , 1] denotes the probability of candidate c i being the right answer in sentence s j , is a hyperparameter, and N corresponds to the number of training samples.",
"Intuitively speaking, as no labels are provided to the model during training, the model seeks to make the answer probabilities less ambiguous, i.e., approximate binary constitution.",
"As the model is forced to leverage the pairwise relationship in order to resolve the ambiguity, it needs to generalize w.r.t. commonsense relationships.",
"As such, the task is inherently more challenging compared to, e.g., supervised cross-entropy minimization.",
"Contrastive Margin: In order to stabilize optimization and speed-up convergence, it is beneficial to augment the MEx loss with some form of regularization.",
"To this end, we add a contrastive margin.",
"It seeks to maximize difference between the individual candidate probabilities of the language model and is defined as, LCM = max (0 , | p i,j p i,j +1 | + ) , (4) with , being hyperparameters.",
"See Fig. 1 for a schematic illustration of the proposed method.",
"In this work, we use the PyTorch (Wolf et al., 2019) implementation of BERT.",
"Specifically, we employ a pre-trained BERT large-uncased architecture.",
"The model is trained for 25 epochs using a batch size of 4 (pairs), hyperparameters = 0 .",
"05 , = 0 .",
"02 and = 60 .",
"0 , and Adam optimizer at a learning rate of 10 5 .",
"We approach commonsense reasoning by first fine-tuning the pre-trained BERT LM model on the DPR training set (Rah-man and Ng, 2012).",
"Subsequently, we evaluate the performance on four different tasks.",
"Pronoun Disambiguation Problem: The first evaluation task is on PDP-60 (Davis et al., 2016), which aims the pronoun disambiguation.",
"As can be seen in Tab.",
"1 (top), our method outperforms all previous unsupervised results by a significant margin of at least (+15.0%).",
"Next, we have the alternative approaches making use of a supervisory signal during training.",
"Here, our method outperforms even the best system (78.3%) by (+11.7%).",
"Winograd Schema Challenge: The second task is WSC-273 (Levesque et al., 2012), which is known to be more challenging than PDP-60.",
"Here, our method outperforms the current unsupervised state-of-the-art (Trinh and Le, 2018) (62.6%), as shown in Tab.",
"1 (middle).",
"Specifically, our method achieves an accuracy of (69.6%), which is (+7%) above the previous best result.",
"Simultaneously, the proposed approach is just slightly lower than the best supervised approach (Kocijan et al., 2019).",
"Definite Pronoun Resolution: The third task is DPR (Rahman and Ng, 2012), which resembles WSC.",
"Compared to the latter, it is significantly larger in size.",
"However, according to (Trichelair et al., 2018), it is less challenging due to several inherent biases.",
"Here the proposed approach outperforms the best alternative by a margin of (+3.7%), as can be seen in Tab.",
"1 (lower part).",
"KnowRef: The fourth task is KnowRef (Emami et al., 2019), which is a coreference corpus tailored to remove gender and number cues.",
"The proposed approach outperforms the best alternative by a margin of (+4.5%), as can be seen in Tab.",
"1 (bottom).",
"Ablation study on contrastive margin: The contrastive margin term was incorporated in our method as a regularizer, mainly for the sake of having faster convergence.",
"As such, discarding it during optimization has a minor impact on the accuracy of most benchmarks (less than 1% on WSC, DPR, KnowRef).",
"However, on PDP, we noticed a wider margin of more than 10%.",
"In contrast to supervised learning, where semantics is directly injected through labels, the self-PDP-60",
"bottom: PDP, WSC, DPR, KnowRef.",
"The first two task performances are subdivided into two parts.",
"Upper part: supervised, lower part: unsupervised.",
"supervised-learning paradigm avoids labels by employing a pre-text task and exploits the structural prior of data as a supervisory signal.",
"In this paper, this prior corresponds to the Winograd-structured twin-question pairs, and the pre-text task is to switch the correct answer choice between the pairs using trigger words.",
"We postulate that training in such a contrastive self-supervised manner allows for learning more commonsense-aware word relationships that provide better generalization properties for commonsense reasoning.",
"We acknowledge that this prior is strong in terms of data curation, i.e., expert-crafted twin pairs.",
"However, during training, we provide the model to have access to a supervision level equal to the test time, i.e., not making use of the labels.",
"Therefore, maximizing the mutual exclusive probability of the two plausible candidates is inducing a commonsense-aware inductive bias without using any label information and by merely exploiting the contrastive structure of the task itself.",
"This is confirmed by our approach, reaching the performance of the most recent supervised approaches on multiple benchmarks.",
"At last, we note that our model is different from the self-supervised contrastive learning methodology in (Chen et al., 2020), which focuses on learning powerful representations in the self-supervised setting through batch contrastive loss.",
"A key difference compared to this method is that they generate the contrastive pairs as data augmentations of given samples, whereas in our setting the auxiliary task of mutual exclusivity is enforced on given contrastive pairs.",
"The proposed approach outperforms all approaches on PDP and DPR tasks.",
"At the more challenging WSC task, it outperforms all unsupervised approaches while being comparable in performance to the most recent supervised approaches.",
"Additionally, it is less susceptible to gender and number biases as the performance on KnowRef suggests.",
"All this taken together confirms that self-supervision is possible for commonsense reasoning tasks.",
"We believe in order to solve commonsense reasoning truly, algorithms should refrain from using labeled data, instead exploit the structure of the task itself.",
"Therefore, future work will aim at relaxing the prior of Winograd-structured twin-question pairs.",
"Possibilities are automatically generating an extensive collection of similar sentences or pre-training in a self-supervised fashion on large-scale Winograd-structured datasets, such as the recently published WinoGrande (Sakaguchi et al., 2019).",
"Furthermore, we seek to investigate the transferability of the obtained inductive bias to other commonsense-demanding downstream tasks, which are distinct from the Winograd-structure."
] | [
"objective",
"abstain",
"result",
"method",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective"
] |
[
"Cornell University [email protected]",
"Abstract",
"Naturally-occurring bracketings, such as answer fragments to natural language questions and hyperlinks on webpages, can reflect human syntactic intuition regarding phrasal boundaries.",
"Their availability and approximate correspondence to syntax make them appealing as distant information sources to incorporate into unsupervised constituency parsing.",
"But they are noisy and incomplete; to address this challenge, we develop a partial-brackets-aware structured ramp loss in learning.",
"Experiments demonstrate that our distantly-supervised models trained on naturally-occurring bracketing data are more accurate in inducing syntactic structures than competing unsupervised systems.",
"On the English WSJ corpus, our models achieve an unlabeled F1 score of 68 .",
"9 for constituency parsing.",
"1 1 Introduction Constituency is a foundational building block for phrase-structure grammars.",
"It captures the notion of what tokens can group together and act as a single unit.",
"The motivating insight behind this paper is that constituency may be reflected in markups of bracketings that people provide in doing natural tasks.",
"We term these segments naturally-occurring bracketings for their lack of intended syntactic annotation.",
"These include, for example, the segments people pick out from sentences to refer to other Wikipedia pages or to answer semantically-oriented questions; see Figure 1 for an illustration.",
"Gathering such data requires low annotation expertise and effort.",
"On the other hand, these data are not necessarily suitable for training parsers, as they often contain incomplete, incorrect and sometimes conflicting bracketing information.",
"It is thus an empirical question whether and how much we 1 Our code is publicly available at https://github.",
"[ Republicans ] have been imploring the White House [ to compromise on [ the wage issue ] ] .",
"A: Republicans A: To compromise on the wage issue A: The wage issue QA-SRL Wikipedia Science fiction (sometimes shortened to sci-fi or SF) is a genre of speculative fiction that typically deals with imaginative and futuristic concepts such as advanced science and technology, space exploration, time travel, parallel universes, and extraterrestrial life.",
"Republicans have been imploring the White House to compromise on the wage issue.",
"Context: Q&As: Corresponding bracketings: Figure 1: Two example types of naturally-occurring bracketings.",
"Q: Who have been imploring something?",
"Q: What have someone been imploring?",
"Q: What will someone compromise on?",
"could learn syntax from these naturally-occurring bracketing data.",
"To overcome the challenge of learning from this kind of noisy data, we propose to train discriminative constituency parsers with structured ramp loss (Do et al., 2008), a technique previously adopted in machine translation (Gimpel and Smith, 2012).",
"Specifically, we propose two loss functions to directly penalize predictions in conflict with available partial bracketing data, while allowing the parsers to induce the remaining structures.",
"We experiment with two types of naturally-occurring bracketing data, as illustrated in Figure 1.",
"First, we consider English question-answer pairs collected for semantic role labeling (QA-SRL; He et al., 2015).",
"The questions are designed for non-experts to specify semantic arguments of predicates in the sentences.",
"We observe that although no syntactic structures are explicitly asked for, humans tend to select constituents in their answers.",
"Second, Wikipedia articles 2 are typically richly annotated with internal links to other articles.",
"These links are marked on phrasal units that refer to standalone concepts, and similar to the QA-SRL data, they frequently coincide with syntactic constituents.",
"Experiment results show that naturally-occurring bracketings across both data sources indeed help our models induce syntactic constituency structures.",
"Training on the QA-SRL bracketing data achieves an unlabeled F1 score of 68 .",
"9 on the English WSJ corpus, an accuracy competitive with state-of-the-art unsupervised constituency parsers that do not utilize such distant supervision data.",
"We find that our proposed two loss functions have slightly different interactions with the two data sources, and that the QA-SRL and Wikipedia data have varying coverage of phrasal types, leading to different error profiles.",
"In sum, through this work, (1) we demonstrate that naturally-occurring bracketings are helpful for inducing syntactic structures, (2) we incorporate two new cost functions into structured ramp loss to train parsers with noisy bracketings, and (3) our distantly-supervised models achieve results competitive with the state of the art of unsupervised constituency parsing despite training with smaller data size (QA-SRL) or out-of-domain data (Wikipedia).",
"Constituents are naturally reflected in various human cognitive processes, including speech production and perception (Garrett et al., 1966; Gee and Grosjean, 1983), reading behaviors (Hale, 2001; Boston et al., 2008), punctuation marks (Spitkovsky et al., 2011), and keystroke dynamics (Plank, 2016).",
"Conversely, these externalized signals help us gain insight into constituency representations.",
"We consider two such data sources:",
"a) Answer fragments When questions are answered with fragments instead of full sentences, those fragments tend to form constituents.",
"This phenomenon corresponds to a well-established constituency test in the linguistics literature (Carnie, 2012, pg. 98, inter alia).",
"b) Webpage hyperlinks Since a hyperlink is a pointer to another location or action (e.g., mailto: links), anchor text often represents a conceptual unit related to the link destination.",
"Indeed, Spitkovsky et al. (2010) first give empirical evidence that around half of the anchor text instances in their data respects constituent boundaries and Sgaard (2017) demonstrates that hyperlink data can help boost chunking accuracy in a multi-task learning setup.",
"Both types of data have been considered in previous work on dependency-grammar induction (Spitkovsky et al., 2010; Naseem and Barzilay, 2011), and in this work, we explore their efficacy for learning constituency structures.",
"For answer fragments, we use He et",
"al.'s (2015) question-answering-driven semantic role labeling (QA-SRL) dataset, where annotators answer wh questions regarding predicates in sentences drawn from the Wall Street Jounal (WSJ) section of the Penn Treebank (PTB; Marcus et al., 1993).",
"For hyperlinks, we used a 1% sample of 2020-05-01 English Wikipedia, retaining only within-Wikipedia links.",
"3 We compare our extracted naturally-occurring bracketings with the reference phrase-structure annotations: 4 Table 1 gives relevant statistics.",
"Our results re-affirm Spitkovsky et",
"al.'s (2010) finding that a large proportion of hyperlinks coin-3 See Appendix A for details.",
"For ground-truth structures in the Wikipedia data, we apply a state-of-the-art PTB-trained constituency parser (Ki-taev et al., 2019).",
"cide with syntactic constituents.",
"We also find that 22 .",
"4% / 35 .",
"8% of the natural bracketings are single-word spans, which cannot facilitate parsing decisions, while 11 .",
"8% / 5 .",
"3% of QA-SRL/Wikipedia spans actually conflict with the reference trees and can thus potentially harm training.",
"The QA-SRL data seems more promising for inducing better-quality syntactic structures, as there are more bracketings available across a diverse set of constituent types.",
"Preliminaries The inputs to our learning algorithm are tuples ( w, B ) , where w = w 1 , . . . , w n is a lengthn sentence and B = { ( b k , e k ) } is a set of naturally-occurring bracketings, denoted by the beginning and ending indices b k and e k into the sentence w .",
"As a first step, we extract BERT-based contextualized word representations (Devlin et al., 2019) to associate each token w i with a vector x i .",
"5 See Appendix B for details.",
"Scoring Spans Based on the x i vectors, we assign a score s ij to each candidate span ( i, j ) in the sentence indicating its appropriateness as a constituent in the output structure.",
"We adopt a biaffine scoring function (Dozat and Manning, 2017): s ij = [ l i ; 1] TW [ r j ; 1] , where [ v ; 1] appends 1 to the end of vector v , and l i = MLP left ( x i ) and r j = MLP right ( x j ) are the outputs of multi-layer perceptrons (MLPs) that take the vectors at span boundaries as inputs.",
"6 Decoding We define the score s ( y ) of a binary-branching constituency tree y to be the sum of scores of its spans.",
"The best scoring tree among all valid trees Y can be found using the CKY algorithm (Cocke, 1969; Kasami, 1965; Younger, 1967).",
"Learning Large-margin training (Taskar et al., 2005) is a typical choice for supervised training of constituency parsers.",
"It defines the following 5 The use of pre-trained language models can mitigate the fact that our distant supervision data are either out-of-domain (Wikipedia) or small in size (QA-SRL).",
"6 This is inspired by span-based supervised constituency-parsing methods (Stern et al., 2017), which in turn was based on Wang and Chang (2016).",
"These papers look at the difference vectors between two boundary points, while our scoring function directly uses the vectors at the boundaries (which is more expressive than only using difference vectors).",
"loss function to encourage a large margin of at least ( y, y ) between the gold tree y and any predicted tree y : l = max y Y [ s ( y ) + ( y, y )] s ( y ) , where ( y, y ) is a distance measure between y and y .",
"We can reuse the CKY decoder for cost-augmented inference when the distance decomposes into individual spans with some function c : ( y, y ) = (cid:80) span ( i,j ) in y c ( i, j, y ) .",
"In our setting, we do not have access to the gold-standard y , but instead we have a set of bracketings y .",
"The scoring s ( y ) is not meaningful since y is not a complete tree, so we adopt structured ramp loss (Do et al., 2008; Gimpel and Smith, 2012) and define l = (cid:18) max y Y [ s ( y ) + ( y, y )] s ( y ) (cid:19) + (cid:18) s ( y ) max y Y [ s ( y ) ( y, y )] (cid:19) = max y Y [ s ( y ) + ( y, y )] max y Y [ s ( y ) ( y, y )] , using a combination of cost-augmented and cost-diminished inference.",
"This loss function can be understood as a sum of a convex and a concave large margin loss (Collobert et al., 2006), canceling out the term for directly scoring the gold-standard tree.",
"We consider two methods for incorporating the partial bracketings into the cost functions: c loose ( i, j, y ) = 1 ( span ( i, j ) conflicts with y ) c strict ( i, j, y ) = 1 ( span ( i, j ) not in y ) , where 1 is an indicator function.",
"c loose is more lenient than c strict as it does not penalize spans that do not conflict with y .",
"Both cost definitions promote structures containing bracketings in y .",
"7 In the supervised setting where y refers to a fully-annotated tree y without conflicting span boundaries, c strict is equal to c loose and the resulting ( y, y ) cost functions both correspond to the Hamming distance between y and y .",
"(section 23 as the test set).",
"QA-SRL contains 1 , 241 sentences drawn from the training split (sections 02-21) of the PTB.",
"For Wikipedia, we use a sample of 332 , 079 sentences that are within 100 tokens long and contain multi-token internal hyperlinks.",
"We fine-tune the pretrained BERT base features with a fixed number of mini-batch updates and report results based on five random runs for each setting.",
"See Appendix B for detailed hyper-parameter settings and optimization procedures.",
"Evaluation We follow the evaluation setting of Kim et al. (2019a).",
"More specifically, we discard punctuation and trivial spans (single-word and full-sentence spans) during evaluation and report sentence-level F1 scores as our main metrics.",
"ratios for each constituent type.",
"Our distantly-supervised models trained on QA-SRL are competitive with the state-of-the-art unsupervised results.",
"When comparing our models with Cao et al. (2020), we obtain higher recalls on most constituent types except for VPs.",
"Interestingly, QA-SRL data prefers c strict , while c loose gives better F1 score on Wikipedia; this correlates with the fact that QA-SRL has more bracketings per sentence (Table 1).",
"Finally, our Wikipedia data has a larger relative percentage of ADJP bracketings, which explains the higher ADJP recall of the models trained on Wikipedia, despite their lower overall recalls.",
"Unsupervised Parsing Our distantly-supervised setting is similar to unsupervised in the sense that it does not require syntactic annotations.",
"Typically, lack of annotations implies that unsupervised parsers induce grammar from a raw stream of lexical or part-of-speech tokens (Clark, 2001; Klein, 2005) along with carefully designed inductive biases on parameter priors (Liang et al., 2007; Wang and Blunsom, 2013), language universals (Naseem et al., 2010; Martnez Alonso et al., 2017), cross-linguistic (Snyder et al., 2009; Berg-Kirkpatrick and Klein, 2010; Cohen and Smith, 2009; Han et al., 2019) and cross-modal (Shi et al., 2019) signals, structural constraints (Gillenwater et al., 2010; Noji et al., 2016; Jin et al., 2018), etc.",
"The models are usually generative and learn from (re)constructing sentences based on induced structures (Shen et al., 2018, 2019; Drozdov et al., 2019; Kim et al., 2019a,b).",
"Alternatively, one may use reinforcement learning to induce syntactic structures using rewards defined by end tasks (Yogatama et al., 2017; Choi et al., 2018; Havrylov et al., 2019).",
"Our method is related to learning from constituency tests (Cao et al., 2020), but our use of bracketing data permits discriminative parsing models, which focus directly on the syntactic objective.",
"Learning from Partial Annotations Full syntactic annotations are costly to obtain, so the alternative solution of training parsers from partially-annotated data has attracted considerable research attention, especially within the context of active learning for dependency parsing (Sassano, 2005; Sassano and Kurohashi, 2010; Mirroshandel and Nasr, 2011; Flannery et al., 2011; Flannery and Mori, 2015; Li et al., 2016; Zhang et al., 2017) and grammar induction for constituency parsing (Pereira and Schabes, 1992; Hwa, 1999; Riezler et al., 2002).",
"These works typically require expert annotators to generate gold-standard, though partial, annotations.",
"In contrast, our work considers the setting and the challenge of learning from noisy bracketing data, which is more comparable to Spreyer and Kuhn (2009) and Spreyer et al. (2010) on transfer learning for dependency parsing.",
"We argue that naturally-occurring bracketings are a rich resource for inducing syntactic structures.",
"They reflect human judgment of what constitutes a phrase and what does not.",
"More importantly, they require low annotation expertise and effort; for example, webpage hyperlinks can be extracted essentially for free.",
"Empirically, our models trained on QA-SRL and Wikipedia bracketings achieve competitive results with the state of the art on unsupervised constituency parsing.",
"Structural probes have been successful in extracting syntactic knowledge from frozen-weight pre-trained language models (e.g., Hewitt and Manning, 2019), but they still require direct syntactic supervision.",
"Our work shows that it is also feasible to retrieve constituency trees from BERT-based models using distant supervision data.",
"Our models are limited to the unlabeled setting, and we leave it to future work to automatically cluster the naturally-occurring bracketings and to induce phrase labels.",
"Our work also points to potential applications in (semi-)supervised settings including active learning and domain adaptation (Joshi et al., 2018).",
"Future work can also consider other naturally-occurring bracketings induced from sources such as speech production, reading behavior, etc.",
"We thank the anonymous reviewers for their constructive reviews.",
"This work was supported in part by a Bloomberg Data Science Ph.D.",
"Fellowship to Tianze Shi and a gift from Bloomberg to Lillian Lee."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Natural Questions is a new challenging machine reading comprehension benchmark with two-grained answers, which are a long answer (typically a paragraph) and a short answer (one or more entities inside the long answer).",
"Despite the effectiveness of existing methods on this benchmark, they treat these two sub-tasks individually during training while ignoring their dependencies.",
"To address this issue, we present a novel multi-grained machine reading comprehension framework that focuses on modeling documents at their hierarchical nature, which are different levels of granularity: documents, paragraphs, sentences, and tokens.",
"We utilize graph attention networks to obtain different levels of representations so that they can be learned simultaneously.",
"The long and short answers can be extracted from paragraph-level representation and token-level representation, respectively.",
"In this way, we can model the dependencies between the two-grained answers to provide evidence for each other.",
"We jointly train the two sub-tasks, and our experiments show that our approach significantly outperforms previous systems at both long and short answer criteria.",
"Machine reading comprehension (MRC), a task that aims to answer questions based on a given document, has been substantially advanced by recently released datasets and models (Rajpurkar et al., 2016; Seo et al., 2017; Xiong et al., 2017; Joshi et al., 2017; Cui et al., 2017; Devlin et al., 2019; Clark and Gardner, 2018).",
"Natural Questions (NQ, Kwiatkowski et al., 2019), a newly released benchmark, makes it more challenging by introducing much longer documents than existing datasets Work was done while this author was an intern at Microsoft Research Asia.",
"and questions that are from real user queries.",
"Besides, unlike conventional MRC tasks (e.g. Ra-jpurkar et al.,2016), in NQ, answers are provided in a two-grained format: long answer, which is typically a paragraph, and short answers, which are typically one or more entities inside the long answer.",
"Figure 1 shows an example from NQ dataset.",
"Existing approaches on NQ have obtained promising results.",
"For example, Kwiatkowski et al. (2019) builds a pipeline model using two separate models: the Decomposable Attention model (Parikh et al., 2016) to select a long answer, and the Document Reader model (Chen et al., 2017) to extract the short answer from the selected long answer.",
"Despite the effectiveness of these approaches, they treat the long and short answer extraction as two individual sub-tasks during training and fail to model this multi-grained characteristic of this benchmark, while we argue that the two sub-tasks of NQ should be considered simultaneously to obtain accurate results.",
"According to Kwiatkowski et al. (2019), a valid long answer must contain all of the information required to answer the question.",
"Besides, an accurate short answer should be helpful to confirm the long answer.",
"For instance, when humans try to find the two-grained answers in the given Wikipedia page in Figure 1, they will first try to retrieve paragraphs (long answer) describing the entity bowling hall of fame , then try to confirm if the location (short answer) of the asked entity exists in the paragraph, which helps to finally decide which paragraph is the long answer.",
"In this way, the two-grained answers can provide evidence for each other.",
"To address the two sub-tasks together, instead of using conventional documents modeling methods like hierarchical RNNs (Cheng and Lapata, 2016; Yang et al., 2016; Nallapati et al., 2017; Narayan et al., 2018), we propose to use graph attention networks (Velickovic et al., 2018) and BERT (Devlin et al., 2019), directly model representations at tokens, sentences, paragraphs, and documents, the four different levels of granularity to capture hierarchical nature of documents.",
"In this way, we directly derive scores of long answers from its paragraph-level representations and obtain scores of short answers from the start and end positions on the token-level representations.",
"Thus the long and short answer selection tasks can be trained jointly to promote each other.",
"At inference time, we use a pipeline strategy similar to Kwiatkowski et al. (2019), where we first select long answers and then extract short answers from the selected long answers.",
"Experiments on NQ dataset show that our model significantly outperforms previous models at both long and short answer criteria.",
"We also analyze the benefits of multi-granularity representations derived from the graph module in experiments.",
"To summarize, the main contributions of this work are as follows: We propose a multi-grained MRC model based on graph attention networks and BERT.",
"We apply a joint training strategy where long and short answers can be considered simultaneously, which is beneficial for modeling the dependencies of the two-grained answers.",
"We achieve state-of-the-art performance on both long and short answer leaderboard of NQ at the time of submission (Jun. 25th, 2019), and our model surpasses single human performance on the development dataset at both long and short answer criteria.",
"We will release our code and models at https: //github.com/DancingSoul/NQ_BERT-DM .",
"Each example in NQ dataset contains a question together with an entire Wikipedia page.",
"The models are expected to predict two types of outputs: 1) long answer, which is an HTML span containing enough information for a reader to completely infer the answer to the question.",
"It can be a paragraph, a table, a list item, or a whole list.",
"A long answer is selected in a list of candidates, or a no answer should be given if no candidate answers the question; 2) short answer, which can be yes, no or a list of entities within the long answer.",
"Also, a no answer should be given if there is no suitable short answer.",
"Since the average length of the documents in NQ is too long to be considered as one training instance, we first split each document into a list of document fragments with overlapping windows of tokens, like in the original BERT model for the MRC tasks (Alberti et al., 2019b; Devlin et al., 2019).",
"Then we generate an instance from a document fragment by concatenating a [CLS] token, tokenized question, a [SEP] token, tokens from the content of the doc-Token-Level Self-Attention Sentence-LevelSelf-Attention Paragraph-LevelSelf-Attention Doc Graph Initialization BERT Encoder Graph Integration Add & Norm Feed-Forward Add & Norm Output Layer N Concatenate Figure 3: Inner structure of our graph encoder.",
"ument fragment and a final [SEP] token.",
"[CLS] and [SEP] follow the definitions from Devlin et al. (2019).",
"We tag each document fragment with an answer type as one of the five labels to construct a training instance: short for instances that contain all annotated short spans, yes and no for yes/no annotations where the instances contain the long answer span, long when the instances contain the long answer span, but there is no short or yes/no answer.",
"In addition to the above situations, we tag a no-answer to those instances.",
"We will explain more details of the data preprocessing in the experiment section.",
"In this section, we will explain our model.",
"The main idea of our model lies in multi-granularity document modeling with graph attention networks.",
"The overall architecture of our model is shown in Figure",
"2. 3.1 Input & Output Definition Formally, we define an instance in the training set as a six-tuple ( c , S, l, s, e, t ) .",
"Suppose the instance is generated from the i -th document fragment D i of the corresponding example, then c = ( [CLS] , Q 1 , ..., Q | Q | , [SEP] , D i, 1 , ..., D i, | D i | , [SEP] ) defines the document fragment D i along with a question Q of the instance, | Q | + | D i | + 3 = 512 corresponding to the data preprocessing method.",
"S denotes the set of long answer candidates inside the document fragment.",
"l S Document Fragment Paragraph Sentence Token Figure 4: The graph on the left is an illustration of the graph integration layer.",
"is the target long answer candidate among the candidate set S of this instance.",
"s , e { 0 , 1 , ..., 511 } are inclusive indices pointing to the start and end of the target answer span.",
"t { 0 , 1 , 2 , 3 , 4 } is the annotated answer type, corresponding to the five labels.",
"For instances containing multiple short answers, we set s and e to point to the leftmost position of the first short answer and the rightmost position of the last short answer, respectively.",
"The intuition of representing documents in multi-granularity is derived from the natural hierarchical structure of a document.",
"Generally speaking, a document can be decomposed to a list of paragraphs, which can be further decomposed to lists of sentences and lists of tokens.",
"Therefore, it is straightforward to treat the document structure as a tree, which has four types of nodes, namely token nodes, sentence nodes, paragraph nodes, and a document node.",
"Different kinds of nodes represent information at different levels of granularity.",
"Since long answer candidates are paragraphs, tables, or lists, information at paragraph nodes also represents the information for long answer candidates.",
"The hierarchical tree structure for a document contains edges that are between tokens and sentences, between sentences and paragraphs, and between paragraphs and documents.",
"Besides, we further add edges between tokens and paragraphs, between tokens and documents, between sentences and the document to construct a graph.",
"All these 1 For brevity, the word document refers to document fragment in the rest of our paper.",
"edges above are bidirectional in our graph representation.",
"Hence information between every two nodes can be passed through no more than two edges in the graph.",
"In the rest of this section, we will present how we utilize this graph structure to pass information between nodes with graph attention networks so that the two-grained answers can promote each other.",
"Figure 3 shows the inner structure of our graph encoder.",
"Each layer in our graph encoder consists of three self-attention layers, a graph integration layer, and a feed-forward layer.",
"The self-attention layers are used for interactions among nodes with the same granularity, while the graph integration layer aims at gathering information from other levels of granularity with graph attention networks.",
"Figure 4 is an illustration for the graph integration layer.",
"Since self-attention is a special case of graph attention networks, where the graph is fully connected, we only introduce the general form of graph attention networks, which can be generalized to the self-attention mechanism.",
"We apply graph attention networks (Velickovic et al., 2018) to model the information flow between nodes, which can further improve the representations of nodes by attention mechanism over features from its neighbors.",
"In this way, the interaction between the two-grained answers can be enhanced.",
"Instead of other graph-based models, we use graph attention networks to keep consistency with the multi-head attention module in the BERT model.",
"We will describe a single layer of our graph attention networks in the following.",
"We define a graph G = ( V , E , X ) that is composed of a set of nodes V , node features X = ( h 1 , ..., h |V| ) and a list of directed edge set E = ( E 1 , ..., EK ) where K is the number of edges.",
"Each i V has its own representation h i R d h where d h is the hidden size of our model.",
"We use the multi-head attention mechanism in our graph attention networks following Vaswani et al. (2017).",
"We describe one of the m attention heads.",
"All the parameters are unique to each attention head and layer.",
"If there is an edge from node j to node i , the attention coefficient e ij is calculated as follows: e ij = (cid:0) h i WQ (cid:1) (cid:0) h j WK (cid:1) T d z .",
"We normalize the attention coefficients of node i by using the softmax function across all the neighbor nodes j N i .",
"Especially, there is a self-loop for each node (i.e. i N i ) to allow it update itself.",
"This process can be expressed as: ij = softmax j ( e ij ) = exp( e ij ) (cid:80) k N i exp( e ik ) .",
"Then the output of this attention head z i is computed as a weighted sum of linear transformed input elements: z i = (cid:88) j N i ij h j WV .",
"of one attention head, we use d z m = d h .",
"Finally we get the multi-head attention result z (cid:48) i R d h by concatenating the outputs of m individual attention heads: z (cid:48) i = m (cid:107) k =1 z ki .",
"The self-attention mechanism is equivalent to the fully-connected version of graph attention networks.",
"To make interactions among nodes with the same granularity, we utilize three self-attention layers, which are token-level self-attention, sentence-level self-attention, and paragraph-level self-attention.",
"Since the four types of nodes are essentially heterogeneous, we separate the self-attention layer from the graph integration layer to distinguish information from nodes with the same granularity or different ones.",
"We use graph attention networks on the graph presented in Figure 4, this layer allows information to be passed to nodes with different levels of granularity.",
"Instead of integrating information only once after the graph encoder, we put this layer right after every self-attention layer inside the graph encoder, which means the update brought by the self-attention layer will also be utilized by the nodes with other levels of granularity.",
"This layer helps to model the dependencies of the two-grained answers.",
"We concatenate the input and output of the graph integration layer and pass it to the feed-forward layer.",
"Following the inner structure of the transformer (Vaswani et al., 2017), we also utilize an additional fully connected feed-forward network at the end of our graph encoder.",
"It consists of two linear transformations with a GELU activation in between.",
"GELU is Gaussian Error Linear Unit activation (Hendrycks and Gimpel, 2016), and we use GELU as the non-linear activation, which is consistent with BERT.",
"Inspired by positional encoding in Vaswani et al. (2017) and relative position representations in Shaw et al. (2018), we introduce a novel relational embedding on our constructed graph, which aims at modeling the relative position information between nodes on the multi-granularity document structure.",
"We make the edges in our document modeling graph to embed relative positional information.",
"We modify equation 1 and 2 for e ij and z i to introduce our relational embedding as follows: e ij = (cid:0) h i WQ (cid:1) (cid:0) h j WK (cid:1) T + h i WQ (cid:16) a K ij (cid:17) T d z , z i = (cid:88) j N i ij (cid:0) h j WV + a V ij (cid:1) .",
"In above equations, the edge between node i and node j is represented by learnable embedding a Kij , a Vij R d z .",
"The representation can be shared across attention heads.",
"Compared to previous work which encodes positional information in the embedding layer, our proposed relational embedding is more flexible, and the positional information can be taken into consideration in each graph layer.",
"For example, relational embedding between two nodes of the same type represents the relative distance between them in the self-attention layer.",
"In the graph integration layer, relational embedding between a sentence and its paragraph represents the relative position of the sentence in the paragraph, and it is the same for other types of edges.",
"Since the BERT model can only provide token-level representation, we use a bottom-up average-pooling strategy to initialize the nodes other than token-level nodes.",
"We use o i { 0 , 1 , 2 , 3 } to indicate the type of node i , representing token node, sentence node, paragraph node and document node respectively.",
"The initialized representation is calculated as follows: h 0 i = average j N i ,o j +1= o i (cid:8) h 0 j + a ij (cid:9) + b o i , where a ij , b o i R d h represent the relational embedding and node type embedding in the graph initializer.",
"The objective function is defined as the negative sum of the log probabilities of the predicted distributions, averaged over all the training instances.",
"The log probabilities of predicted distributions are indexed by the true start and end indices, true long answer candidate index, and the type of this instance: L ( ) = 1 NN (cid:88) i [log p ( s, e, t, l | c , S )] = 1 NN (cid:88) i [log p s ( s | c , S ) + log p e ( e | c , S ) + log p t ( t | c , S ) + log p l ( l | c , S )] , where p s ( s | c , S ) , p e ( e | c , S ) , p l ( l | c , S ) and p t ( t | c , S ) are the probabilities for the start and end position of the short answer, probabilities for the long answer candidate, and probabilities for the answer type of this instance, respectively.",
"One of the probability, p s ( s | c , S ) , is computed as follow, and the others are similar to it: p s ( s | c , S ) = softmax ( f s ( s, c , S ; )) , where f s is a scoring function, derived from the last layer of graph encoder.",
"Similarly, we derive score functions at the other three levels of granularity.",
"For instances without short answers, we set the target start and end indices to the [CLS] token.",
"We also make [CLS] markup as the first sentence and paragraph, and the paragraph-level [CLS] will be classified as long answers for the instances without long answers.",
"At inference time, we get the score of a document fragment g ( c , S ) , long answer score g ( c , S, l ) and short answer score g ( c , S, s, e ) Long Answer Dev Long Answer Test Short Answer Dev Short Answer Test P R F1 P R F1 P R F1 P R F1 DocumentQA 47.5 44.7 46.1 48.9 43.3 45.7 38.6 33.2 35.7 40.6 31.0 35.1 DecAtt + DocReader 52.7 57.0 54.8 54.3 55.7 55.0 34.3 28.9 31.4 31.9 31.1 31.5 BERT joint 61.3 68.4 64.7 64.1 68.3 66.2 59.5 47.3 52.7 63.8 44.0 52.1 + 4M synthetic data 62.3 70.0 65.9 65.2 68.4 66.8 60.7 50.4 55.1 62.1 47.7 53.9 BERT-syn+Model-III 72.4 73.0 72.7 --60.1 54.1 56.9 -+ ensemble 3 models 74.2 73.6 73.9 73.7 75.3 74.5 64.0 54.9 59.1 62.6 55.3 58.7 Single Human 80.4 67.6 73.4 --63.4 52.6 57.5 -Super-annotator 90.0 84.6 87.2 --79.1 72.6 75.7 -Table 1: Results of our best model on NQ compared to the previous systems and to the performance of a single human annotator and of an ensemble of human annotators.",
"g ( c , S ) = f t ( t > 0 , c , S ; ) f t ( t = 0 , c , S ; ); g ( c , S, l ) = f l ( l, c , S ; ) f l ( l = [CLS] , c , S ; ); g ( c , S, s, e ) = f s ( s, c , S ; ) + f e ( e, c , s ; ) f s ( s = [CLS] , c , S ; ) f e ( e = [CLS] , c , S ; ) .",
"We use the sum of g ( c , S, l ) and g ( c , S ) to select a long answer candidate with highest score.",
"g ( c , S ) is considered as a bias term for document fragments.",
"Then we use g ( c , S, s, e ) to select the final short answer within the selected long answer span.",
"We rely on the official NQ evaluation script to set thresholds to separate the predictions to positive and negative on both long and short answer.",
"In this section, we will first describe the data preprocessing details, then give the experimental results and analysis.",
"We also conduct an error analysis and two case studies in the appendix.",
"We ignore all the HTML tags as well as tokens not belonging to any long answer candidates.",
"The average length of documents is approximately 4 , 500 tokens after this process.",
"Following Devlin et al. (2019) and Alberti et al. (2019b), we first tokenize questions and documents using a 30 , 522 word-piece vocabulary.",
"Then we slide a window of a certain length over the entire length of the document with a stride of 128 tokens, generating a list of document fragments.",
"There are about 7 paragraphs and 18 sentences on average per document fragment.",
"We add special markup tokens at the beginning of each long answer candidate according to the content of the candidate.",
"The special tokens we introduced are of the form [Paragraph=N], [Table=N] and [List=N].",
"According to Alberti et al. (2019b), this decision was based on the observation that the first few paragraphs and tables in the document are more likely to contain the annotated answer.",
"We generate 30 instances on average per NQ example, and each instance will be processed independently during the training phase.",
"Since the fact that only a small fraction of generated instances are tagged as positive instances which contains a complete span of long or short answer, and that 51% of the documents do not contain the answers for the questions, We downsample about 97% of null instances to get about 660 , 000 training instances in which 350 , 000 has a long answer, and 270 , 000 has short answers.",
"We use three model settings for our experiments, which are: 1) Model-I: A refined BERT baseline on the basis of Alberti et al. (2019b); 2) Model-II: A pipeline model with only graph initialization method to get representation of sentence, paragraph, and document; 3) Model-III: Adding two layers of our graph encoder on the basis of Model-II.",
"Model-I improves the baseline in Alberti et al. (2019b) in two ways: 1) When training an instance with a long answer only, we ignore the loss of predicting the short answer span to no-answer because it would introduce distraction to the model.",
"2) We sample more negative instances.",
"BERT-base-uncased model finetuned on SQuAD 2.0; 2) BERT-large: a BERT-large-uncased model finetuned on SQuAD 2.0; 3) BERT-syn: Google's BERT-large-uncased model pre-trained on SQuAD2.0 with N-Gram Masking and Synthetic Self-Training.",
"2 Since the Natural Question dataset does not provide sentence-level information, we additionally use spacy (Honnibal and Mon-tani, 2017) as the sentence segmentor to get the boundaries of sentences.",
"We trained the model by minimizing loss L from Section 3.4 using the Adam optimizer (Kingma and Ba, 2015) with a batch size of 32 .",
"We trained our model for 2 epochs with an initial learning rate of 2 10 5 , and we use a warmup proportion of 0 .",
"1 .",
"The training of our proposed model is conducted on 4 Tesla P40 GPUs for approximately 2 days.",
"For each setting, the results are averaged over three models initialized with different random seeds to get a more solid comparison, which also suggests the improvements brought by our methods are relatively stable.",
"The hidden size, the number of attention heads, and the dropout rate in our graph encoder are equal to the values in the corresponding BERT model.",
"The main results are shown in Table",
"1. The results show that our best model BERT-syn+Model-III(ensemble 3 models) have gained improvement over the previous models by a large margin.",
"Our ensemble strategy is to train three models with different random seeds.",
"The scores of answer candi-2 This model can be downloaded at https://bit.ly/ 2w7nUQK .",
"dates are averaged over these three models.",
"At the time of submission (Jun. 25th, 2019), this model has achieved the state-of-the-art performance on both long answer (F1 score of 74 . 5% ) and short answer (F1 score of 58 . 7% ) on the public leaderboard 3 .",
"Furthermore, our model surpasses single human performance at both long and short answer criteria on the development dataset.",
"The comparison of different models with different BERT models is illustrated in Table",
"2. The results show that our approach significantly outperforms our baseline model on both the long answer and the short answer.",
"For the BERT-base setting, our Model-II with a pipeline inference strategy outperforms our baseline by 3 .",
"8% on long answer F1 score while our Model-II with two graph layers further improves the performance by 1 .",
"2% and 1 .",
"0% .",
"For the BERT-syn setting, the Model-III benefits less from the graph layers because the pretraining for this model is already quite strong.",
"Our Model-III with BERT-large, compared to previously public model (BERT joint ) also using BERT-large, improves long answer F1 score by 6.0% and short answer F1 score by 1.1% on the development set.",
"From Table 1 and Table 2, we can see that the ensemble of human annotators can lead to a massive improvement at both long and short answer criteria (from 73.4% to 87.2%, 57.5% to 75.7%).",
"However, the improvement of ensembling our BERT-based model is relatively smaller (from 72.7% to 73.9%, 56.9% to 59.1%).",
"This suggests that the diversity of human annotators is a lot better than the same model structure with different random seeds.",
"How to improve the diversity of the deep learning models for the open-domain datasets like NQ remains as a hard question.",
"3 Since we can only make 10 submissions on the test dataset, we only submit and report the result of our best model.",
"Due to the official attempts on the test dataset are given 24 hours.",
"We can only ensemble 3 models at most.",
"We evaluate the influence of layer numbers, which is illustrated in Table",
"3. We can see the increase in the performance of our models when the number of layers increases from 0 to 2 (The 0-layer setting means that only the graph initialization module is used to obtain the graph representations).",
"Then the model performance does not improve with the number of network layers increasing.",
"We attribute it to the fact that the information between every two nodes in our proposed graph can be passed through in no more than two edges, and that increasing the size of randomly initialized parameters may not be beneficial for BERT fine-tuning.",
"To evaluate the effectiveness of our proposed model, we conduct an ablation study on the development dataset on the BERT-base setting.",
"The results are shown in Table",
"4. First, we discuss the effect of the joint training strategy.",
"We can see that the removal of either sub-task goals will bring decreases on both tasks.",
"It suggests that the two-grained answers can promote each other with our multi-granularity representation.",
"Then we remove the whole graph module, which means the inference process depends on the score of short answer spans because long answer candidates cannot be scored.",
"We can see the decrease of both long and short answer performance by 5 .",
"0% and 0 .",
"9% , respectively, indicating the effectiveness of our proposed graph representations.",
"Finally, we investigate the effect of components in our graph encoder.",
"In Table 4, we can see that without relational embedding, the performance on the long answer and short answer both slightly decrease.",
"When removing the graph integration layer, the performance of long answer and short answer both become worse by 0 .",
"6% and 0 .",
"8% .",
"At last, we remove the self-attention layer in the graph encoder, the performance of long answer and short answer both become worse by 0 .",
"Machine reading comprehension has been widely investigated since the release of large-scale datasets (Rajpurkar et al., 2016; Joshi et al., 2017; Lai et al., 2017; Trischler et al., 2017; Yang et al., 2018).",
"Lots of work has begun to build end-to-end deep learning models and has achieved good results (Seo et al., 2017; Xiong et al., 2017; Cui et al., 2017; Devlin et al., 2019; Lv et al., 2020).",
"They normally treat questions and documents as two simple sequences regardless of their structures and focus on incorporating questions into the documents, where the attention mechanism is most widely used.",
"Clark and Gardner (2018) proposes a model for multi-paragraph reading comprehension using TF-IDF as the paragraph selection method.",
"Wang et al. (2018) focuses on modeling a passage at word and sentence level through hierarchical attention.",
"Previous work on document modeling is mainly based on a two-level hierarchy (Ruder et al., 2016; Tang et al., 2015; Yang et al., 2016; Cheng and Lapata, 2016; Koshorek et al., 2018; Zhang et al., 2019).",
"The first level encodes words or sentences to get the low-level representations.",
"Moreover, a high-level encoder is applied to obtain document representation from the low-level.",
"In these frameworks, information flows only from low-level to high-level.",
"Fernandes et al. (2018) proposed a graph neural network model for summarization and this framework allows much complex information flows between nodes, which represents words, sentences, and entities in the graph.",
"Graph neural networks have shown their flexibil-ity in a variant of NLP tasks (Zhang et al., 2018c; Marcheggiani et al., 2018; Zhang et al., 2018b; Song et al., 2018).",
"A recent approach that began with Graph Attention Networks (Velickovic et al., 2018), which applies attention mechanisms to graphs.",
"Wang et al. (2019) proposed knowledge graph attention networks to model the information in the knowledge graph, (Zhang et al., 2018a) proposed gated attention networks, which use a convolutional sub-network to control each attention head's importance.",
"We model the hierarchical nature of documents by representing them at four different levels of granularity.",
"Besides, the relations between nodes are represented by different types of edges in the graph.",
"In this work, we present a novel multi-grained MRC framework based on graph attention networks and BERT.",
"We model documents at different levels of granularity to learn the hierarchical nature of the document.",
"On the Natural Questions dataset, which contains two sub-tasks predicting a paragraph-level long answer and a token-level short answer, our method jointly trains the two sub-tasks to consider the dependencies of the two-grained answers.",
"The experiments show that our proposed methods are effective and outperform the previously existing methods by a large margin.",
"Improving our graph structure of representing the document as well as the document-level pretraining tasks is our future research goals.",
"Besides, the currently existing methods actually cannot process a long document without truncating or slicing it into fragments.",
"How to model long documents is still a problem that needs to be solved.",
"This work was supported by the National Natural Science Foundation of China (NSFC) via grant 61976072, 61632011 and 61772153."
] | [
"abstain",
"abstain",
"objective",
"result",
"abstain",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"result",
"method",
"objective",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"other"
] |
[
"While pretrained language models achieve excellent performance on natural language understanding benchmarks, they tend to rely on spurious correlations and generalize poorly to out-of-distribution (OOD) data.",
"Recent work has explored using counterfactually-augmented data (CAD)data generated by minimally perturbing examples to flip the ground-truth labelto identify robust features that are invariant under distribution shift.",
"However, empirical results using CAD during training for OOD generalization have been mixed.",
"To explain this discrepancy, through a toy theoretical example and empirical analysis on two crowdsourced CAD datasets, we show that:",
"(a) while features perturbed in CAD are indeed robust features, it may prevent the model from learning unperturbed robust features; and",
"(b) CAD may exacerbate existing spurious correlations in the data.",
"Our results thus show that the lack of perturbation diversity limits CAD's effectiveness on OOD generalization, calling for innovative crowdsourcing procedures to elicit diverse perturbation of examples.",
"Large-scale datasets have enabled tremendous progress in natural language understanding (NLU) (Rajpurkar et al., 2016; Wang et al., 2019) with the rise of pretrained language models (Devlin et al., 2019; Peters et al., 2018).",
"Despite this progress, there have been numerous works showing that models rely on spurious correlations in the datasets, i.e. heuristics that are effective on a specific dataset but do not hold in general (McCoy et al., 2019; Naik et al., 2018; Wang and Culotta, 2020).",
"For example, BERT (Devlin et al., 2019) trained on MNLI (Williams et al., 2018) learns the spurious correlation between world overlap and entailment label.",
"A recent promising direction is to collect counterfactually-augmented data (CAD) by asking humans to minimally edit examples to flip their ground-truth label (Kaushik et al., 2020).",
"Figure 1 shows example edits for Natural Language Inference (NLI).",
"Given interventions on robust features that cause the label to change, the model is expected to learn to disentangle the spurious and robust features.",
"Despite recent attempt to explain the efficacy of CAD by analyzing the underlying causal structure of the data (Kaushik et al., 2021), empirical results on out-of-distribution (OOD) generalization using CAD are mixed.",
"Specifically, Huang et al. (2020) show that CAD does not improve OOD generalization for NLI; Khashabi et al. (2020) find that for question answering, CAD is helpful only when it is much cheaper to create than standard examples but Bowman et al. (2020) report that the cost is actually similar per example.",
"In this work, we take a step towards bridging this gap between what theory suggests and what we observe in practice in regards to CAD.",
"An intuitive example to illustrate our key observation is shown in Figure 1",
"(a), where the verb eating' is changed to drinking' to flip the label.",
"While there are many other words that could have been changed to flip the label, given only these two examples, the model learns to use only the verbs (e.g. using a Naive Bayes model, all other words would have zero weights).",
"As a result, this model would fail when evaluated on examples such as those in",
"(b) where the quantifier two' is changed to three', while a model trained on the unaugmented data may learn to use the quantifiers.",
"First, we use a toy theoretical setting to formalize counterfactual augmentation, and demonstrate that with CAD, the model can learn to ignore the spurious features without explicitly intervening on them.",
"However, we find that without perturbing all robust features to generate CAD, perturbations of one robust feature can prevent the model from learning other unperturbed robust features.",
"Motivated by this, we set up an empirical analysis on 3668 Premise : The lady is standing next to her two children who are eating a pizza.",
"two crowdsourced CAD datasets collected for NLI and Question Answering (QA).",
"In the empirical analysis, we identify the robust features by categorizing the edits into different perturbation types (Wu et al., 2021) (e.g. negating a sentence or changing the quantifiers), and show that models do not generalize well to unseen perturbation types, sometimes even performing worse than models trained on unaugmented data.",
"Our analysis of the relation between perturbation types and generalization can help explain other observations such as CAD being more beneficial in the low-data regime.",
"With increasing data size, improvement from using CAD plateaus compared to unaugmented data, suggesting that the number of perturbation types in existing CAD datasets does not keep increasing.",
"Another consequence of the lack of diversity in edits is annotation artifacts, which may produce spurious correlations similar to what happens in standard crowdsourcing procedures.",
"While CAD is intended to debias the dataset, surprisingly, we find that crowdsourced CAD for NLI exacerbates word overlap bias (McCoy et al., 2019) and negation bias (Gururangan et al., 2018a) observed in existing benchmarks.",
"In sum, we show that while CAD can help the model ignore spurious feature, its effectiveness in current CAD datasets is limited by the set of robust features that are perturbed.",
"Furthermore, CAD may exacerbate spurious correlations in existing benchmarks.",
"Our results highlight the importance of increasing the diversity of counterfactual perturbations during crowdsourcing: We need to elicit more diverse edits of examples that make models more robust to the complexity of language.",
"In this section, we use a toy setting with a linear Gaussian model and squared loss to formalize counterfactual augmentation and discuss the conditions required for it's effectiveness.",
"The toy example serves to motivate our empirical analysis in Section 3.",
"We adopt the setting in Rosenfeld et al. (2021): each example consists of robust features x r R d r whose joint distribution with the label is invariant during training and testing, and spurious features x s R d s whose joint distribution with the label varies at test time.",
"Here d r and d s denote the feature dimensions.",
"We consider a binary classification setting where the label y { 1 , 1 } is drawn from a uniform distribution, and both the robust and spurious features are drawn from Gaussian distributions.",
"Specifically, an example x = [ x r , x s ] R d is generated by the following process (where d = d r + d s ): y = (cid:40) 1 w.p. 0 .",
"where r R d r ; s R d s ; r , s R ; and I is the identity matrix.",
"1 The corresponding data distribution is denoted by D .",
"Note that the relation between y and the spurious features x s depends on s and s , which may change at test time, thus relying on x s may lead to poor OOD performance.",
"1 This model corresponds to the anti-causal setting (Scholkopf et al., 2012), i.e. the label causing the features.",
"We adopt this setting since it is consistent with how most data is generated in tasks like NLI, sentiment analysis etc. 3669 Intuitively, in this toy setting, a model trained with only access to examples from D would not be able to differentiate between the spurious and robust features, since they play a similar role in the data generating process for D .",
"Formally, consider the setting with infinite samples from D where we learn a linear model ( y = w T x where w R d ) by least square regression.",
"Let w R d be the optimal solution on D (without any counterfactual augmentation).",
"The closed form solution is: Cov( x, x ) w = Cov( x, y ) w = Cov( x, x ) 1 (4) where = [ r , s ] R d and Cov( ) denotes the covariance matrix: Cov( x, x ) = (cid:20) r r Ts s Tr s (cid:21) , (5) where r , s are covariance matrices of x r and x s respectively.",
"This model relies on x s whose relationship with the label y can vary at test time, thus it may have poor performance under distribution shift.",
"A robust model w inv that is invariant to spurious correlations would ignore x s : w inv = (cid:2) 1 r r , 0 (cid:3) .",
"The counterfactual data is generated by editing an example to flip its label.",
"We model the perturbation by an edit vector z that translates x to change its label from y to y (i.e. from 1 to -1 or vice versa).",
"For instance, the counterfactual example of a positive example ( x, +1) is ( x + z, 1) .",
"Specifically, we define the edit vector to be z = [ yz r , yz s ] R d , where z r R d r and z s R d s are the displacements for the robust and spurious features.",
"Here, z is label-dependent so that examples with different labels are translated in opposite directions.",
"Therefore, the counterfactual example ( x c , y ) generated from ( x, y ) has the following distribution: x cr | y N ( y ( r + z r ) , 2 r I ) , (7) x c s | y N ( y ( s + z s ) , 2 s I ) .",
"Optimal edits.",
"Ideally, the counterfactual data should de-correlate x s and y , thus it should only perturb the robust features x r , i.e. z = [ yz r , 0] .",
"To find the displacement z r that moves x across the decision boundary, we maximize the log-likelihood of the flipped label under the data generating distribution D : z r = arg max z r R dr E ( x,y ) D log p ( y | x + [ yz r , 0]) = 2 r .",
"Intuitively, it moves the examples towards the mean of the opposite class along coordinates of the robust features.",
"Using the edits specified above, if the model trained on D c has optimal solution w c , we have: Cov( x, x ) w c = Cov( x, y ) w c = (cid:2) 1 r r , 0 (cid:3) = w inv .",
"Incomplete edits.",
"There is an important assump-tion made in the above result: we have assumed all robust features are edited.",
"Suppose we have two sets of robust features x r 1 and x r 2 , 2 then not editing x r 2 would effectively make it appear spurious to the model and indistinguishable from x s .",
"In practice, this happens when there are multiple robust features but only a few are perturbed during counterfactual augmentation (which can be common during data collection if workers rely on simple patterns to make the minimal edits).",
"Considering the NLI example, if all entailment examples are flipped to non-entailment ones by inserting a negation word, then the model will only rely on negation to make predictions.",
"More formally, consider the case where the original examples x = [ x r 1 , x r 2 , x s ] and counterfactual examples are generated by incomplete edits z = [ z r 1 , 0 , 0] that perturb only x r 1 .",
"Using the same analysis above where z r 1 is chosen by maximum likelihood estimation, let the model learned on the incompletely augmented data be denoted by w inc .",
"We can then show that the error of the model trained from incomplete edits can be more than that of the model trained without any counterfactual augmentation under certain conditions.",
"More formally, we have the following: 2 We assume they are conditionally independent given the label.",
"Proposition 1. Define the error for a model as (cid:96) ( w ) = E x F (cid:2) ( w T inv x w T x ) 2 (cid:3) where the distribution F is the test distribution in which x r and x s are independent: x r | y N ( y r , 2 r I ) and x s N (0 , I ) .",
"Assuming all variables have unit variance (i.e. r = 1 and s = 1), (cid:107) r (cid:107) = 1, and (cid:107) s (cid:107) = 1 , we get (cid:96) ( w inc ) > (cid:96) ( w ) if (cid:107) r 1 (cid:107) 2 < 1+ 13 6 0 .",
"767 , where (cid:107) (cid:107) denotes the Euclidean norm, and r 1 is the mean of the perturbed robust feature r 1 .",
"Intuitively, this statement says that if the norm of the edited robust features (in the incomplete-edits model) is sufficiently small, then the test error for a model with counterfactual augmentation will be more than a model trained with no augmentation.",
"Proof Sketch.",
"The proof mainly follows from algebra and using the fact that Cov( x, x ) 1 is a block matrix consisting of rank-one perturbations of the identity matrix.",
"We refer the reader to Appendix A for the detailed proof.",
"Thus, Proposition 1 implies that perturbing only a small subset of robust features could perform worse than no augmentation, indicating the importance of diversity in CAD.",
"Next, we show that the problem of incomplete edits is exhibited in real CAD too.",
"In this section, we test the following hypothesis based on the above analysis: models trained on CAD are limited to the specific robust features that are perturbed and may not learn other unperturbed robust features.",
"We empirically analyze how augmenting counterfactual examples by perturbing one robust feature affects the performance on examples generated by perturbing other robust features.",
"Perturbation types.",
"Unlike the toy example, in NLU it is not easy to define robust features since they typically correspond to the semantics of the text (e.g. sentiment).",
"Following Kaushik et al. (2021) and similar to our toy model, we define robust features as spans of text whose distribution with the label remains invariant, whereas spans of text whose dependence on the label can change during evaluation are defined as spurious features.",
"We then use linguistically-inspired rules (Wu et al., 2021) to categorize the robust features into several perturbation types : negation , quantifier , lexical , insert , delete and resemantic .",
"Table 1 gives the definitions of each type.",
"Train/test sets.",
"Both the training sets and the test sets contain counterfactual examples generated by a particular perturbation type.",
"To test the generalization from one perturbation type to another, we use two types of test sets: aligned test sets where examples are generated by the same perturbation type as the training data; and unaligned test sets where examples are generated by unseen perturbation types (e.g. training on examples from lexical and testing on negation ).",
"Data.",
"We experiment on two CAD datasets collected for SNLI (Kaushik et al., 2020) and BoolQ (Khashabi et al., 2020).",
"The size of the paired data (seed examples and edited examples) for each perturbation type in the training dataset is given in Table 1. Since some types (e.g. delete ) contain too few examples for training, we train on the top three largest perturbation types: lexical , insert , and resemantic for SNLI; and lexical , negation , and resemantic for BoolQ.",
"2: Accuracy of NLI CAD on both aligned and unaligned test sets.",
"We report the mean and standard deviation across 5 random seeds.",
"Each dataset has a total of 1400 examples.",
"On average models perform worse on unaligned test sets (i.e. unseen perturbation types).",
"experiments, we use 700 seed examples and their corresponding 700 perturbations for each perturbation type.",
"As a baseline (SNLI seed'), we subsample examples from SNLI to create a similar sized dataset for comparison.",
"3 For BoolQ (Clark et al., 2019a), our initial experiments show that training on only CAD does not reach above random-guessing.",
"Thus, we include all original training examples in BoolQ (Khashabi et al., 2020), and replace part of them with CAD for each perturbation type.",
"This results in a training set of 9427 examples of which 683 are CAD for each perturbation type.",
"The size 683 is chosen to match the the smallest CAD type for BoolQ.",
"As a baseline (BoolQ seed'), we train on all the original training examples, consisting of 9427 examples.",
"For both datasets, the training, dev and test sets are created from their respective splits in the CAD datasets.",
"The size of the dev and test sets is reported in Appendix B.2.",
"Model.",
"We use the Hugging Face implementation (Wolf et al., 2020) of RoBERTa (Liu et al., 2019) to fine-tune all our models.",
"To account for the small dataset sizes, we run all our experiments with 5 different random seeds and report the mean and standard deviation.",
"Details on hyperparameter tuning are reported in Appendix B.1.",
"4 3 We observe similar trends when using different subsets of the SNLI data.",
"We discuss results for the main question in this sectionhow does adding CAD generated from one perturbation type affect performance on examples generated from other perturbation types?",
"Table 2 and 3 show results for SNLI and BoolQ.",
"CAD performs well on aligned test sets.",
"We see that on average models perform very well on the aligned test sets (same perturbation type as the training set), but do not always do well on unaligned test sets (unseen perturbation types), which is consistent with our analysis in Section 2. On SNLI, one exception is resemantic , which performs well on unseen perturbation types.",
"We believe this is because it is a broad category (replac-ing any constituent) that covers other types such as lexical (replacing any word).",
"Similarly, on BoolQ, lexical and resemantic both perform better than the baseline on some unaligned test sets (e.g. quantifier ), but they perform much better on the aligned test sets.",
"CAD sometimes performs worse than the baseline on unaligned test sets.",
"For example, on SNLI, training on insert does much worse than the seed baseline on lexical and resemantic , and SNLI seed performs best on quantifier and negation .",
"On BoolQ, training on negation does slightly worse than the baseline on lexical and resemantic .",
"This suggests that augmenting perturbations of one particular robust feature may reduce the model's reliance on other robust features, 3672 ins ins+lex ins+lex+resem All Types Diversity 56 58 60 62 64 66 68 70 A cc u r a c y o n MNLI ( OOD ) SNLI Baseline Figure 2: OOD accuracy (mean, std. deviation) on MNLI of models trained on SNLI CAD and SNLI seed (baseline) with increasing number of perturbation types and fixed training set size.",
"In Section 3.3, we have seen that training on CAD generated by a single perturbation type does not generalize well to unseen perturbation types.",
"However, in practice CAD contains many different perturbation types.",
"Do they cover enough robust features to enable OOD generalization?",
"Increasing Diversity.",
"We first verify that increasing the number of perturbed robust features leads to better OOD generalization.",
"Specifically, we train models on subsets of SNLI CAD with increasing coverage of perturbation types and evaluate on MNLI as the OOD data.",
"Starting with only insert , we add one perturbation type at a time until all types are included; the total number of examples are fixed throughout the process at 1400 (which includes 700 seed examples and the corresponding 700 perturbations).",
"Figure 2 shows the OOD accuracy on MNLI when trained on CAD and SNLI seed examples of the same size.",
"We observe that as the number of perturbation types increases, models generalize better to OOD data despite fixed training data size.",
"The result highlights the importance of collecting a diverse set of counterfactual examples, even if each perturbation type is present in a small amount.",
"A natural question to ask here is: If we continue to collect more counterfactual data, does it cover more perturbation types and hence lead to better OOD generalization?",
"Thus we investigate the im-BERT RoBERTa SNLI seed 59.7 0.3 73.8 1.2 CAD 60.2 1.0 70.0 1.1 Table 4: Accuracy (mean and std. deviation across 5 runs) on MNLI of different pretrained models fine-tuned on SNLI seed and CAD.",
"Role of Dataset Size.",
"To better understand the role dataset size plays in OOD generalization, we plot the learning curve on SNLI CAD in Figure 3, where we gradually increase the amount of CAD for training.",
"The baseline model is trained on SNLI seed examples of the same size, and all models are evaluated on MNLI (as the OOD dataset).",
"We also conduct a similar experiment on BoolQ in Figure 4, where a subset of MultiRC (Khashabi et al., 2018) is used as the OOD dataset following Khashabi et al. (2020).",
"Since the test set is unbalanced, we report F1 scores instead of accuracy in this case.",
"For SNLI, CAD is beneficial for OOD generalization only in low data settings ( < 2000 examples).",
"As the amount of data increases, the comparable SNLI baseline performs better and surpasses the performance of CAD.",
"Similarly for BoolQ, we observe that CAD is comparable to the baseline in the low data setting ( 1000 examples).",
"Surprisingly, more CAD for BoolQ leads to worse OOD performance.",
"We suspect this is due to overfitting to the specific perturbation types present in BoolQ CAD.",
"Intuitively, as we increase the amount of data, the diversity of robust features covered by the seed examples also increases.",
"On the other hand, the benefit of CAD is restricted to the perturbed robust features.",
"The plateaued performance of CAD (in the case of NLI) shows that the diversity of perturbations may not increase with the data size as fast as we would like, calling for better crowdsourcing protocols to elicit diverse edits from workers.",
"Role of Pretraining.",
"Tu et al. (2020) show that larger pretrained models generalize better from minority examples.",
"Therefore, in our case we would expect CAD to have limited benefit on larger pretrained models since they can already leverage the 5 The results in Figure 2 when all perturbation types are included indicate that CAD performs better than the SNLI baseline.",
"This is not in contradiction with the results found in Huang et al. (2020), since our models are trained on only a subset of CAD.",
"This further motivates the study of how CAD data size affects generalization.",
"diverse (but scarce) robust features revealed by SNLI examples.",
"We compare the results of BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) trained on SNLI CAD in Table 4 both models are fine-tuned on the SNLI CAD dataset and are evaluated on the OOD set (MNLI).",
"For the RoBERTa model (pretrained on more data), CAD no longer improves over the SNLI baseline, suggesting that current CAD datasets may not have much better coverage of robust features than what stronger pretrained models can already learn from benchmarks like SNLI.",
"An artifact of underdiverse perturbations is the newly introduced spurious correlations.",
"As an example, in the extreme case where all entailment examples are flipped to non-entailment by the negation operation in Table 1, the model would learn to exclusively rely on the existence of negation words to make predictions, which is clearly undesirable.",
"In this section, we study the impact of CAD on two known spurious correlations in NLI benchmarks: word overlap bias (McCoy et al., 2019) and negation bias (Gururangan et al., 2018b).",
"Negation bias.",
"We take examples where there is a presence of a negation word (i.e. \"no\", \"not\", \"n't\") in the hypothesis, and plot the fraction of examples in each class in both the seed and the Stress Test MNLI subset SNLI Seed 57.5 4.6 63.3 3.8 CAD 49.6 1.5 55.7 4.2 Table 5: Accuracy of models on challenge examples in the stress test and MNLI, where non-contradiction examples contain a negation word in the hypothesis.",
"corresponding CAD examples in Figure 5a.",
"As expected, contradiction is the majority class in the seed group, but surprisingly, including CAD amplifies the fraction of contradiction examples!",
"As a result, training on CAD leads to worse performance on challenge sets that counter the negation bias compared to training on seed examples of the same size.",
"Specifically, we test on the negation' part of the Stress Tests (Naik et al., 2018) 6 and challenge examples in the combined MNLI development set which contain negation words in the hypothesis but are not contradictions.",
"Table 5 shows that models trained on CAD perform worse on both test sets, implying that they rely more on the negation bias.",
"Word-overlap bias.",
"Similarly, in Figure 5b, we show that CAD amplifies the fraction of entailment examples among those with high word overlap (i.e. more than 90% of words in the hypoth-6 Synthetic examples where the phrase and false is not true is appended to the hypothesis of MNLI examples.",
"esis are present in the premise).",
"Models trained on SNLI and CAD both perform poorly (< 10% accuracy) on the non-entailment subset of HANS challenge set (McCoy et al., 2019), which exploits the word overlap bias.",
"Takeaway.",
"This section reveals that in the process of creating CAD, we may inadvertently exacerbate existing spurious correlations.",
"The fundamental challenge here is that perturbations of the robust features are only observed through word change in the sentenceit is hard to surface the underlying causal variables without introducing (additional) artifacts to the sentence form.",
"Label-Preserving Data Augmentation.",
"A common strategy to build more robust models is to augment existing datasets with examples similar to those from the target distribution.",
"Min et al. (2020) improve accuracy on HANS challenge set (McCoy et al., 2019) by augmenting syntactically-rich examples.",
"Jia and Liang (2016) and Andreas (2020) recombine examples to achieve better compositional generalization.",
"There has also been a recent body of work using task-agnostic data augmentation by paraphrasing (Wei and Zou, 2019), back-translation (Sennrich et al., 2016) and masked language models (Ng et al., 2020).",
"The main difference between these works and CAD is that the edits in these works are label-preserving whereas they are label-flipping in CADthe former prevents models from being over-sensitive and the latter alleviates under-sensitivity to perturbations.",
"Label-Changing Data Augmentation.",
"Lu et al. (2020) and Zmigrod et al. (2019) use rule-based CAD to mitigate gender stereotypes.",
"Gardner et al. (2020) build similar contrast sets using expert edits for evaluation.",
"In contrast, Kaushik et al. (2020) crowdsource minimal edits.",
"Recently, Teney et al. (2020) also use CAD along with additional auxiliary training objectives and demonstrate improved OOD generalization.",
"Kaushik et al. (2021) analyze a similar toy model (linear Gaussian model) demonstrating the benefits of CAD, and showed that noising the edited spans hurts performance more than other spans.",
"Our analysis complements theirs by showing that while spans identified by CAD are useful, a lack of diversity in these spans limit the effectiveness of CAD, thus better coverage of robust features could potentially lead to better OOD generalization.",
"Robust Learning Algorithms.",
"Another direction of work has explored learning more robust models without using additional augmented data.",
"These methods essentially rely on learning debi-ased representationsWang et al. (2018) create a biased classifier and project its representation out of the model's representation.",
"Along similar lines, Belinkov et al. (2019) remove hypothesis-only bias in NLI models by adversarial training.",
"He et al. (2019) and Clark et al. (2019b) correct the conditional distribution given a biased model.",
"Utama et al. (2020) build on this to remove unknown' biases, assuming that a weak model learns a biased representations.",
"More recently, Veitch et al. (2021) use ideas from causality to learn invariant predic-3675 tors from counterfactual examples.",
"The main difference between these methods and CAD is that the former generally requires some prior knowledge of what spurious correlations models learn (e.g. by constructing a biased model or weak model), whereas CAD is a more general human-in-the-loop method that leverages humans' knowledge of robust features.",
"In this work, we first analyzed CAD theoretically using a linear model and showed that models do not generalize to unperturbed robust features.",
"We then empirically demonstrated this issue in two CAD datasets, where models do not generalize well to unseen perturbation types.",
"We also showed that CAD amplifies existing spurious correlations, pointing out another concern.",
"Given these results, a natural question is: How can we fix these problems and make CAD more useful for OOD generalization?",
"We discuss a few directions which we think could be helpful: We can use generative models (Raffel et al., 2020; Lewis et al., 2020) to generate diverse minimal perturbations and then crowdsource labels for them (Wu et al., 2021).",
"We can improve the diversity of the generations by masking different spans in the text to be in-filled, thus covering more robust features.",
"An alternative to improving the crowdsourcing procedure is to devise better learning algorithms which mitigate the issues pointed out in this work.",
"For example, given that we know the models do not always generalize well to unperturbed features, we can regularize the model to limit the reliance on the perturbed features.",
"We hope that this analysis spurs future work on CAD, making them more useful for OOD generalization.",
"We thank Divyansh Kaushik, Tatsunori Hashimoto and members of the NYU ML2 group for discussion and feedback on the work.",
"The first author is supported by a NSF Graduate Research Fellowship under grant number 1839302.",
"This work was partly supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: From Pattern Recognition to AI)."
] | [
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"objective",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"result",
"result",
"method",
"result",
"result",
"method",
"abstain",
"other",
"other",
"other"
] |
[
"Multi-label text classification is one of the fundamental tasks in natural language processing.",
"Previous studies have difficulties to distinguish similar labels well because they learn the same document representations for different labels, that is they do not explicitly extract label-specific semantic components from documents.",
"Moreover, they do not fully explore the high-order interactions among these semantic components, which is very helpful to predict tail labels.",
"In this paper, we propose a novel label-specific dual graph neural network (LDGN), which incorporates category information to learn label-specific components from documents, and employs dual Graph Convolution Network (GCN) to model complete and adaptive interactions among these components based on the statistical label co-occurrence and dynamic reconstruction graph in a joint way.",
"Experimental results on three benchmark datasets demonstrate that LDGN significantly outperforms the state-of-the-art models, and also achieves better performance with respect to tail labels.",
"Automatically labeling multiple labels of documents is a fundamental and practical task in natural language processing.",
"Recently, with the growth of data scale, multi-label text classifica-tion(MLTC) has attracted more attention, since it is often applied to many fields such as sentiment analysis (Liu and Chen, 2015; Li et al., 2016), emotion recognition (Wang et al., 2016; Jabreel and Moreno, 2019), web page tagging (Jain et al., 2016) and so on.",
"However, the number of labels and documents and the complex relations of labels render it an unsolved and challenging task.",
"representation (Liu et al., 2017) and modeling label dependency (Zhang et al., 2018; Yang et al., 2018; Tsai and Lee, 2019) to improve classification performance.",
"Although they have explored the informative words in text content, or considered the label structure and label semantics to capture label correlations, these models cannot distinguish similar labels well (e.g., the categories Prices vs Consumer Prices in Reuters News).",
"The main reason is that most of them neglect the semantic connections between labels and input documents and they learn the same document representations for different labels, which cannot issue the label similarity problem.",
"More specifically, they do not explicitly consider the corresponding semantic parts of each label in the document.",
"Recently, some studies (You et al., 2019; Xiao et al., 2019; Du et al., 2019) have used attention mechanism to explore the above semantic connections, and learn a label-specific document representation for classification.",
"These methods have obtained promising results in MLTC, which shows the importance of exploring semantic connections.",
"However, they did not further study the interactions between label-specific semantic components which can be guided by label correlations, and thus these models cannot work well on predicting tail labels which is also a challenging issue in MLTC.",
"To handle these issues, a common way to explore the semantic interactions between label-specific parts in document is to utilize the statistical correlations between categories to build a label co-occurrence graph for guiding interactions.",
"Nevertheless, statistical correlations have three drawbacks.",
"First, the co-occurrence patterns between label pairs obtained from training data are incomplete and noisy.",
"Specifically, the label co-occurrences that appear in the test set but do not appear in the training set may be ignored, while some rare label co-occurrences in the statistical correlations may be noise.",
"Second, the label co-occurrence graph is built in global, which may be biased for rare label correlations.",
"And thus they are not flexible to every sample document.",
"Third, statistical label correlations may form a long-tail distribution, i.e., some categories are very common while most categories have few of documents.",
"This phenomenon may lead to models failing to predict low-frequency labels.",
"Thus, our goal is to find a way to explore the complete and adaptive interactions among label-specific semantic components more accurately.",
"In this paper, we investigate: (1) how to explicitly extract the semantic components related to the corresponding labels from each document; and (2) how to accurately capture the more complete and more adaptive interactions between label-specific semantic components according to label dependencies.",
"To solve the first challenge, we exploit the attention mechanism to extract the label-specific semantic components from the text content, which can alleviate the label similar problem.",
"To capture the more accurate high-order interactions between these semantic components, we first employ one Graph Convolution Network (GCN) to learn component representations using the statistical label co-occurrence to guide the information propagation among nodes (components) in GCN.",
"Then, we use the component representations to reconstruct the adjacency graph dynamically and re-learn the component representations with another GCN, and thus we can capture the latent interactions between these semantic components.",
"Finally, we exploit final component representations to predict labels.",
"We evaluate our model on three real-world datasets, and the results show that the proposed model LDGN outperforms all the comparison methods.",
"Further studies demonstrate our ability to effectively alleviate the tail labels problem, and accurately capture the meaningful interactions between label-specific semantic components.",
"The contributions of this paper are as follows: We propose a novel label-specific dual graph neural network (LDGN), which incorporates category information to extract label-specific components from documents, and explores the interactions among these components.",
"To model the accurate and adaptive interactions, we jointly exploit global co-occurrence patterns and local dynamic relations.",
"To make up the deficiency of co-occurrences, we employ the local reconstruction graph which is built by every document dynamically.",
"We conduct a series of experiments on three public datasets, and experimental results demonstrate that our model LDGN significantly outperforms the state-of-the-art models, and also achieves better performance with respect to tail labels.",
"As depicted in Figure 1, our model LDGN is composed of two major modules: 1) label-specific document representation 2) dual graph neural network for semantic interaction learning.",
"Specifically, label-specific document representation learning describes how to extract label-specific semantic components from the mixture of label information in each document; and the dual graph neural network for semantic interaction learning illustrates how to accurately explore the complete interactions among these semantic components under the guidance of the prior knowledge of statistical label co-occurrence and the posterior information of dynamic reconstruction graph.",
"Problem Formulation: Let D = { x i , y i } N be the set of documents, which consists of N document x i and its corresponding label y i { 0 , 1 } | C | , where | C | denotes the total number of labels.",
"Each document x i contains J words x i = w i 1 , w i 2 , . . . , w iJ .",
"The target of multi-label text classification is to learn the mapping from input text sequence to the most relevant labels.",
"Given a document x with J words, we first embed each word w j in the text into a word vector e wj R d , where d is the dimensionality of word embedding vector.",
"To capture contextual information from two directions of the word sequence, we first use a bidirectional LSTM to encode word-level semantic information in document representation.",
"And we concatenate the forward and backward hidden states to obtain the final word sequence vector h R | J | D .",
"After that, to explicitly extract the corresponding semantic component related to each label from documents, we use a label guided attention mechanism to learn label-specific text representation.",
"Firstly, we randomly initialize the label representation C R | C | d c , and compute the label-aware attention values.",
"Then, we can induce the label-specific semantic components based on the label guided attention.",
"The formula is as follows: ij = exp (cid:0) h j c Ti (cid:1) (cid:80) j exp (cid:0) h j c Ti (cid:1) , (1) u i = (cid:88) j ij h j , (2) where ij indicates how informative the j-th text feature vector is for the i-th label.",
"u i RD denotes the semantic component related to the label c i in this document.",
"Interaction Learning with Statistical Label Co-occurrence To capture the mutual interactions between the label-specific semantic components, we build a label graph based on the prior knowledge of label co-occurrence, each node in which correlates to a label-specific semantic component u i .",
"And then we apply a graph neural network to propagate message between nodes.",
"Formally, we define the label graph G = ( V , E ) , where nodes refer to the categories and edges refer to the statistical co-occurrence between nodes (categories).",
"Specifically, we compute the probability between all label pairs in the training set and get the matrix A s R | C || C | , where A sij denotes the conditional probability of a sample belonging to category C i when it belongs to category C j .",
"Then, we utilize GCN (Kipf and Welling, 2017) to learn the deep relationships between label-specific semantic components guided by the statistical label correlations.",
"GCNs are neural networks operating on graphs, which are capable of enhancing node representations by propagating messages between neighboring nodes.",
"In multi-layer GCN, each GCN layer takes the component representations from previous layer H l as inputs and outputs enhanced component representations, i.e., H l +1 .",
"The layer-wise propagation rule is as follows: H l +1 = (cid:16) (cid:98) A s H l W l (cid:17) , (3) where ( ) denotes LeakyReLU (Maas et al., 2013) activation function.",
"W l RD D (cid:48) is a transformation matrix to be learned.",
"(cid:98)",
"A denotes the normalized adjacency matrix, and the normalization method (Kipf and Welling, 2017) is: (cid:98) A = D 12 AD 12 , (4) where D is a diagonal degree matrix with entries D ij = j A ij Depending on how many convolutional layers are used, GCN can aggregate information only about immediate neighbors (with one convolutional layer) or any nodes at most K-hops neighbors (if K layers are stacked).",
"See (Kipf and Welling, 2017) for more details about GCN.",
"We use a two-layer GCN to learn the interactions between label-specific components.",
"The first layer takes the initialized component representations U R | C | D in Equation 2 as inputs H 0 ; and the last layer outputs H 2 R | C | D (cid:48) with D (cid:48) denoting the dimensionality of final node representations.",
"However, the statistical label correlations obtained by training data are incomplete and noisy.",
"pairs may form a long-tail distribution.",
"Re-learning with Dynamic Reconstruction Graph To capture the more complete and adaptive interactions between these components, we exploit the above component representations H 2 to reconstruct the adjacency graph dynamically, which can make up the deficiency of co-occurrence matrix.",
"And then we re-learn the interactions among the label-specific components guided by the posterior information of dynamic reconstruction graph.",
"Specifically, we apply two 1 1 convolution layers and dot product to get the dynamic reconstruction graph AD as follows: AD = f (cid:16)(cid:0) W a H 2 (cid:1) T (cid:0) W b H 2 (cid:1)(cid:17) , (5) where W a R d 1 D (cid:48) and W b R d 1 D (cid:48) are the weights of two convolution layers, f is the sigmoid activation function.",
"And then we normalize the reconstruction adjacency matrix as Equation 4, and obtain the normalized adjacency matrix (cid:98) AD of reconstruction graph.",
"In a similar way as Equation 3, we apply another 2-layer GCN to learn the deep correlations between components with the dynamic reconstruction graph.",
"The first layer of this GCN takes the component representations H 2 as inputs, and the last layer outputs the final component representations H 4 R | C | D (cid:48) .",
"After the above procedures, we concatenate the two types of component representations HO = [ H 2 , H 4 ] and feed it into a fully connected layer for prediction: (cid:98) y = ( W 1 HO ) , where W 1 R 2 D (cid:48) 1 and is the sigmoid function.",
"We use y R | C | to represent the ground-truth label of a document, where y i = 0 , 1 denotes whether label i appears in the document or not.",
"The proposed model LDGN is trained with the multi-label cross entropy loss: L = C (cid:88) c =1 y c log ( (cid:98) y c ) + (1 y c ) log (1 (cid:98) y c ) .",
"classification datasets, which are AAPD (Yang et al., 2018), EUR-Lex (Mencia and Furnkranz, 2008) and RCV1 (Lewis et al., 2004).",
"The statistics of these three datasets are listed in Table 1.",
"Evaluation Metric Following the settings of previous work (You et al., 2019; Xiao et al., 2019), we use precision at top K (P@k) and Normalized Discounted Cumulated Gains at top K (nDCG@k) for performance evaluation.",
"The definition of two metrics can be referred to You et al. (2019).",
"Implementation Details For a fair comparison, we apply the same dataset split as previous work (Xiao et al., 2019), which is also the original split provided by dataset publisher (Yang et al., 2018; Mencia and Furnkranz, 2008).",
"The word embeddings in the proposed network are initialized with the 300-dimensional word vectors, which are trained on the datasets by Skip-gram (Mikolov et al., 2013) algorithm.",
"The hidden sizes of Bi-LSTM and GCNs are set to 300 and 512, respectively.",
"We use the Adam optimization method (Kingma and Ba, 2014) to minimize the cross-entropy loss, the learning rate is initialized to 1e-3 and gradually decreased during the process of training.",
"We select the best parameter configuration based on performance on the validation set and evaluate the configuration on the test set.",
"Our code is available on GitHub 1 .",
"We compare the proposed model with recent deep learning based methods for MLTC, including seq2seq models, deep embedding models, and label attention based models.",
"And it should be noted that, because of different application scenarios, we did not choose the label tree-based methods and extreme text focused methods as baseline models.",
"XML-CNN (Liu et al., 2017): a CNN-based 1 https://github.com/Makwen1995/LDGN MLTC Models AAPD EUR-Lex P@1 P@3 P@5 N@3 N@5 P@1 P@3 P@5 N@3 N@5 XML-CNN 74.38 53.84 37.79 71.12 75.93 70.40 54.98 44.86 58.62 53.10 SGM 75.67 56.75 35.65 72.36 75.35 70.45 60.37 43.88 60.72 55.24 DXML 80.54 56.30 39.16 77.23 80.99 75.63 60.13 48.65 63.96 53.60 AttentionXML 83.02 58.72 40.56 78.01 82.31 67.34 52.52 47.72 56.21 50.78 EXAM 83.26 59.77 40.66 79.10 82.79 74.40 61.93 50.98 65.12 59.43 LSAN 85.28 61.12 41.84 80.84 84.78 79.17 64.99 53.67 68.32 62.47 LDGN 86.24 61.95 42.29 83.32 86.85 81.03 67.79 56.36 71.81 66.09 Table 2: Comparisons with state-of-the-art methods on both AAPD and EUR-Lex datasets.",
"model which uses CNN and a dynamic pooling layer to extract high-level feature for MLTC.",
"SGM (Yang et al., 2018): a sequence generation model which models label correlations as an ordered sequence.",
"DXML (Zhang et al., 2018): a deep embedding method which models the feature space and label graph structure simultaneously.",
"AttentionXML (You et al., 2019): a label tree-based deep learning model which uses a probabilistic label tree and multi-label attention to capture informative words in extreme-scale data.",
"EXAM (Du et al., 2019): a novel framework that leverages the label information to compute the word-level interactions.",
"LSAN (Xiao et al., 2019): a label-specific attention network model based on self-attention and label-attention mechanism.",
"The SotA model (i.e., LSAN) used BiLSTM model for text representations.",
"For a fair comparison, we also take BiLSTM as text encoder in our model.",
"Table 2 and Table 3 demonstrate the performance of all the compared methods based on the three datasets.",
"For fair comparison, the experimental results of baseline models are directly cited from previous studies (Xiao et al., 2019).",
"We also bold the best result of each column in all tables.",
"baselines on three datasets.",
"The outstanding results confirm the effectiveness of label-specific semantic interaction learning with dual graph neural network, which include global statistical patterns and local dynamic relations.",
"It is observed that the performance of XML-CNN is worse than other comparison methods.",
"The reason is that it only exploits the text content of documents for classification but ignores the label correlations which have been proven very important for multi-label classification problem.",
"The label tree-based model AttentionXML performs better than the seq2seq method ( SGM ) and the deep embedding method ( DXML ).",
"Although both DXML and SGM employ a label graph or an ordered sequence to model the relationship between labels, they ignore the interactions between labels and document content.",
"And AttentionXML uses multi-label attention which can focus on the most relevant parts in content and extract different semantic information for each label.",
"Compared with other label attention based 70 75 80 85 90 95 PSP@1 PSP@3 PSP@5 LSAN LDGN",
"methods ( AttentionXML, EXAM ), LSAN performs the best because it takes the semantic correlations between document content and label text into account simultaneously, which exploits an adaptive fusion to integrate self-attention and label-attention mechanisms to learn the label-specific document representation.",
"In conclusion, the proposed network LDGN outperforms sequence-to-sequence models, deep embedding models, and label attention based models, and the metrics P @ k and nDCG @ k of multi-label text classification obtain significant improvement.",
"Specifically, on the AAPD dataset, LDGN increases the P @1 of the LSAN method (the best baseline) from 85.28% to 86.24%, and increases nDCG @3 and nDCG @5 from 80.84% to 83.33%, 84.78% to 86.85% , respectively.",
"On the EUR-Lex dataset, the metric P @1 is boosted from 79.17% to 81.03%, and P @5 and nDCG @5 are increased from 53.67% to 56.36%, 62.47% to 66.09%, respectively.",
"On the RCV1 dataset, the P @ k of our model increased by 0.3% at average, and LDGN achieves 1% and 1.6% absolute improvement on nDCG @3 , 5 compared with LSAN .",
"The improvements of the proposed LDGN model demonstrate that the semantic interaction learning with joint global statistical relations and local dynamic relations are generally helpful and effective, and LDGN can capture the deeper correlations between categories than LSAN .",
"We perform a series of ablation experiments to examine the relative contributions of dual graph-based semantic interactions module.",
"To this end, LDGN is compared with its three variants:(1) S : Graph-based semantic interactions only with statistical label co-occurrence; (2) D : Graph-based semantic interactions only with dynamic reconstruction graph; (3) no-G :Removing the dual graph 82 83 84 85 86 87 88 P@1 N@5 S D no-G S+D",
"neural network.",
"For a fair comparison, both S and D use 4-layer GCN which is the same as LDGN .",
"As presented in Figure 3, S and D perform better than no-G , which demonstrates that exploring either statistical relations or dynamic relations can correctly capture the effective semantic interactions between label-specific components.",
"D performs better than S , indicating the model with local dynamic relations is adaptive to data and has better stability and robustness, which also shows that the model with local dynamic relations can capture semantic dependencies more effectively and accurately.",
"The performance of S+D (i.e., LDGN ) combining two aspect relations obtains significant improvement, which shows dynamic relations can make up the deficiency of statistical co-occurrence and correct the bias of global correlations.",
"Thus, it is necessary to explore their joint effects to further boost the performance.",
"In order to prove the effectiveness of the proposed LDGN in alleviating the tail labels problem, we evaluate the performance of LDGN by propensity scored precision at k (PSP@k), which is calcu-smart",
"calcu-smart grid digitalization power grid visionary acceptation model energy management users engaged producing energy consuming systems aware energy demand response network dynamically varying prices natural question smart grid reality distribution grid updated assume positive answer question lower layers medium low voltage change previous analyzed samples dutch distribution grid previous considered evolutions synthetic topologies modeled studies complex systems technological domains previous paper extra step defining methodology evolving existing physical power grid smart grid model laying foundations decision support system utilities governmental organizations evolution strategies apply dutch distribution grid",
"Figure 2 shows the results of LDGN and LSAN on three datasets.",
"As shown in Figure",
"2(a), Figure",
"2(b) and Figure",
"2(c), the proposed LDGN performs better in predicting tail labels than the LSAN model (the best baseline) on three datasets.",
"Specifically, on the RCV1 dataset, LDGN achieves 0.97% and 1.35% absolute improvement in term of P SP @3 and P SP @5 compared with LSAN .",
"On the AAPD dataset, the P SP @ k increased by at least 0.63% up to 0.90%.",
"And on the EUR-Lex dataset, LDGN achieves 1.94%, 3.64% and 4.93% absolute improvement on P SP @1 , 3 , 5 compared with LSAN .",
"The reason for the improvement in the EUR-Lex dataset is more obvious is that the semantic interactions learning is more useful to capture related information in the case of a large number of labels.",
"The results prove that LDGN can effectively alleviate the problem of predicting tail labels.",
"To further verify the effectiveness of our label attention module and dual graph neural network in LDGN, we present a typical case and visualize the attention weights on the document words and the similarity scores between label-specific components.",
"We show a test sample from original AAPD dataset, and the document belongs to three categories, Physics and Society' ( physics.soc ), Computers and Society' ( cs.cy ) and Computa-tional Engineering, Finance, and Science' ( cs.ce ).",
"Visualization of Attention We can observe from the Figure 4 that different labels focus on different parts in the document text, and each label has its own concerned words.",
"For example, Figure 5: The Visualization of two adjacency matrices of dual GNN.",
"the more important parts in the physics.soc' category are digitalization power grid', energy man-agement'.",
"And the words that the cs.ce' category focuses on are consuming systems', vary-ing prices', laying foundations', lower ' and etc.",
"For class cs.cy' , the concerned words are sam-ples dutch distribution', evolutions' and topolo-gies'.",
"The corresponding related words of the three categories can reflect the semantics of the categories.",
"a clearer view of the importance of our dual graph-based interactions learning module, we display two",
"heatmaps in Figure 5 to visualize the partial graph structure of dual GCN.",
"The edge weights shown in the heatmaps are obtained by global label co-occurrence and local dynamic relations (i.e., computed by Equation 5), respectively.",
"As presented in heatmaps, different relations between categories are captured by dual GCN.",
"In global statistical relations, cs.cy' is highly linked with physics.soc' and wrong label nlin.ao' , while the true label cs.ce' is isolated.",
"And in local dynamic relations, cs.cy' is more related to cs.ce' , and the correlations between wrong label nlin.ao' and true labels are reduced.",
"This demonstrates that local dynamic relations can capture the latent relations that do not appear in global relations, and correct the bias of global correlations.",
"Multi-label Text Classification The existing methods for MLTC mainly focus on learning enhanced document representation (Liu et al., 2017) and modeling label dependency (Nam et al., 2017; Yang et al., 2018; Tsai and Lee, 2019) to improve the classification performance.",
"With the wide application of neural network methods for text representation, some innovative models have been developed for this task, which include traditional deep learning methods and Seq2Seq based methods.",
"Liu et al. (2017) employed CNNs and dynamic pooling to learn the text representation for MLTC.",
"However, they treated all words equally and cannot explored the informative words in documents.",
"The Seq2Seq methods, such as MLC2Seq (Nam et al., 2017) and SGM (Yang et al., 2018), employed a RNN to encode the input text and an attention based RNN decoder to generate predicted labels sequentially.",
"Although they used attention mechanism to capture the informative words in text content, these models cannot distinguish similar labels well.",
"There is a big reason that most of them neglect the semantic connections between labels and document, and learn the same document representations for different labels.",
"Recently, some studies (You et al., 2019; Xiao et al., 2019; Du et al., 2019) have used attention mechanism to explore the interactions between words and labels, and learned a label-specific document representation for classification.",
"These methods have obtained promising results in MLTC, which shows the importance of exploring semantic connections.",
"However, they did not further study the interactions between label-specific semantic components which can help to predict low-frequency labels.",
"To handle these issues, a common way to explore the semantic interactions between label-specific parts in document, is to utilize the label graph based on statistical co-occurrences.",
"MLC with Label Graph In order to capture the deep correlations of labels in a graph structure, many researches in image classification apply node embedding and graph neural network models to the task of multi-label image classification.",
"Lee et al. (2018) incorporated knowledge graphs for describing the relationships between labels, and the information propagation can model the dependencies between seen and unseen labels for multi-label zero-shot learning.",
"Chen et al. (2019) learned label representations with prior label correlation matrix in GCN, and mapped the label representations to inter-dependent classifiers, which achieved superior performance.",
"However, there were few related approaches for multi-label classification of text.",
"Zhang et al. (2018) established an explicit label co-occurrence graph to explore label embedding in low-dimension latent space.",
"Furthermore, the statistical label correlations obtained by training data are incomplete and noisy.",
"And the co-occurrence patterns between label pairs may form a long-tail distribution.",
"Thus, our goal is to find a way to explore the complete and adaptive interactions among label-specific semantic components more accurately.",
"In this paper, we propose a graph-based network LDGN to capture the semantic interactions related to corresponding labels, which jointly exploits global statistical patterns and local dynamic relations to derive complete and adaptive dependencies between different label-specific semantic parts.",
"We first exploit multi-label attention to extract the label-specific semantic components from documents.",
"Then, we employ GCN to learn component representations using label co-occurrences to guide the information propagation among components.",
"After that, we use the learned component representations to compute the adjacency graph dynamically and re-learn with GCN based on the reconstruction graph.",
"Extensive experiments conducted on three public datasets show that the proposed LDGN model outperforms other state-of-the-art models on multi-label text classification task and also demonstrates much higher effectiveness to alleviate the tail label problem.",
"In the future, we will improve the proposed model in effi-ciency, for example we could construct a dynamic graph for a few samples rather than each sample.",
"And besides, we will explore more information about labels for MLC classification.",
"We gratefully thank the anonymous reviewers for their insightful comments.",
"This research is supported by the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No.",
"XDC02060400."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"method",
"method",
"objective",
"objective",
"objective",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"objective",
"method",
"method",
"abstain",
"objective",
"objective",
"other",
"other",
"other"
] |
[
"While neural dependency parsers provide state-of-the-art accuracy for several languages, they still rely on large amounts of costly labeled training data.",
"We demonstrate that in the small data regime, where uncertainty around parameter estimation and model prediction matters the most, Bayesian neural modeling is very effective.",
"In order to overcome the computational and statistical costs of the approximate inference step in this framework, we utilize an ecient sampling procedure via stochastic gradient Langevin dynamics to generate samples from the approximated posterior.",
"Moreover, we show that our Bayesian neural parser can be further improved when integrated into a multi-task parsing and POS tagging framework, designed to minimize task interference via an adversarial procedure.",
"When trained and tested on 6 languages with less than 5 training instances, our parser consistently outperforms the strong Bi LSTM baseline (Kiper-wasser and Goldberg, 2016).",
"Compared with the B i AFFINE parser (Dozat et al., 2017) our model achieves an improvement of up to 3% for Vietnamese and Irish, while our multi-task model achieves an improvement of up to 9% across five languages: Farsi, Russian, Turkish, Vietnamese, and Irish.",
"Dependency parsing is essential for many Natural Language Processing (NLP) tasks (Levy and Goldberg, 2014; Angeli et al., 2015; Toutanova et al., 2016; Hadiwinoto and Ng, 2017; Marcheggiani et al., 2017).",
"While earlier work on dependency parsing required careful feature engineering (Mc-Donald et al., 2005b; Koo et al., 2008), this has become less of a concern in recent years with the emergence of deep neural networks (Kiperwasser and Goldberg, 2016; Dozat et al., 2017).",
"Nonetheless, an accurate parser still requires a large amount of labeled data for training, which is costly to obtain, while the lack of data often causes overfitting and poor generalization.",
"Several approaches for parsing in the small data regime have been proposed.",
"These include augmenting input data with pretrained embedding (Dozat et al., 2017; Che et al., 2018), leveraging unannotated data via semi-supervised learning (Corro and Titov, 2018), predicting based on a pool of high probability trees (Niculae et al., 2018; Keith et al., 2018), and transferring annotation or model across languages (Agic et al., 2016; Lacroix et al., 2016; Rasooli and Collins, 2017).",
"Despite the empirical success of these approaches, an inherent problem still holds: The maximum likelihood parameter estimation ( MLE ) in deep neural networks (DNNs) introduces statistical challenges at both estimation (training), due to the risk of overfitting, and at test time as the model ignores the uncertainty around the estimated parameters.",
"When training data is small these challenges are more pronounced.",
"The Bayesian paradigm provides a statistical framework which addresses both challenges by",
"(i) including prior knowledge to guide the learning in the absence of sucient data, and",
"(ii) predicting under the full posterior distribution of model parameters which oers the desired degree of uncertainty by exploring the posterior space during inference.",
"However, this solution comes with a high computational cost, specifically in DNNs, and is often replaced by regularization techniques such as dropout (Srivastava et al., 2014) as well as ensemble learning and prediction averaging (Liu et al., 2018; Che et al., 2018).",
"Bayesian neural networks (BNNs) have attracted some attention (Welling and Teh, 2011; Hernndez-Lobato and Adams, 2015; Li et al., 2016; Gong et al., 2018).",
"Yet, its current application to NLP is limited to language modeling (Gan et al., 2017), and BNNs have not been developed for structured prediction tasks such as dependency parsing.",
"In this paper we aim to close this gap and propose the first BNN for dependency parsing (BNNP).",
"To address the costs of inference step, we apply an ecient sampling procedure via stochastic gradient Langevin dynamics ( SGLD ) (Welling and Teh, 2011).",
"At training, samples from the posterior distribution of the parser parameters are generated via controlled noise injection to the maximum a posteriori ( MAP ) gradient update.",
"The generated samples are then used during the inference step to create multiple viable parses based on which the final dependency parse is generated.",
"Another means of directing a model towards more accurate predictions in the small data regime is via multi-task learning (Caruana, 1997).",
"Such a framework allows models for multiple tasks to reinforce each other towards more accurate joint solutions, which is particularly useful when training data is scarce.",
"We hence present a multi-task framework where our BNNP is integrated with a POS tagger through an adversarial procedure designed to guide the two models towards improved joint solutions.",
"Our experiments with monolingual and delexicalized cross-lingual parsing using the Universal Dependency treebank (Zeman et al., 2017) demonstrate the eectiveness of our approach.",
"Particularly, our BNNP consistently outperforms the single task Bi LSTM baseline (Kiperwasser and Goldberg, 2016), while outperforming the B i AFFINE parser (Dozat et al., 2017) by up to 3% on Vietnamese and Irish.",
"Additionally, our multi-task model achieves an improvement of up to 9% over the B i AFFINE parser for five low-resource languages: Farsi, Russian, Turkish, Vietnamese, and Irish.",
"Our parser extends the graph-based Bi LSTM parser of Kiperwasser and Goldberg (2016).",
"We briefly review their work and its notable extensions, and then discuss our extension of their architecture and the limitations of MLE training.",
"where ( . ) denotes trainable embedding, denotes pretrained external embedding, and denotes concatenation.",
"The context dependent representation of each word is generated through a Bi LSTM , = LSTM ( 1 ) LSTM ( ) .",
"Here, and denote the LSTM s' parameters.",
"The resulting sequence of -dimensional vectors, 1 , is then used for computing the ARC-SCORE matrix ( ) where cell ARC-SCORE ( , ) is computed as, tanh ( ( ) head specialized + ( ) modifier specialized + ) , indicating the likelihood of having the arc from to .",
"Here, is (1 ) , is ( 1) , and ( . ) is a ( ) matrix used for creating specialized versions of words in 1 .",
"The -dimensional specialized representation of as a head or a modifier is generated via multiplication with ( ) and ( ) , respectively.",
"Following a first-order dependence assumption between the arcs, the parsing is done via a dynamic programming solution that finds the dependency parse such that, =argmax ( SCORE ( )= ( , ) ARC-SCORE ( , ) ) .",
"Next, given , for each arc ( , ) the LABEL-SCORE ( , , ) is computed as: ( tanh ( ( ) head specialized + ( ) modifier specialized + ) ) [ ] , where is a dependency relation type, { } =1 , is ( ) , is ( 1) , and ( . ) is ( ) .",
"The label for each dependency arc ( , ) is then chosen by a max operation.",
"Dozat et al. (2017) proposed an extension of the Bi LSTM parser by replacing the non-linear transformation of ARC-SCORE and LABEL-SCORE with a linear transformation ( B i AFFINE ).",
"This was further extended by Che et al. (2018) who utilized contextualized word embeddings (Peters et al., 2018).",
"Both extensions showed success in dependency parsing shared tasks (Zeman et al., 2017, 2018).",
"architecture extends the Bi LSTM model with an additional Bi LSTM layer and input signals.",
"While our architecture is not the core contribution of this paper, we aim to implement our BNNP on a strong architecture.",
"In 5.3 we demonstrate the contribution of these additions to our final results.",
"In our BNNP, each word is represented as, = ( ) ( ) ( ) ( ) , where, similar to the B i AFFINE parser, ( ) is a character-level representation of the word , generated by a Bi LSTM : ( ) = LSTM ( 1 | | ) LSTM ( | | 1 ) .",
"= LSTM ( 1 ) LSTM ( ) ,",
"where is the encoding generated by the 1 layer of the Bi LSTM s.",
"Learning For structured prediction, the objective function is to maximize the margin between the gold structure and the other structures.",
"To achieve this, the cost augmented hinge loss is used, where the fixed margin is replaced by a cost function, COST ( , ) , sensitive to the local mistakes (here, each local mistake has a constant cost of 1 ), and computed as: =max ( 0 , COST ( , )+ SCORE ( ) SCORE ( ) ) For label prediction, hinge loss with = 1 is used (denoted by ).",
"We refer to the parser loss as: = + .",
"Beyond MLE Training The point-estimate of DNN parameters is computationally ecient, but ignores the uncertainty around model parameters during learning.",
"This results in an overconfidence over model predictions during the inference phase.",
"A common generic practice to incorporate a degree of uncertainty is to consider an ensemble of models.",
"Indeed, for dependency parsing ensemble learning has shown to improve accuracy (Surdeanu and Manning, 2010; Kuncoro et al., 2016; Che et al., 2018).",
"However, ensembles are computationally demanding due to the large number of participating models.",
"The de-facto approach to overcome this 2 Stacking Bi LSTM s is believed to be helpful.",
"In our case, the addition of a third layer led to overfitting.",
"has been to randomly perturb the structure of the network for each training instance by switching o connections between the nodes, a practice known as dropConnect (Wan et al., 2013), or eliminating the nodes entirely, which is known as dropout (Sri-vastava et al., 2014).",
"Hinton et al. (2012) demonstrated that dropping out a node with probability from a neural model at training, and scaling the same node's output at test time (in the fully connected ), performs very well in practice.",
"While this scaling trick avoids the need for running multiple models at test time, it lacks theoretical guarantees when applied to DNNs.",
"Gal and Ghahramani (2016) showed that dropout can be casted as integrating out the posterior of model parameters, and in this regard is a form of Bayesian approximation.",
"Still, dropout is not as eective when applied to the small data regime (Goodfellow et al., 2016).",
"We hence propose a fully Bayesian approach.",
"The Bayesian paradigm is appealing in its ability to capture uncertainty around parameters, avoid overfitting, and incorporate prior knowledge to compensate for the lack of sucient training data in resource lean settings.",
"Dependency parsing may be a natural candidate for Bayesian modeling since a typical sentence can have multiple viable parses corresponding to dierent grammatical ambiguities, and considering these possibilities have shown to improve predictive accuracy (Niculae et al., 2018; Keith et al., 2018).",
"In the same spirit, Bayesian inference can be interpreted as a statistically grounded means to generate this pool of high quality trees.",
"In detail, Bayesian inference for parsing computes the predictive distribution ( | , ) 3 of the dependency parse tree for an input sentence given the training data by posterior averaging : ( | , ) = ( | , ) ( | ) , where ( | ) ( | ) ( ) denotes the posterior distribution of the BNNP parameters given observations .",
"Here the prior distribution ( ) is a standard Gaussian (0 , ) , and, by assuming i.i.d training datapoints, the likelihood term 3 When ( | , ) is replaced by the unnormalized score from the first-order arc-factored model, ( | , ) can be interpreted as the expected score assigned to a dependency tree under the full posterior distribution of model parameters.",
"is ( | ) = ( | ) where = ( , ) is a training instance.",
"Therefore, ecient generation of posterior samples is the key for applying the Bayesian treatment.",
"Since the posterior is intractable in DNNs, additional machineries such as Markov Chain Monte Carlo ( MCMC ) (Robert and Casella, 2010) are required for sampling.",
"In this paper we explore an ecient class of Stochastic Gradient Markov Chain Monte Carlo ( SG-MCMC ) methods, designed for NNs by adjusting stochastic gradient descent with properly scaled Gaussian noise to generate posterior samples.",
"We first consider the Stochastic Gradient Langevin Dynamics ( SGLD ) (Welling and Teh, 2011) sampler to generate posterior samples.",
"Intuitively, this mechanism guides SGD to explore the posterior distribution rather than finding the maximum a posteriori ( MAP ) solution.",
"More concretely, at each time step we first compute a stochastic estimate of the gradient for the negative log-posterior distribution on a mini-batch : ( log ( )+ | | | | log ( | ) ) , and then update the BNNP parameters by SGD with noise injection, +1 2 + , (0 , ) (2) Theoretically, the distribution of converges to the true posterior ( | ) when = , and 0 (see Teh et al. (2016) for SGLD convergence rate proofs and analysis).",
"In practice, the learning rate = ( + ) , starting high and decaying, plays a crucial role in eqn.",
"2 dynamics.",
"Intuitively, while the variance of the injected noise is , the variance of the gradient is of ( 2 ) .",
"In the earlier stages of the optimization, the 2 term is dominant as gradients are of greater impact and the learning rate is high, simulating stochastic gradient descent.",
"As the optimization proceeds, the impact of the gradient shrinks and the injected Gaussian noise becomes the dominant term, transitioning from SGD to Langevin Monte Carlo (Neal, 2011).",
"Since 0 might cause slow mixing in practice, we also set a minimum threshold for the learning rate to allow mixing at later stages.",
"SGLD and SGD have the same time complexity as sampling occurs along with parameter updates with a negligible overhead of drawing s.",
"As SGLD relies on a single learning rate along all dimensions of , it has the same potential ine-ciencies of SGD for optimizing functions with different curvatures along each dimension.",
"Inspired by more advanced optimization techniques that adjust to the geometry of parameter space, Li et al. (2016) proposed preconditioning of SGLD (similar to RMSprop (Tieleman and Hinton, 2012)) which scales the gradients with respect to the weighted average of the gradients along each dimension, such that a unified learning rate is sucient.",
"The proposed preconditioner diagonal matrix is computed as follows, ( ) = diag (1 ( 1 + ( ))) , ( ) = ( 1 ) + (1 ) , where , are element-wise matrix product and division, and , are hyperparameters controlling the extremes of the curvature, and weighting average of previous and current gradients, respectively.",
"The updated preconditioned eqn.",
"2 is, +1 2 ( ) + , (0 , ( )) .",
"Here, replacing the preconditioner matrix ( ) with an identity matrix will recover the SGLD update formulation.",
"In case of a diagonal preconditioner, it has the same time complexity of SGLD , although the constant factors in the complexity figure can be slightly higher due to additional computations involved in ( ) and ( ) .",
"While SGLD is designed to explore dierent modes of the posterior distribution, we found (5.4) that in practice we still require dropout to facilitate modes exploration via random perturbation of the model structure.",
"The generated posterior samples can be used to compute the approximate expectation of eqn.",
"1 by scoring all plausible parses via explicit model averaging.",
"As this solution is not computationally feasible, we use the sampled parameters and follow a procedure that minimizes the Bayes risk ( MBR ) (Goodman, 1996).",
"Given each sampled parameter, first we generate the maximum scoring parse using the arc-factored decomposition (McDonald et al., 2005a) and dynamic programming (Eisner, 1996).",
"This can be done concurrently for all samples, resulting in a running time identical to the non-Bayesian approach.",
"For each labelled edge, we replace its score in the ARC-SCORE matrix with its occurrence count in the collection of sampled trees and infer the final tree using counts as scores.",
"The predicted structure is then passed to the label predictor, which assigns labels to the edges (2).",
"This decoding approach, while selecting the global structure with the highest probability under the approximate posterior, could potentially allow for additional corrections of the highest scoring tree in the pool of samples (Shareghi et al., 2015; Kuncoro et al., 2016).",
"Multi-task learning (Caruana, 1997) lends itself as a natural choice for low-resource settings as it aims at leveraging the commonality between tasks to improve their performance in the absence of sucient amount of training data.",
"This framework hence naturally complements Bayesian modeling in dealing with the challenges of the small data regime.",
"We couple our BNNP with POS tagging due to the strong connection between the two tasks (Rush et al., 2010) and the availability of joint training data in several languages (Zeman et al., 2017).",
"While multi-task frameworks have shown success in some areas (Reichart et al., 2008; Finkel and Manning, 2009; Liu et al., 2016; Malca and Reichart, 2018), in our case we found that our two tasks interfered with each other and degraded the parser performance (see similar findings for other tasks at Sgaard and Goldberg (2016); Plank and Alonso (2017)).",
"To minimize task interference, an approach shown eective (Ganin and Lempitsky, 2015; Kim et al., 2017; Chen et al., 2017; ZareMoodi and Haf-fari, 2018) is to implicitly guide the update signals during training via an adversarial procedure that avoids shared parameters contamination.",
"We adapt this idea to our multi-task learning.",
"Architecture Given an input word, , the score for a POS tag is computed as tanh ( + )[ ] , where { } =1 , is ( ) , and is ( 1) .",
"To train the POS tagger, the cross-entropy loss is Character BiLSTM Input Representation 1 st BiLSTM 2 nd BiLSTM i s s i Shared BiLSTM D i sc .",
"used (denoted by ).",
"Taking s as input, the shared Bi LSTM strictly encodes a task-agnostic dimensional representation of the input (explained in the next subsection), = LSTM ( 1 ) LSTM ( ) .",
"The input to each task is then defined as the concatenation of task-agnostic and task-specific representations.",
"We considered a basic architecture where both tasks have their separate parameters (identical number of layers, dimensions, etc.) and they only share the shared Bi LSTM (denoted as MULTI TASK in Table 1).",
"4 Adversarial Training The shared Bi LSTM output, , is meant to encode a task-agnostic representation of the input word .",
"In order to enforce this criterion, we apply an adversarial training procedure.",
"The shared representation, , is forwarded to a task discriminator (denoted as Disc. in Figure 1) through a gradient reversal layer.",
"The task discriminator predicts the task identity for each word in the input via a linear transformation ( + )[ ] followed by a softmax, where { parser,tagger } and and are (2 ) and (2 1) , respectively.",
"To train the discriminator, a sum of the cross-entropy losses for 1 is used (denoted by ).",
"As the parameters of the discriminators are being updated, the gradient signals to minimize the discriminator's error are backpropagated with an opposite sign to the shared Bi LSTM layer, which adversarially encourages the shared Bi LSTM to fool the discriminator.",
"Our training schedule alternates between the two modes, in one mode optimizing the shared and task-specific parameters based on and (in 4 We also tried layer-wise placements of tasks (Sgaard and Goldberg, 2016; Hashimoto et al., 2017) and the results were slightly worse.",
"Details are omitted for space reason.",
"We experiment with mono-lingual and cross-lingual dependency parsing using the treebanks of the CoNLL 2017 shared task on parsing to Universal Dependencies (UD) (Zeman et al., 2017).",
"5 5.1 Experimental Setup We use the UDPipe baseline outputs for segmentation and POS tagging of the raw test data (released along with the raw test data).",
"While segmentation and POS errors substantially impact the quality of the final predicted parse, their exploration is beyond our scope.",
"Our evaluation metric is Labeled Attachment Score (LAS), computed by the shared task evaluation script.",
"Statistical significance, when mentioned, is computed over 20 runs, via the Kolmogorov-Smirnov test (Reimers and Gurevych, 2017; Dror et al., 2018) with = 0 .",
"01 .",
"Mono-Lingual Experiments We experiment with Persian (fa), Korean (ko), Russian (ru), Turkish (tr), Vietnamese (vi) and Irish (ga), all with less than 5 training sentences (Table 1).",
"For comparison we report the scores published by the top system of the CoNLL 2017 shared task, B i AFFINE (Dozat et al., 2017), noting the following dierences between their input and output and ours.",
"The B i AFFINE parser:",
"(i) uses the UDPipe outputs for segmentation but corrects POS errors before parsing,",
"(ii) includes both language specific and universal POS tags in the input layer while we only include the universal POS tags, and",
"(iii) applies post-process correction for non-projective languages.",
"Cross-Lingual Experiments We use the English (en), French (fr), Russian (ru), and Persian (fa) datasets of the UD treebanks as our training and test data, with the addition of 3 languages for which we did not have any training data: Kurmanji (kmr), Buriat (bxr), and Northern Sami (sme).",
"We also report the results for each language, where the combination of training datasets for the rest of the languages (marked as + ) was used for training.",
"The cross-lingual experiments are done on delexicalized 5 For train and dev sets (1-1983), test set (1-2184), and pretrained embeddings (1-1989) see: https: //lindat.mff.cuni.cz/repository/xmlui/handle/11234/{1-1983,1-2184,1-1989} parses after replacing the words with their Universal POS tags.",
"Models and Baselines Single-Task We consider the following models: BASE is the Bi LSTM model of Kiperwasser and Goldberg (2016); BASE ++ extends BASE by having 2 layers of Bi LSTM s and using 1 layer of character level Bi LSTM (2); + SHARED includes an additional Bi LSTM (dashed box in Figure 1).",
"We included this to provide a fair comparison (in terms of the number of parameters) with the multi-task experiments but we apply a higher dropout rate to resolve overfitting; ENSEMBLE denotes a collection of 9 + SHARED models each randomly initialized (Reimers and Gurevych, 2017) and trained for MLE with MBR (3.3) applied for prediction; MAP denotes the + SHARED model optimized for MAP instead of MLE ; + SGLD denotes Bayesian learning and prediction (3.1), and + PRECOND denotes preconditioned SGLD (3.2).",
"+ SGLD and + PRECOND are applied to the + SHARED model.",
"Models and Baselines Multi-Task We consider the same models as in the single-task setup, with the following changes.",
"BASEMT is the variant of MULTI TASK architecture where the shared Bi LSTM is removed and all components except for decoders are shared between the two tasks.",
"+ SHARED and + ADV denote the results without and with the adversarial training (4), respectively.",
"Training is done for 330 epochs with early stopping, using successive 6 mini-batches of 5000 words, 7 with drop-out rate 0 .",
"33 unless stated otherwise.",
"SGLD is only applied for learning the parameters of the parser, with learning rate decaying from 0 .",
"01 to 0 .",
"0001 ( = 0 . 5 ), and the preconditioned matrix hyperparameters are = 10 5 , = 0 .",
"99 .",
"The rest of the parameters are updated using Adam (Kingma and Ba, 2014) with default DyNet parameters (Neu-big et al., 2017).",
"The size and selection of the samples used in Bayesian inference are tuned on Irish (see 6.1 of the main paper).",
"In all experiments, training stops based on loss convergence of a single model on the dev set.",
"We also tried GRU s and our results were substantially worse than LSTM s.",
"units, 100 -D word embeddings, 25 -D POS embeddings, and dropout rate of 0 .",
"33 ; BASE ++ extends BASE by having 2 layers of Bi LSTM s with 200 -D hidden units, 100 -D external word embeddings, and using 1 layer of character level Bi LSTM with 200 D hidden units to generate 100 -D character embeddings (2); + SHARED includes an additional Bi LSTM with 200 -D hidden units in BASE ++ to provide a fair comparison (in terms of the number of parameters) with the multi-task experiments but we apply a higher dropout rate of 0 .",
"66 to resolve overfitting.",
"Mono-Lingual Parsing Table 1 summarizes the training data statistics and the LAS results of the various models, with bold-font marking cases where a model outperforms the B i AFFINE parser.",
"1. SINGLE TASK Comparison between BASE ++ and + SHARED reveals that the additional Bi LSTM in the + SHARED does not improve the results.",
"As expected, model averaging in ENSEMBLE improved the results over the + SHARED model.",
"Additionally, the MAP results show slight improvements compared with + SHARED .",
"Both of these findings suggest that the quality of predictions is likely to improve if some notion of uncertainty (via averaging, or prior insertion) is included.",
"For Bayesian solutions, both + SGLD and + PRECOND consistently (and in a statistically significant manner) outperform the ENSEMBLE models and the strong Bi LSTM baselines on all languages, while PRECOND outperforms B i AFFINE by up to 3% on two languages (vi, and ga).",
"The consistency of improvements as we incorporate richer means of capturing the uncertainty suggests that these gains are independent of our specific choice of neural architecture and that they might also hold for future parsing architectures.",
"2. MULTI TASK As explained earlier (4), comparing the parser performances under + SHARED models in single and multi task settings indicates that parser quality was degraded with the inclusion of a POS tagging task, possibly due to interference between the tasks.",
"This issue was alleviated with the inclusion of task discriminator and the adversarial training.",
"The + ADV model consistently improves the results (in a statistically significant manner), while outperforming the SINGLE TASK (+ SHARED ) by 2 .",
"2% for Persian (fa), and 5 .",
"2% for Irish (ga).",
"This is an indicator that low-resource lan-fa ko ru tr vi ga TRAIN (sen.) 4798 4400 3850 3685 1400 566 Labeled Attachment Score (LAS) B i AFFINE 86.31 82.49 83.65 62.79 42.13 70.06 BASE 80.97 64.76 75.45 52.64 39.36 62.50 SINGLE TASK BASE ++ 83.15 76.70 79.44 58.92 41.03 66.58 + SHARED 83.11 76.32 79.62 58.72 41.10 66.42 ENSEMBLE 84.12 77.28 80.17 59.36 41.89 68.12 MAP 83.59 76.61 79.78 59.13 41.33 66.95 + SGLD 84.98 78.91 80.86 60.53 43.12 69.51 + PRECOND 85.76 79.83 81.9 61.71 44.52 70.91 MULTI TASK BASEMT 81.54 75.78 78.61 57.12 40.08 60.51 + SHARED 81.03 75.08 78.64 57.09 40.04 65.24 + ADV 84.93 78.12 81.23 60.66 43.11 69.89 ENSEMBLE 85.01 78.31 81.56 60.92 43.3 70.00 MAP 85.02 78.26 81.59 60.72 43.25 70.36 + SGLD 85.89 79.50 83.06 62.13 44.57 72.52 + PRECOND 86.75 80.97 84.51 63.24 45.96 74.12 Table 1: Mono-Lingual results.",
"guages could potentially benefit more from multitask learning.",
"Other variants of + ADV are also reported in Table 1 and similar patterns to the single task setting are observed.",
"Next, we test the eectiveness of Bayesian learning and inference using = 9 thinned samples (5.4).",
"For all languages in the multi-task setting the gain from including + SGLD in + ADV is statistically significant.",
"However, the gain decays as the amount of training data increases: from 3 .",
"8% on Irish (ga) to 1 .",
"1% on Persian (fa).",
"This verifies our expectation that Bayesian learning helps parameter estimation and prediction in the small data regime.",
"Compared to + SGLD , + PRECOND provides further improvements (all are statistically significant) of 2 .",
"2% on Irish and 1% on Persian, showing a similar generic negative correlation with data size.",
"To summarize our mono-lingual results, the Bayesian framework shows its merit in improving predictive quality, and multi-task learning introduces new and informative signal via a related task which allows for better parameter estimation.",
"The integration of a Bayesian parser into a multitask learning framework outperforms the B i AFFINE parser on up to 5 languages, while we observe decreasing improvements as the data size grows.",
"Also, since our Bayesian approach builds on the BASE model, it is subject to the shortcomings of this model: For instance, on Korean (ko) the dier-ence between BASE and B i AFFINE is too large to be closed without further adjustment of model design, sen. Labeled Attachment Score (LAS) en fr ru fa kmr bxr sme en 13 81 .",
"Cross-Lingual Delexicalized Parsing We tested our most successful mono-lingual model, MULTI TASK (+ PRECOND ), in the cross-lingual setup against an ensemble of = 9 randomly initialized SINGLE TASK ( BASE ++) models.",
"Table 2 summarizes the training data size (rounded up) and the LAS results of both models.",
"As expected, the ensemble models perform best when the training and test data come from the same language or language family (as in Buryat to Russian and Kurmanji to Persian).",
"Showing a similar pattern to mono-lingual parsing, in all cases the performance gain of the Bayesian MULTI TASK (+ PRECOND ) setting, reported as under-text in the table, consistently outperforming the ENSEMBLE (in a statistically significant manner) and the impact decays with the training set size.",
"We aim to answer two questions:",
"(i) how many of the generated samples are used during inference?",
"and",
"(ii) how much the success of SGLD depends on other sources of noise during training?",
"1. UTILIZINGSAMPLES Once the training stops, dierent strategies could be applied to utilize the collection of posterior samples to recompute ARC-SCORE matrix (see 3.3).",
"An extreme option is including all samples (denoted by All ), another option is to only use the last sample (denoted by Last ).",
"An intermediate alternative is to choose samples with interval , an approach known as Thinning (Gan et al., 2017).",
"terestingly, when we compared Thinning with a variant (denoted by in Figure 2a) that just uses the last 5 samples , it appears that Thinning which includes earlier samples is still superior.",
"This is likely to be an indicator that a larger (and more diverse) set of trees can potentially improve MBR decoding.",
"In practice, we use Thinned samples.",
"2. DROPOUT , SGLD , ANDMINI-BATCH We would next like to better understand the interdependence between SGLD and two existing sources of noise: dropout and mini-batch size.",
"Results are reported in Figure 2b.",
"In all four stacked bars, the top segment demonstrates the gain when including + SGLD .",
"The right-most stacked bar illustrates the performance of MULTI TASK (+ADV) without (bottom part of the bar) and with + SGLD .",
"This is our reference and in the following three experiments, we make a single change to this model while keeping everything else fixed.",
"In the left-most stacked bar, we exclude dropout.",
"This significantly hurts the performance, an indication that the random perturbation of model by dropout can provide a positive complimentary effect for better exploration of the posterior modes via SGLD .",
"Comparing the top parts of all stacked bars (representing the gain with + SGLD ) reveals that this gain is at its maximum when dropout is switched o.",
"An interpretation can be that in the absence of dropout, SGLD becomes the main component for countering overfitting.",
"In the second stacked bar from the right, we remove the noise caused by mini-batching and use the entire dataset as one batch.",
"This slightly improves the results, although this improvement is not statistically significant.",
"Based on eqn.",
"2, full training data gradient updates cause 0 in fewer steps.",
"Consequently, the learning rate (which controls the variance of the noise) is at a higher value when the gradient is close to zero, giving a better chance for the sampler to mix via .",
"In the second stacked bar from left, parameters are updated for each training sentence.",
"In this setting the learning rate, , is consumed much faster and can potentially reach zero on the MAP solution, discarding the eect of the injected noise and + SGLD .",
"Hence, the gain for including SGLD is at its minimum.",
"We speculate that adjusting the decaying speed of the learning rate (which we left untouched) could allow for this extreme case to perform better.",
"We proposed a Bayesian framework for neural dependency parsing (BNNP).",
"We employ ecient SG-MCMC approximate inference mechanisms through stochastic gradient Langevin dynamics to generate posterior samples during optimization.",
"The collected samples are then used via a minimum Bayes risk parsing algorithm to generate the final parse tree.",
"In mono-lingual and cross-lingual experiments in the small data regime, where Bayesian learning in expected to be most eective, our BNNP consistently outperformed the strong Bi LSTM baselines.",
"Moreover, when integrating the BNNP into a multi-task learning framework, utilized to prevent task interference, we outperformed the B i AFFINE parser (best system of the CoNLL17 shared task) on 5 low-resource languages by up to 9% LAS.",
"In future work, we intend to investigate other types of priors over the network parameters (e.g., sparse priors (Lobacheva et al., 2017)).",
"We would also like to explicitly quantify the uncertainty captured in our framework under dierent sampling strategies or MCMC-SG methods (e.g., similar to McClure and Kriegeskorte (2016); Teye et al. (2018)).",
"This work is supported by the ERC Consolidator Grant LEXICAL (648909).",
"The first author would like to thank Costanza Conforti, Victor Prokhorov, and Gamal Crichton for their comments on the presentation of this work.",
"The authors would like to thank the anonymous reviewers for their helpful suggestions."
] | [
"abstain",
"objective",
"method",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Few-shot learning has drawn researchers' attention to overcome the problem of data scarcity.",
"Recently, large pre-trained language models have shown great performance in few-shot learning for various downstream tasks, such as question answering and machine translation.",
"Nevertheless, little exploration has been made to achieve few-shot learning for the fact-checking task.",
"However, fact-checking is an important problem, especially when the amount of information online is growing exponentially every day.",
"In this paper, we propose a new way of utilizing the powerful transfer learning ability of a language model via a perplexity score.",
"The most notable strength of our methodology lies in its capability in few-shot learning.",
"With only two training samples, our methodology can already outperform the Major Class baseline by more than an absolute 10% on the F1-Macro metric across multiple datasets.",
"Through experiments, we empirically verify the plausibility of the rather surprising usage of the perplexity score in the context of fact-checking and highlight the strength of our few-shot methodology by comparing it to strong fine-tuning-based baseline models.",
"Moreover, we construct and publicly release two new fact-checking datasets related to COVID-19.",
"Few-shot learning is being actively explored to overcome the heavy dependence on large-scale labeled data that serves as a crucial bottleneck to machine learning models.",
"Recently, researchers have explored few-shot learning that leverages the powerful transfer learning ability of pre-trained large language models (LMs) in various NLP tasks.",
"Petroni et al. demonstrated that an LM serves as a good zero-shot learner on the question-answering task due to its encoded commonsense knowledge.",
"Equal contribution.",
"Going further, Brown et al. illustrated the impressive potential of LMs as strong zero-shot and few-shot learners across translation, commonsense reasoning and natural language inference (NLI).",
"However, little or no exploration has been made on few-shot learning in the fact-checking domain, which is a timely and important task in which data-scarcity is particularly problematic.",
"Previous works have proposed different ways of leveraging LMs to conduct zeroor few-shot learning.",
"One common approach is to query the LM for the missing token (i.e., answer) for the zero-shot question-answering task (Petroni et al., 2019; Brown et al., 2020) by transforming questions into a form of statement.",
"Another approach is to adopt an in-context learning approach where the input context of the LM is carefully crafted to control the output.",
"For example, a natural language task instruction (e.g., Translate English to French:) or training sample (e.g., sea otter => loutre de mer) is provided as the context for zero-shot/few shot translation (Brown et al., 2020).",
"1972",
"Leveraging LMs as a knowledge base, zero-shot learner or a few-shot learner has been gaining popularity within the NLP field.",
"It was discovered that large pre-trained LMs can store factual knowledge in their parameters (Petroni et al., 2019; Roberts et al., 2020; Madotto et al., 2020), and that this stored knowledge can help LM to be good at zero-shot and few-shot learning in various NLP tasks, such as question answering, summarization, textual entailment, translation and commonsense reasoning (Brown et al., 2020).",
"For the task of fact-checking, Lewis et al. and Lee et al. attempted to 1973 leverage such LMs.",
"evidence-conditioned LMs.",
"Fact-checking is the task of verifying a claim based on its corresponding evidence, and one of its most important objectives is to correctly model the relationship between the given claim and evidence.",
"We hypothesize that a perplexity score from evidence-conditioned LMs is helpful for such purpose since perplexity measures the likelihood of a given sentence with reference to previously encountered text (i.e., given the evidence prefix and the LM's training corpus).",
"Therefore, this paper attempts to investigate this hypothesis and proposes a novel perplexity-based few-shot learning methodology for fact-checking.",
"Through experimental analysis, we empirically demonstrate the effectiveness of our proposed methodology in few-shot learning , and we compare it to strong fine-tuning-based baselines.",
"Moreover, we compare different LMs (BERT and GPT2) in different sizes, from small to XL, to unveil interesting insights on which model is more suitable for this task.",
"Finally, we discuss the potential application of evidence-conditioned perplexity for ranking candidate claims in priority order of the most urgent to be fact-checked to the least.",
"Our contribution is three-fold: First, we propose an effective way of leveraging the perplexity score in the context of fact-checking.",
"We would like to emphasize that our approach is a simple yet effective way of leveraging large pre-trained LMs.",
"Second, we demonstrate the effectiveness of the perplexity-based approach in the few-shot setting by outperforming strong fine-tuned baselines, such as BERT (Devlin et al., 2019), RoBERTA (Liu et al., 2019), and XLNet (Yang et al., 2019), by an absolute 10 20% F1-Macro scores in the 2 -, 10 -, and 50 -shot settings.",
"Third, we construct two new fact-checking datasets related to COVID-19, which has caused the problem of an infodemic.",
"Fact-checking is a complex task that is split into many sub-tasks.",
"First, credible sources of evidence need to be identified.",
"Second, a set of relevant evidence needs to be retrieved from the identified credible sources.",
"Last, veracity classification of claims can be made based on the retrieved evidence.",
"Some works have focused on full-pipeline systems that handle all sub-tasks and provide real working web prototypes (Karadzhov et al., 2017; Popat et al., 2017, 2018a; Hasanain et al., 2019; Tokala et al., 2019).",
"These works use the entire Web as a knowledge source to confirm or reject a claim taking the credibility or reliability of the Web source into account.",
"Another common setting for fact-checking is to assume a credible evidence source is given (e.g., Wikipedia), and to focus on the evidence retrieval and veracity verification steps only.",
"FEVER (Thorne et al., 2018) and Tabfact (Chen et al., 2019) are two large datasets for this setting, and there are many follow-up studies working on them (Yoneda et al., 2018a; Nie et al., 2019; Zhong et al., 2020; Herzig et al., 2020; Zhou et al., 2019; Hidey et al., 2020).",
"Our work follows the latter group of works and uses the following setting: given a tuple consisting of claims and relevant evidence, we classify the final fact-checking veracity label of the given claim (Popat et al., 2018b; Ma et al., 2019; Wu et al., 2020).",
"By doing this, we focus on the methodology for the veracity classification task without worrying about the propagated errors from earlier modules, such as source credibility profiling and evidence retrieval.",
"4 Methodology 4.1 Task definition In this work, we define our task to be: Given a {claim, evidence} pair, determine the veracity of a claim against the evidence i.e., Supported vs. Unsupported claims.",
"However, they mainly use the model to replace the evidence retriever of the fact-checking pipeline, and they still require training of final veracity classifier.",
"Our work, in contrast, focuses on the few-shot ability of LMs for veracity classification .",
"In this section, we conduct a preliminary investigation to validate the potential of our hypothesis that the perplexity score from an evidence-conditioned LM can provide a signal for claims unsupported by evidence.",
"For our exploration, we first collect a small set of Supported and Unsupported claims that can be verified based on the training corpus of the target LM (namely, Wikipedia which is used in the training of many pre-trained LMs).",
"Then, we compare the perplexity scores between them.",
"To recap, perplexity is a commonly used metric for measuring the performance of LMs.",
"It is de-fined as the inverse of the probability of the test set normalized by the number of words: P P L ( X ) = n (cid:118)(cid:117)(cid:117)(cid:116) n (cid:89) i =1 1 p ( x i | x 0 , . . . , x i 1 ) .",
"Another way of interpreting perplexity is as a measure of the likelihood of a given test sentence with reference to the training corpus.",
"From Table 1, we can observe that Unsupported claims on average have higher perplexity than Supported claims.",
"For example, Supported claim Washing hands prevents the spread of diseases,\" has a perplexity value of 96.74, whereas the Unsupported claim All dogs speak English fluently,\" has a much higher perplexity value of 328.23.",
"We believe these observations support our hypothesis.",
"Thus, we proceed to build our approach based on this hypothesis (Section 4), and conduct experiments (Section 5) and analysis (Section 6) to verify the validity of our perplexity-based fact-checking approach.",
"The label Supported is assigned when there exists relevant evidence that supports the claim, while Unsupported is assigned when there does not exist any supporting evidence.",
"Note that this existence of refuting evidence also places a claim into this latter category.",
"Although previous works have shown that an LM can encode knowledge from its training corpus, there are a few limitations to solely relying on the pre-trained weights.",
"First, we cannot easily check and guarantee whether the LM has already seen the evidence that is required for verification, and the LM would definitely not have seen the evidence related to newly emerging events after the LM pretraining.",
"For instance, the event of COVID-19 emerged after the release of the GPT2 pre-trained model.",
"Second, although LMs have shown surprising ability in memorizing some knowledge, they are not perfect, as pointed out by previous works (Poerner et al., 2019; Lee et al., 2020).",
"Therefore, we propose to incorporate evidence into the perplexity calculation by using it as a prefix of the claim.",
"There are two popular kinds of LMs:",
"i) unidirectional LMs that are trained with the conventional next token prediction task, and",
"ii) masked LMs that are trained with the masked token prediction token, resulting in a bidirectional LM.",
"We briefly describe how to obtain the evidence-conditioned perplexity for both types of LM: Unidirectional Language Model Perplexity For a unidirectional LM, first we concatenate the evidence and claim to obtain the input to the LM: X = { x e 0 , . . . , x e E , x c 0 . . . , x c C } , where E and C denote the number of evidence tokens and claim tokens, respectively.",
"Then, we obtain the evidence-conditioned perplexity by P P L ( X )= C (cid:118)(cid:117)(cid:117)(cid:116) C (cid:89) i =1 1 p ( x c i | x e 0 , . . . , x e E , . . . , x c i 1 ) .",
"Note that the evidence tokens are used to condition the perplexity, yet their conditional probabilities p ( x e i | x e 0 , . . . , x e i 1 ) do not contribute to the P P L ( X ) , which is the main difference from Eq.",
"(1).",
"Masked Language Model Pseudo Perplexity A masked LM (MLM) is a type of LM, first proposed by Devlin et al., which is trained with the masked token prediction task instead of the next token prediction task.",
"The perplexity score from the MLM does not mean the same as the conventional perplexity score.",
"Therefore, we use the pseudo perplexity score proposed by Salazar et al., which is computed by summing all the log probabilities obtained by sequentially masking each token in the input sentence.",
"Once we obtain the evidence-conditioned perplexity scores for each claim, we find the best threshold th that separates Supported claims from Unsupported claims.",
"We would like to emphasize that our approach does not involve any parameter update of the LM.",
"We only do inference with the LM, and leverage the few-shot samples as the validation set to find the optimal single threshold parameter, th .",
"Throughout our paper, we refer to our methodology as the perplexity-based classifier.",
"Given a set of a claim and evidence, if the evidence-conditioned perplexity score is less than the threshold (i.e. < th ), the claim is Supported by the evidence; otherwise it is Unsupported .",
"All datasets used in the experiment are in English, and we report the data statistics in Table 2.",
"Covid19-Scientific A new test set is constructed by collecting COVID-19-related myths and scientific truths labelled by reliable sources like Med-icalNewsToday, the Centers for Disease Control and Prevention (CDC), and the World Health Organization (WHO).",
"It consists of the most com-1 Authors from HKUST obtained performed all experiments with the existing datasets and compiled and released the new datasets.",
"mon scientific or medical myths about COVID-19, which must be debunked correctly to ensure the safety of the public (e.g., drinking a bleach solution will prevent you from getting COVID-19).",
"The set contains 172 claims with labels ( Supported , Unsupported ) obtained from the aforementioned reliable sources.",
"Note that myths that are unverifiable from current findings are also assigned the Unsupported label.",
"2 The gold evidence is obtained from the winning system of the Kaggle Covid-19 challenge (Su et al., 2020).",
"This system retrieves the evidence from 59,000 scholarly articles about COVID-19, SARS-CoV-2, and other related corona viruses.",
"3 Covid19-Social Another test set is constructed by crawling 340 COVID-19-related claims fact-checked by journalists from a website called Politi-fact.com.",
"Unlike the Covid19-Scientific dataset, it contains non-scientific and socially-related claims, such as For the coronavirus, the death rate in Texas, per capita of 29 million people, we're one of the lowest in the country.",
"Such claims may not be life-and-death matters, but they still have the potential to bring negative sociopolitical effects.",
"Originally, these claims are labelled into six classes {pants-fire, false, barely-true, half-true, mostly-true, true}.",
"However, we use it in a binary setup for consistency with the Covid19-Scientific setup by assigning the first three classes to Unsupported and the rest to Supported .",
"For evidence of each claim, we follow the Al-hindi et al. to obtain the human-written evi-dence/justification available on the Politifact.com website, from which the claims are crawled.",
"FEVER (Thorne et al., 2018) Fact Extraction and Verification (FEVER) is a publicly released large-scale dataset generated by altering sentences extracted from Wikipedia to promote research on fact-checking systems.",
"Since our few-shot experiment requires little data, we only leverage the Paper Test Dataset from the FEVER workshop (https://fever.ai/) resource page to speed up our experiments.",
"This dataset originally has three classes, {Sup-port, Refute, Not Enough Info}.",
"Support\" is sim-2 Disclaimer: The data were collected during the early outbreak of COVID-19 (March 2020). The veracity may have been updated as the time evolved, but we release the original version of the dataset for future comparison 3 https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge 1975 ilar to our Supported label, where a claim can be supported by given evidence. Refute\" is where a claim is \"refuted\" by given evidence, whereas Not Enough Info\" means not enough evidence is available for verification. For our FEVER experiment, we treat Refute and Not Enough Info as one class. This is because we believe that in a real scenario both cases are Unsupported claims that need attention. To provide further detail, the Support\" class is mapped into Supported , and Refute/Not Enough Info is mapped into Unsupported to match our task setting.",
"Note that to balance the dataset, we obtain half the data from Refute and the other half from Not Enough Info.",
"Note that the gold evidence is included in the dataset released by Thorne et al. 5.2 Models Ours We consider one unidirectional LM and one masked LM for our proposed perplexity-based methodology.",
"PPL GPT2-B Our single-parameter classifier based on perplexity from GPT2-base (Radford et al., 2019) (unidirectional LM) PPLBERT-B Our single-parameter classifier based on perplexity from BERT-base (Devlin et al., 2019) (Masked LM) Baselines We finetune various pre-trained Transformer-based (Vaswani et al., 2017) models to build our baseline classifiers, which is a common approach used to achieve many state-of-the-art results in the literature.",
"Major Class A simple majority classifier which always assigns the majority class of the training set to all samples.",
"We provide this for reference because some of our dataset classes are imbalanced.",
"BERT-B ft A fine-tuned BERT-base model with a feed-forward classifier trained on top.",
"BERT-L ft A fine-tuned BERT-large model with a feed-forward classifier trained on top.",
"RoBERTa ft A fine-tuned RoBERTa-base model (Liu et al., 2019) with a feed-forward classifier trained on top.",
"XLNet ft A fine-tuned XLNet-base model (Yang et al., 2019) with a feed-forward classifier trained on top.",
"Few-Shot Data Setup Given ND as the size of the dataset D , we do an n -shot experiment with n samples from D as a validation set for our perplexity-based approach or as a training set for the fine-tuning approach, and the remainder ( ND n ) as a test set.",
"To give a concrete example, in the 2-shot experiment using the Covid19-Social dataset (340 samples), we have two training samples and 338 test samples.",
"We use three seeds to split the datasets and train the models.",
"For a fair comparison, all the seeds and splits are kept the same across the models.",
"Evaluation We mainly evaluate our experiments using accuracy and the Macro-F1 metric.",
"Since some of our datasets are imbalanced (the ratio of Supported to Unsupported in Table 2), we prioritize the overall Macro-F1 score over accuracy.",
"Training Details In our methodology, no gradient update is required.",
"Thus, there are no training details such as learning rate, batch size or max-epoch to report.",
"We simply use a small validation set (size of 2,10,50) to find the best-performing hyper-parameter value for the threshold th from the range of { 0 1000 } .",
"None of the samples from the test set were seen in threshold searching.",
"For baseline fine-tuned classifiers, we do a grid-search to find the best-performing parameters, as follows: We use a learning rate of 5e 6 for training the BERT-B ft , RoBERTa ft , and XLNet ft models, while BERT-L ft is trained with a rate of 2e 5 .",
"All models share the same batch size of 32 and maximum input sequence length of 128 .",
"We also use early-stopping with patience 3 with a maximum of 10 training epochs.",
"Each experiment is run on an Nvidia GTX 1080 Ti, and each epoch takes 2 15 seconds depending on the number of the training samples n .",
"Note that for reproducibility, we will also publicly release the code.",
"Table 3 reports the few-shot performance of the fine-tuning-based baselines and our perplexity-based classifiers.",
"Usage of Perplexity We can observe that our perplexity-based classifiers, especially PPL GPT2-B , outperform all Major Class baselines across all tasks in all settings.",
"For instance, PPL GPT2-B outperforms the Major Class by a great margin of 16% and 36 .",
"8% on accuracy and F1-Macro scores, 1976 Shot # Models Fine-tuning?",
"Few-shot Comparison to Fine-tuned Baselines Except for the Covid-Social accuracy in the 50 shot setting, both of our proposed classifiers ( PPL GPT2-B , PPLBERT-B ) outperform the fine-tuned baseline classifiers across all tasks in all of the 2 -, 10 and 50 -shot settings.",
"For the 2-shot and 10-shot settings, many of the baseline classifiers un-derperform the Major Class baseline regardless of the task.",
"for the Covid-Scientific dataset in the 50-shot setting.",
"This supports our hypothesis that evidence-conditioned perplexity scores are capable of providing signals regarding the veracity of the given claim.",
"Intuitively, we can consider the perplexity score to be mimicking the role of the logits from a classifier, and we are trying to find the best threshold to map this pseudo-logit-like perplexity score into a veracity label.",
"The classification performance of our perplexity-based approach increases as the shot size increases.",
"As the shot size increases from 2 to 50 , PPL GPT2-B shows an average gain of 8 .",
"19 2 .",
"74% and 7 .",
"64 1 .",
"61% in accuracy and Macro-F1 score, respectively, across all tasks.",
"This is because a greater number of data samples means more anchor perplexity points for threshold searching, and thus, a better threshold to determine the veracity of claims.",
"This implies their failure to learn anything from the fine-tuning step with a limited number of samples.",
"Only after 50-shot do these baselines start to learn and outperform the Major Class baselines.",
"This is not surprising, since the pre-trained models are known to perform well in a full-shot scenario, but they do not guarantee good performance when they are shown few samples.",
"In contrast, our perplexity-based classifiers manage to perform fairly well, even in the 2-shot setting, because our classifier is a single parameter (i.e., threshold value), which requires no complex learning or optimization.",
"We would like to emphasize that ours consistently outperform the strong Transformer-based baselines across all dataset on the F1-Macro metric by absolute 10 20% .",
"We argue that these results demonstrate the strength of our approach in low-resource few-shot settings.",
"BERT vs. GPT2 for Perplexity Scores Most of the time, PPL GPT2-B outperforms PPLBERT-B .",
"For instance, in the 50-shot setting for the FEVER dataset, performance differences are 10.04% and 7.76% for accuracy and F1-Macro 1977 LM Type Parameter Size Covid-Scientific Covid-Social FEVER Acc F1 Macro Acc F1 Macro Acc F1 Macro PPL GPT2-B 117M 74.73% 73.83% 73.63% 59.91% 67.48% 64.70% PPL GPT2-M 345M 75.11% 73.93% 75.43% 60.23% 69.02% 66.39% PPL GPT2-L 774M 76.19% 75.53% 73.29% 59.30% 71.66% 69.99% PPL GPT2-XL 1558M 78.23% 77.63% 72.80% 59.88% 73.67% 71.71% Table 4: Effect of LM parameter size on the performance of proposed perplexity-based approach in 50-shot setting.",
"Template-based Data Negation We create our negated dataset by replacing all the auxiliary verbs (e.g., is, can) with their corresponding negated forms (e.g., is not, can not), and vice versa.",
"We apply this approach to the Covid-Scientific dataset and obtain a new version that contains {original-1978",
"scores respectively.",
"Based on this observation, we can speculate that the perplexity from a unidirectional LM is more suitable for our proposed method than from a masked LM.",
"This is most likely because the BERT perplexity score is only an estimation based on the pseudo-perplexity proposed by Salazar et al. 6 Analysis and Discussion In this section, we conduct multiple analysis to further evaluate and understand aspects of our perplexity-based approach.",
"Generally, scaling the model size helps to also improve the model performance, because more parameters mean a stronger learning capability during fine-tuning or training.",
"Also, Roberts et al. have demonstrated that increasing the parameter size allows for more knowledge to be packed into the LM's parameters.",
"Therefore, we experiment with the model size to see if such findings also extend to our proposed methodology.",
"The following model sizes of GPT2 are investigated: base ( PPL GPT2-B ), medium ( PPL GPT2-M ), large ( PPL GPT2-L ) and xl ( PPL GPT2-XL ).",
"Results are reported in Table 4.",
"As expected, we can observe the trend that the performance increases with parameter size.",
"For instance, PPL GPT2-XL is the best performing compared to the other, smaller, models for Covid-Scientific and FEVER, achieving the new state-of-the-art few-shot results by gaining absolute 4% on Covid-Scientific and 2% on FEVERfor accuracy/F1-Macro.",
"We carry out an ablation study on the effect of evidence-conditioning in respect of the final perplexity scores and the corresponding final classification performance.",
"In Table 5, we can observe the performance drops when evidence-conditioning is ablated the biggest drop is 15% on F1-Macro for the FEVER task in the 50-shot setting.",
"This implies that the perplexity score is assigned in relation to the context of the provided evidence.",
"In fact-checking, negation is one of the most diffi-cult challenges, and many state-of-the-art models are brittle against it.",
"Thorne and Vlachos show that the winning fact-checking systems from the FEVER workshop are brittle against negations, experiencing a huge performance drop when given negated test sets, up to absolute 29% in accuracy.",
"Therefore, we also conduct analysis regarding the negation handling of our proposed methods by augmenting our dataset with negated examples.",
"6.5 Potential Application: Ranking of Candidate Claims for Fact-Checking Here, we discuss another way of leveraging the evidence-conditioned perplexity score.",
"It can be 1979 used for prioritizing false-claim candidates for human fact-checkers, instead of doing hard prediction on the veracity of the given claims.",
"sentence ( S original ), negated-sentence ( S negated )} pairs.",
"Note that the evidence is kept the same, but the veracity label of S original is negated (i.e., Supported is negated to Unsupported and vice versa).",
"To illustrate with an example, S original = {claim: 5g helps covid-19 spread., evidence: evidence 1 , label: Unsupported } is negated into S negated = {claim: 5g does not help covid-19 spread., evidence: evidence 1 , label: Supported }.",
"Q1: Can the LM distinguish negation?",
"We use the new augmented Covid-Scientific dataset to investigate whether the LM manages to differentiate between the original-sentence S original and negated-sentence S negated .",
"The average of the absolute difference between the perplexities assigned to S original and S negated is 122 and the maximum absolute difference value is 2800 .",
"Q2: Performance on negation-augmented dataset?",
"We evaluate the performance of the perplexity-based classifier ( PPL GPT2-B ) on the negation-augmented\" Covid-Scientific dataset in reference to its original. Unsurprisingly, PPL GPT2-B does experience a drop in performance of 13.77% and 13.40% in accuracy and F1-Macro, respectively. However, it still outperforms the fine-tuned RoBERTa ft baseline, the best performing baseline in the 2-shot setting, as shown in Table 6. 6.4 Comparison with existing FEVER System in Few-shot Setting For all three tasks, we compare our perplexity models against different fine-tune baselines in Section 5.4. Unlike two newly proposed COVID-19-related tasks, FEVER is a well-established task studied by many existing works. In order to understand how our perplexity-based method compares against the literature, we conduct an additional experiment with the publicly available system from the runner-up team of the FEVER workshop, HexaF (Yoneda et al., 2018b). We fine-tune HexaF's veracity classification modules in few-shot settings. In the 2-shot settting, HexaF shows accuracy of 49.99% and F1-Macro score of 33.33%. In the 50-shot settting, it shows accuracy of 53.53% and F1-Macro score of 49.27%. In general, machine learning models require suffi-cient amounts of training data, and this \"sufficient amount\" normally differs depending on the model being used.",
"However, as demonstrated earlier in our main experimental results (Section 5.4), 2 50 samples are insufficient data to properly train one of the winning fact-checking systems.",
"By ranking the claims-to-be-fact-checked in descending order of perplexity, we can increase the chance that the first k claims checked by a human fact-checker are Unsupported false claims.",
"This will be benefi-cial since fact-checkers can efficiently allocate their time and resources on fact-checking claims that are more likely to be false and harmful to society.",
"In Figure 2, we compare the precision at the top-k (P@k) between the perplexity-based ranking and random-score-based ranking.",
"We can view P@k to measure how many Unsupported pieces are prioritized in the first k of the ranked claims.",
"Across all datasets, perplexity-based ranking (blue marks) exhibits higher precision scores over random-score-based ranking (orange marks).",
"Moreover, for both Covid-Scientific and Covid-Social, our P@k is over 80% for all k values.",
"In this work, we conduct the FEVER experiments in a binary set-up to keep all the experimental settings consistent across all three datasets.",
"However, the original FEVER task has three classes Support, Refute, and Not Enough Info (NEI).",
"Since the distinction between NEI and Refute cases is also an important problem, it would be important future work to extend our binary-class setting to the three-class setting.",
"Moreover, we believe our method can easily be augmented into other existing approaches, for instance, leveraging the perplexity score in the final step of the FEVER fact-checkers as additional input.",
"It would be a useful future direction to explore and discover the most effective way of incorporating the perplexity-based approach into other existing fact-checking systems.",
"In this paper, we propose a novel way of leveraging the perplexity score from LMs for the few-shot fact-checking task.",
"Through experimental analysis from an ablation study to the discussion of potential applications, we further explore and evaluate the capability of the perplexity score to act as an indicator of unsupported claims.",
"We hope our proposed approach encourages future research to continue developing LM-based methodologies as well as the few-shot approach for fact-checking.",
"By doing so, our community can move towards a data-efficient approach that is not constrained by the requirement of a large labeled dataset."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"abstain",
"objective",
"objective",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"other",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"other",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain"
] |
[
"The automatic detection of satire vs. regular news is relevant for downstream applications (for instance, knowledge base population) and to improve the understanding of linguistic characteristics of satire.",
"Recent approaches build upon corpora which have been labeled automatically based on article sources.",
"We hypothesize that this encourages the models to learn characteristics for different publication sources ( e.g. , The Onion vs. The Guardian) rather than characteristics of satire, leading to poor generalization performance to unseen publication sources.",
"We therefore propose a novel model for satire detection with an adversarial component to control for the confounding variable of publication source.",
"On a large novel data set collected from German news (which we make available to the research community), we observe comparable satire classification performance and, as desired, a considerable drop in publication classification performance with adversarial training.",
"Our analysis shows that the adversarial component is crucial for the model to learn to pay attention to linguistic properties of satire.",
"Satire is a form of art used to criticize in an entertaining manner ( cf. Sulzer, 1771, p. 995ff.).",
"It makes use of different stylistic devices, e.g. , humor, irony, sarcasm, exaggerations, parody or caricature (Knoche, 1982; Colletta, 2009).",
"The occurrence of harsh, offensive or banal and funny words is typical (Golbert, 1962; Brummack, 1971).",
"Satirical news are written with the aim of mimicking regular news in diction.",
"In contrast to misinformation and disinformation (Thorne and Vlachos, 2018), it does not have the intention of fooling the readers into actually believing something wrong in order to manipulate their opinion.",
"The task of satire detection is to automatically distinguish satirical news from regular news.",
"This is relevant, for instance, for downstream applications, such that satirical articles can be ignored in knowledge base population.",
"Solving this problem computationally is challenging.",
"Even human readers are sometimes not able to precisely recognize satire (Allcott and Gentzkow, 2017).",
"Thus, an automatic system for satire detection is both relevant for downstream applications and could help humans to better understand the characteristics of satire.",
"Previous work mostly builds on top of corpora of news articles which have been labeled automatically based on the publication source ( e.g. , The New York Times articles would be labeled as regular while The Onion articles as satire 1 ).",
"We hypothesize that such distant labeling approach leads to the model mostly representing characteristics of the publishers instead of actual satire.",
"This has two main issues: First, interpretation of the model to obtain a better understanding of concepts of satire would be misleading, and second, generalization of the model to unseen publication sources would be harmed.",
"We propose a new model with adversarial training to control for the confounding variable of publication sources, i.e. , we debias the model.",
"Our experiments and analysis show that (1) the satire detection performance stays comparable when the adversarial component is included, and (2) that adversarial training is crucial for the model to pay attention to satire instead of publication characteristics.",
"(3), we publish a large German data set for satire detection which is",
"a) the first data set in German,",
"b) the first data set including publication sources, enabling the experiments at hand, and",
"c) the largest resource for satire detection so far.",
"2 1 https://www.theonion.com/, https://www.nytimes.com/ 2 Data/code: www.ims.uni-stuttgart.de/data/germansatire.",
"Previous work tackled the task of automatic English satire detection with handcrafted features, for instance, the validity of the context of entity mentions (Burfoot and Baldwin, 2009), or the coherence of a story (Goldwasser and Zhang, 2016).",
"Rubin et al. (2016) use distributions of parts-of-speech, sentiment, and exaggerations.",
"In contrast to these approaches, our model uses only word embeddings as input representations.",
"Our work is therefore similar to Yang et al. (2017) and De Sarkar et al. (2018) who also use artificial neural networks to predict if a given text is satirical or regular news.",
"They develop a hierarchical model of convolutional and recurrent layers with attention over paragraphs or sentences.",
"We follow this line of work but our model is not hierarchical and introduces less parameters.",
"We apply attention to words instead of sentences or paragraphs, accounting for the fact that satire might be expressed on a sub-sentence level.",
"Adversarial training is popular to improve the robustness of models.",
"Originally introduced by Goodfellow et al. (2014) as generative adversarial networks with a generative and a discriminative component, Ganin et al. (2016) show that a related concept can also be used for domain adaptation: A domain-adversarial neural network consists of a classifier for the actual class labels and a domain discriminator.",
"The two components share the same feature extractor and are trained in a minimax optimization algorithm with gradient reversal: The sign of the gradient of the domain discriminator is flipped when backpropagating to the feature extractor.",
"Building upon the idea of eliminating domain-specific input representations, Wadsworth et al. (2018) debias input representations for recidivism prediction, or income prediction (Edwards and Storkey, 2016; Beutel et al., 2017; Madras et al., 2018; Zhang et al., 2018).",
"Debiasing mainly focuses on word embeddings, e.g. , to remove gender bias from embeddings (Bolukbasi et al., 2016).",
"Despite previous positive results with adversarial training, a recent study by Elazar and Goldberg (2018) calls for being cautious and not blindly trusting adversarial training for debiasing.",
"We therefore analyze whether it is possible at all to use adversarial training in another setting, namely to control for the confounding variable of publication sources in satire detection (see Section 3.1).",
"The data set used by Yang et al. (2017) and De Sarkar et al. (2018) consists of text from 14 satirical and 6 regular news websites.",
"Although the satire sources in train, validation, and test sets did not overlap, the sources of regular news were not split up according to the different data sets (Yang et al., 2017).",
"We hypothesize that this enables the classifier to learn which articles belong to which publication of regular news and classify everything else as satire, given that one of the most frequent words is the name of the website itself (see Section 4.1).",
"Unfortunately, we cannot analyze this potential limitation since their data set does not contain any information on the publication source 3 .",
"Therefore, we create a new corpus in German (see Section 4.1) including this information and investigate our hypothesis on it.",
"Motivated by our hypothesis in Section 3.1, we propose to consider two different classification problems (satire detection and publication identification) with a shared feature extractor.",
"Figure 1 provides an overview of our model.",
"We propose to train the publication identifier as an adversary.",
"Following De Sarkar et al. (2018), we only use word embeddings and no further handcrafted features to represent the input.",
"We pretrain word embeddings of 300 dimensions on the whole corpus using word2vec (Mikolov et al., 2013).",
"The feature generator f takes the embeddings of the words of each article as input for a bidirectional LSTM (Hochreiter and Schmidhuber, 1997), followed by a self-attention layer as proposed by Lin et al. (2017).",
"We refer to the union of all the parameters of the feature extractor as f in the following.",
"The gray part of Figure 1 shows the model part for our main task satire detection.",
"The satire detector feeds the representation from the feature extractor into a softmax layer and performs a binary classification task (satire: yes or no).",
"Note that, in contrast to De Sarkar et al. (2018), we classify satire solely 3 https://data.mendeley.com/datasets/hx3rzw5dwt/draft?",
"a=377d5571-af17-4e61-bf77-1b77b88316de, v.1, 2017, accessed on 2018-11-23 on the document level, as this is sufficient to analyze the impact of the adversarial component and the influence of the publication source.",
"The second classification branch of our model aims at identifying the publication source of the input.",
"Similar to the satire detector, the publication identifier consists of a single softmax layer which gets the extracted features as an input.",
"It then performs a multi-class classification task since our dataset consists of 15 publication sources (see Table 1).",
"Let f be the parameters of the feature extractors and s and p be the parameters of the satire detector and the publication identifier, respectively.",
"The objective function for satire detection is J s = E ( x,y s ) p data log P f s ( y s , x ) , (1) while the objective for publication identification is J p = E ( x,y p ) p data log P f p ( y p , x ) .",
"Note that the parameters of the feature extractor f are part of both model parts.",
"Since our goal is to control for the confounding variable of publication sources, we train the publication identifier as an adversary: The parameters of the classification part p are updated to optimize the publication identification while the parameters of the shared feature generator f are updated to fool the publication identifier.",
"This leads to the following update equations for the parameters s := s J s s (3) p := p J p p (4) f := f (cid:0) J s f J p f (cid:1) (5) with being the learning rate and being a weight for the reversed gradient that is tuned on the development set.",
"Figure 1 depicts the gradient flow.",
"Dataset.",
"We consider German regular news collected from 4 websites and German satirical news from 11 websites.",
"Table 1 shows statistics and input layer LSTM layer attention layer feature extractor satire detector publication identifier satire?",
"sources of the corpus, consisting of almost 330k articles.",
"The corpus contains articles published between January 1st, 2000 and May 1st, 2018.",
"Each publication has individual typical phrases and different most common words.",
"Among the most common words is typically the name of each publication, e.g. , Der Spiegel has SPIEGEL as fifth and Der Postillon Postillon as third most common word.",
"We did not delete those words to keep the dataset as realistic as possible.",
"We randomly split the data set into training, development (dev) and test (80/10/10 %) with the same label distributions in all sets.",
"Given the comparable large size of the corpus, we opt for using a well-defined test set for reproducability of our experiments in contrast to a crossvalidation setting.",
"Research questions.",
"We discuss two questions.",
"RQ1: How does a decrease in publication classification performance through adversarial training affect the satire classification performance?",
"RQ2: Is adversarial training effective for avoiding that the model pays most attention to the characteristics of publication source rather than actual satire?",
"Baseline.",
"As a baseline model, we train the satire detector part (gray area in Figure 1) on the satire task.",
"Then, we freeze the weights of the feature extractor and train the publication classifier on top of it.",
"In addition, we use a majority baseline model which predicts the most common class.",
"Hyperparameters.",
"We cut the input sentences to a maximum length of 500 words.",
"This enables us to fully represent almost all satire articles and Average Length Publication #Articles Article Sent.",
"capture most of the content of the regular articles while keeping the training time low.",
"As mentioned before, we represent the input words with 300 dimensional embeddings.",
"The feature extractor consists of a biLSTM layer with 300 hidden units in each direction and a self-attention layer with an internal hidden representation of 600.",
"For training, we use Adam (Kingma and Ba, 2014) with an initial learning rate of 0.0001 and a decay rate of 10 6 .",
"We use mini-batch gradient descent training with a batch size of 32 and alternating batches of the two branches of our model.",
"We avoid overfit-ting by early stopping based on the satire F1 score on the development set.",
"Evaluation.",
"For evaluating satire detection, we use precision, recall and F1 score of the satire class.",
"For publication identification, we calculate a weighted macro precision, recall and F1 score, i.e. , a weighted sum of class-specific scores with weights determined by the class distribution.",
"Table 2 (upper part) shows results for different values of , the hyperparameter of adversarial training, on dev.",
"For { 0 .",
"2 , 0 .",
"3 , 0 .",
"5 } , the results are comparably, with = 0 .",
"2 performing best for satire detection.",
"Setting = 0 .",
"7 leads to a performance drop for satire but also to F 1 = 0 for publication classification.",
"Hence, we chose = 0 .",
"2 (the best performing model on satire classification) and = 0 .",
"7 (the worst performing model on publication identification) to investigate RQ1.",
"The bottom part of Table 2 shows the results on test data.",
"The majority baseline fails since the corpus contains more regular than satirical news articles.",
"In comparison to the baseline model without adversarial training (no adv), the model with = 0 .",
"2 achieves a comparable satire classification performance.",
"As expected, the publication identification performance drops, especially the precision declines from 44 .",
"2 % to 30 .",
"8 %.",
"Thus, a model which is punished for identifying publication sources can still learn to identify satire.",
"Similar to the results on dev, the recall of the model with = 0 .",
"7 drops to (nearly) 0 %.",
"In this case, the satire classification performance also drops.",
"This suggests that there are overlapping features (cues) for both satire and publication classification.",
"This indicates that the two tasks cannot be entirely untangled.",
"To address RQ2, we analyze the results and attention weights of the baseline model and our model with adversarial training.",
"The baseline model (no adv) mostly predicts the correct publication for a given article (in 55 . 7 % of the cases).",
"The model with = 0 .",
"2 mainly (in 98 . 2 % of the cases) predicts the most common publication in our corpus (Suddeutsche Zeitung).",
"The model with = 0 .",
"7 shifts the majority of predictions ( 98 . 7 %) to a rare class (namely Eine Zeitung), leading to its bad performance.",
"Example 1 German original: no a dv Erfurt ( dpo ) Es ist eine Organisation , die ausserhalb von Recht und Ordnung agiert , zahlreiche NPD-Funktionare finanziert und in nicht unerheblichem Mae in die Mordserie der sogenannten Zwickauer Zelle verstrickt ist .",
"a dv Erfurt ( dpo ) It is an organization which operates outside of law and order , funds numerous NPD operatives and is to a not inconsiderable extent involved in the series of murders of the so called Zwickauer Zelle .",
"Example 2 German original: no a dv Immerhin wird derzeit der Vorschlag diskutiert , den Familiennachzug nur inklusive Schwiegermuttern zu erlauben , wovon sich die Union einen abschreckenden Effekt erhofft .",
"a dv Immerhin wird derzeit der Vorschlag diskutiert , den Familiennachzug nur inklusive Schwiegermuttern zu erlauben , wovon sich die Union einen abschreckenden Effekt erhofft .",
"English translation: no a dv After all , the proposal to allow family reunion only inclusive mothers-in-law is being discussed , whereof the Union hopes for an off-putting effect .",
"a dv After all , the proposal to allow family reunion only inclusive mothers-in-law is being discussed , whereof the Union hopes for an off-putting effect .",
"a dv Erfurt ( dpo ) Es ist eine Organisation , die ausserhalb von Recht und Ordnung agiert , zahlreiche NPD-Funktionare finanziert und in nicht unerheblichem Mae in die Mordserie der sogenannten Zwickauer Zelle verstrickt ist .",
"English translation: no a dv Erfurt ( dpo ) It is an organization which operates outside of law and order , funds numerous NPD operatives and is to a not inconsiderable extent involved in the series of murders of the so called Zwickauer Zelle .",
"Figure 2 exemplifies the attention weights for a selection of satirical instances.",
"In the first example the baseline model (no adv) focuses on a single word (dpo as a parody of the German newswire dpa) which is unique to the publication the article was picked from (Der Postillon).",
"In comparison the model using adversarial training ( = 0 . 2 ) ignores this word completely and pays attention to die Mordserie (series of murders) instead.",
"In the second example, there are no words unique to a publication and the baseline spreads the attention evenly across all words.",
"In contrast, the model with adversarial training is able to find cues for satire, being humor in this example (family reunion [for refugees] is only allowed including mothers-in-law).",
"We presented evidence that simple neural networks for satire detection learn to recognize characteristics of publication sources rather than satire and proposed a model that uses adversarial training to control for this effect.",
"Our results show a considerable reduction of publication identification performance while the satire detection remains on comparable levels.",
"The adversarial component enables the model to pay attention to linguistic characteristics of satire.",
"Future work could investigate the effect of other potential confounding variables in satire detection, such as the distribution of time and region of the articles.",
"Further, we propose to perform more quantitative but also more qualitative analysis to better understand the behaviour of the two classifier con-figurations in comparison.",
"This work has been partially funded by the German Research Council (DFG), project KL 2869/1-1."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"objective",
"other"
] |
[
"Existing Natural Language Inference (NLI) datasets, while being instrumental in the advancement of Natural Language Understanding (NLU) research, are not related to scientific text.",
"In this paper, we introduce SCINLI, a large dataset for NLI that captures the formality in scientific text and contains 107 , 412 sentence pairs extracted from scholarly papers on NLP and computational linguistics.",
"Given that the text used in scientific literature differs vastly from the text used in everyday language both in terms of vocabulary and sentence structure, our dataset is well suited to serve as a benchmark for the evaluation of scientific NLU models.",
"Our experiments show that SCINLI is harder to classify than the existing NLI datasets.",
"Our best performing model with XLNet achieves a Macro F1 score of only 78 .",
"18% and an accuracy of 78 .",
"23% showing that there is substantial room for improvement.",
"Natural Language Inference (NLI) or Textual Entailment (Bowman et al., 2015) aims at recognizing the semantic relationship between a pair of sentenceswhether the second sentence entails the first sentence, contradicts it, or they are semantically independent.",
"NLI was introduced (Dagan, Glickman, and Magnini, 2005) to facilitate the evaluation of Natural Language Understanding (NLU) that significantly impacts the performance of many NLP tasks such as text summarization, question answering, and commonsense reasoning.",
"To date, several NLI datasets are made available (Bowman et al., 2015; Williams, Nangia, and Bowman, 2018; Marelli et al., 2014; Dagan, Glickman, and Magnini, 2005).",
"These datasets have not only been instrumental for developing and evaluating NLI models but also have been useful in advancing many other NLP areas such as: representation learning (Conneau et al., 2017), transfer learning (Pruksachatkun et al., 2020) and multi-task learning (Liu et al., 2019a).",
"However, despite their usefulness, none of the existing NLI datasets is related to scientific text that is found in research articles.",
"The vocabulary as well as the structure and formality used in sentences in scientific articles are very different from the sentences used in the everyday language.",
"Moreover, the scientific text captured in research papers brings additional challenges and complexities not only in terms of the language and its structure but also the inferences that exist in it which are not available in the existing NLI datasets.",
"For example, a sentence can present the reasoning behind the conclusion made in the previous sentence, while other sentences indicate a contrast or entailment with the preceding sentence.",
"These inferences are crucial for understanding, analyzing, and reasoning over scientific work (Luukkonen, 1992; Kuhn, 2012; Hall, Jurafsky, and Manning, 2008).",
"Therefore, ideally, the scientific language inference models should be evaluated on datasets which capture these inferences and the particularities seen only in scientific text.",
"To this end, we seek to enable deep learning for natural language inference over scientific text by introducing SCINLI, 1 a large dataset of 107 , 412 sentence pairs extracted from scientific papers related to NLP and computational linguistics (CL) and present a comprehensive investigation into the inference types that occur frequently in scientific text.",
"To capture the inference relations which are prevalent in scientific text but are unavailable in the existing NLI datasets, we introduce two new classesC ONTRASTING and REASONING .",
"We create SCINLI by harnessing cues in our data in the form of linking phrases between contiguous sentences, which are indicative of their semantic relations and provide a way to build a labeled dataset using distant supervision (Mintz et al., 2009).",
"Dur-1 https://github.com/msadat3/SciNLI 7399 Class First Sentence Second Sentence Linking Phrase CONTRASTING Essentially, that work examines how a word gains new senses, and how some senses of a word may become deprecated.",
"ing training, we directly utilize these (potentially noisy) sentence pairs, but to ensure a realistic evaluation of the NLI models over scientific text, we manually annotate 6,000 sentence pairs.",
"These clean pairs are used in two splits, 2,000 pairs for development and hyper-parameter tuning and 4,000 pairs for testing.",
"Table 1 shows examples from our dataset corresponding to all of our four classes.",
"We evaluate SCINLI by experimenting with traditional machine learning models using lexical and syntactic features, neural network models BiLSTM, CBOW, CNN, and pre-trained language modelsBERT (Devlin et al., 2019), SciBERT (Beltagy, Lo, and Cohan, 2019), RoBERTa (Liu et al., 2019b), and XLNet (Yang et al., 2019).",
"Our findings suggest that: (1) SCINLI is harder to classify than other datasets for NLI; (2) Lexical features are not enough for a model to achieve satisfactory performance on SCINLI and deep semantic understanding is necessary; (3) SCINLI is well suited for evaluating scientific NLI models; and (4) Our best performing model based on XLNet shows 78 .",
"18% Macro F1 and 78 .",
"23% accuracy illustrating that SCINLI is a challenging new benchmark.",
"To date, several datasets exist for NLI of varying size, number of labels, and degree of difficulty.",
"Dagan, Glickman, and Magnini (2006) introduced the RTE (Recognizing Textual Entailment) dataset of text-hypothesis pairs from the general news domain and considered two labels: entailment or no-entailment (i.e., a hypothesis is true or false given a text).",
"The RTE dataset is paramount in developing and advancing the entailment task.",
"The SICK (Sentences Involving Compositional Knowledge) dataset introduced by Marelli et al. (2014) was created from two existing datasets of image captions and video descriptions.",
"SICK consists of sentence pairs (premise-hypothesis) labeled as: entailment, contradiction, or neutral.",
"Despite being instrumental in the progress of NLI, both RTE and SICK datasets are less suitable for deep learning models due to their small size.",
"In recent years, SNLI (Bowman et al., 2015) and MNLI (Williams, Nangia, and Bowman, 2018) are the most popular datasets for training and evaluating NLI models, in part due to their large size.",
"Similar to SICK, SNLI is derived from an image caption dataset where the captions are used as premises and hypotheses are created by crowd-workers, with each sample being labeled as: entailment, contradiction, or neutral.",
"MNLI is created in a similar fashion to SNLI except that the premises are extracted from sources such as face-to-face conversations, travel guides, and the 9/11 event, to make the task more challenging and suitable for domain adaptation.",
"More recently, Nie et al. (2020) released ANLI which was created in an iterative adversarial manner where human annotators were used as adversaries to provide sentence pairs for which the state-of-the-art models make incorrect predictions.",
"Unlike the datasets specific to classifying the relationships between two sentences, Zellers et al. (2018) combined NLI with commonsense reasoning to introduce a new task of predicting the most likely next sentence from a number of options along with their new dataset 7400 called SWAG which was also created with an adversarial approach.",
"However, different from ANLI, the SWAG approach was automatic.",
"All these datasets have been widely used for evaluating NLU models and many of them appear in different NLU benchmarks such as GLUE (Wang et al., 2018) and SUPERGLUE (Wang et al., 2019).",
"Heretofore, Khot, Sabharwal, and Clark (2018) created the only NLI dataset related to science.",
"Their dataset, SCITAIL was derived from a school level science question-answer corpus.",
"As a result, the text used in SCITAIL is very different from the type of text used in scientific papers.",
"Furthermore, the sentence pairs in SCITAIL are classified into one of two classes: entailment or no-entailment.",
"Thus, SCITAIL does not cover all the inference relationships necessary to understand scientific text.",
"In other lines of research, discourse cues, e.g., linking phrases have been previously used to extract inter-sentence and/or inter-clause semantic relations in discourse parsing (Hobbs, 1978; Webber et al., 1999; Prasad et al., 2008; Jernite, Bowman, and Sontag, 2017; Nie, Bennett, and Goodman, 2019), causal inference (Do, Chan, and Roth, 2011; Radinsky, Davidovich, and Markovitch, 2012; Li et al., 2020; Dunietz, Levin, and Carbonell, 2017) and why-QA (Oh et al., 2013).",
"However, none of the aforementioned bodies of research investigates these relations in scientific text, nor do they exploit the discourse cues to create NLI datasets.",
"Furthermore, discourse parsing studies a broader range of semantic relations, many of which are unrelated to the task of NLI while causal inference and why-QA are limited to only cause-effect relations.",
"In contrast to these tasks, we focus on the semantic relations which are either relevant to the task of NLI or highly frequent in scientific text and leverage linking phrases to create the first ever scientific NLI dataset, which we call SCINLI.",
"In order to better understand the inter-sentence relationships that exist in scientific text, we started the process of creating our dataset by perusing through scientific literature with the intent of finding clues that are revealing of those relationships.",
"We found that to have a coherent structure, authors often use different linking phrases in the beginning of sentences, which is indicative of the relationship with the preceding sentence.",
"For example, to elaborate or make something specific, authors use linking phrases such as In other words or In particular, which indicate that the sentence supports or entails the previous sentence.",
"We also found that some linking phrases are used to indicate additional relationships that are prevalent in scientific text but are not captured in the existing NLI datasets.",
"For instance, when a sentence starts with Therefore or Thus, it indicates that the sentence is presenting a conclusion to the reasoning in the previous sentence.",
"Similarly, the phrase In contrast is used to indicate that the sentence is contrasting what was said in the previous sentence.",
"Therefore, inspired by the framework of discourse coherence theory (Hobbs, 1978; Webber et al., 1999; Prasad et al., 2008) that characterizes the inferences between discourse units, we extend the NLI relations commonly used in prior NLI workentailment, contradiction, and semantic independenceto a set of inference relations that manifest in scientific textcontrasting, reasoning, entailment, and semantic independence (3.1).",
"In order to create a large training set with minimal manual effort, we employ a distant supervision method based on linking phrases that are commonly used in scientific writing and are indicative of the semantic relationship between adjacent sentences (3.2).",
"We avoid the noise incurred by the distant supervision method in our development and test sets by manually annotating these sets (3.3).",
"Our CONTRASTING class is an extension of the CONTRADICTION class in the existing NLI datasets.",
"With this class, in addition to contradicting relations between sentences in a pair, we aim to capture inferences that occur when one sentence mentions a comparison, criticism, juxtaposition, or a limitation of something said in the other sentence.",
"We can see an example of a sentence pair from our CONTRASTING class in Table 1. Here, the authors discuss how their work differs from the other work mentioned in the first sentence thereby making a comparison between the two works.",
"The examples where the first sentence presents the reason, cause, or condition for the result or conclusion made in the second sentence are placed in",
"our REASONING class.",
"In Table 1, we can see an example where the authors mention that they use a multi-reference corpus for evaluation in the second sentence and provide the reason behind it in the first sentence.",
"Our ENTAILMENT class includes the sentence pairs where one sentence generalizes, specifies or has an equivalent meaning with the other sentence.",
"An example from this class can be seen in Table 1. In the example, the second sentence is specifying the proposed direction mentioned in the first sentence making the pair suitable for our ENTAILMENT class.",
"The NEUTRAL class includes the sentence pairs which are semantically independent.",
"We can see an example from this class in Table 1. Here, the first sentence discusses the span of the literature of a particular topic, whereas the second sentence mentions the challenges of handling abstract words in certain tasks.",
"Therefore, the sentences are semantically independent of each other.",
"We construct our training set from scientific papers on NLP and computational linguistics available in the ACL Anthology, published between 2000 and 2019 (Bird et al., 2008; Radev, Muthukrishnan, and Qazvinian, 2009).",
"For extracting textual data from the PDF papers, we use GROBID 2 which is a popular tool for parsing PDF files.",
"We employ the following distant supervision technique on the extracted text to select and label the sentence pairs.",
"sentence they occur in and the respective previous sentence.",
"We then group these linking phrases into three classes based on the type of relationship indicated by each of them.",
"The linking phrases and their assigned class can be seen in Table 2. We select the sentences which start with any of these phrases from each paper and include them in our dataset as hypotheses or second sentences; we include their respective preceding sentences as the premises or first sentences.",
"Each sentence pair is labeled based on the class assigned to the linking phrase present in the second sentence, e.g., if the second sentence starts with In contrast, the sentence pair is labeled as CONTRASTING .",
"After assigning the labels, we delete the linking phrases from the second sentence of each pair to ensure that the models cannot get any clues of the ground truth labels just by looking at them.",
"We also pair a large number of randomly selected sentences for our NEUTRAL class using three approaches: BOTHRAND : Two completely random sentences which do not contain any linking phrases are extracted (both from the same paper) and are paired together.",
"FIRSTRAND : First sentence is random; second sentence is selected randomly from the other three classes (both from the same paper).",
"SECONDRAND : Second sentence is random; first sentence is selected randomly from the other three classes (both from the same paper).",
"Our choice for including the last two approaches above was to make the dataset more challenging.",
"To create our development and test sets, we start by extracting and labeling sentence pairs using the same distant supervision approach described in the previous section from the papers published in 2020 which are available in the ACL anthology.",
"We then manually annotate a subset of these sentence pairs in order to make SCINLI a suitable benchmark for evaluation.",
"The annotation process is completed in two steps, as described below.",
"First, we manually clean the data by filtering out the examples which contain too many mathematical terms and by completing the sentences that are broken due to erroneous PDF extraction by looking at the papers they are from.",
"The second step of the annotation process is conducted in an 7402 #Examples #Words S' parser Dataset Train Dev Test Prem.",
"iterative fashion.",
"In each iteration, we randomly sample a balanced subset from the cleaned set of examples created in the previous step and present the sentence pair from each example to three expert annotators.",
"To avoid a performance ceiling due to lack of context, the annotators are instructed to label each example based only on the two sentences in each example.",
"If the label is not clear from the context available in the two sentences, the instruction is to label them as unclear.",
"The label with the majority of the votes from annotators is then cho-sen as the gold label.",
"No gold label is assigned to the examples ( 5% ) which do not have a majority vote.",
"The examples for which the gold label agrees with the label assigned based on the linking phrase are selected to be in our benchmark evaluation set .",
"We continue the iterations of sampling a balanced set of examples and annotating them until we have at least 1 , 500 examples from each class in the benchmark evaluation set.",
"In total, 8 , 044 sentence pairs 2 , 011 from each class are annotated among which 6 , 904 have an agreement between the gold label and the label assigned based on the linking phrase.",
"Therefore, these 6904 examples are selected to be in the benchmark evaluation set.",
"The percentage of overall agreement and the class-wise agreement between the gold labels and the labels assigned based on the linking phrases are reported in the last column of Table 3. The Fleiss-k score among the annotators is 0 .",
"62 which indicates that the agreement among the annotators is substantial (Landis and Koch, 1977).",
"We randomly select 36% of the papers in our benchmark evaluation set to be in our development set and the rest of the papers are assigned to the test set.",
"This is done based on our decision to have at least 500 samples from each class in the development set and 1000 samples from each class in the test set.",
"Splitting the dataset into train, development and test sets at paper level instead of sentence pair level is done to prevent any information leakage among the data splits caused by sentences from one paper being in more than one split.",
"Because of the differences in the frequency of occurrence of the linking phrases related to different classes, our initial dataset was unbalanced in all three splits.",
"In contrast, the examples in the related datasets such as SNLI (Bowman et al., 2015) and MNLI (Williams, Nangia, and Bowman, 2018) are almost equally distributed across their classes.",
"Therefore, for a fair comparison, we balance our dataset by downsampling the top three most frequent classes to the size of the least frequent class in each split.",
"We can see the number of examples in each class of our SCINLI dataset in Table 3. 3.5 Data Statistics A comparison of key statistics of SCINLI with four related datasets is also shown in Table 3. Dataset Size Although the total size of our dataset is smaller than SNLI and MNLI, SCINLI is still large enough to train and evaluate deep learning based NLI models.",
"Sentence Lengths From Table 3, we can see that the average number of words in both premise and hypothesis is higher in SCINLI compared with the other datasets.",
"This reflects the fact that sentences used in scientific articles tend to be longer than the sentences used in everyday language.",
"Sentence Parses Similar to the related datasets, we parse the sentences in SCINLI by using the Stanford PCFG Parser (3.5.2) (Klein and Manning, 7403 Dataset F1 Acc SICK 63 .",
"2003).",
"We can see that 97% of both first and second sentences have parses with an S' root which is higher than the sentences in SNLI and very competitive with the other datasets.",
"This illustrates that most of our sentences are syntactically complete.",
"Token Overlap We report the average percentage of tokens occurring in hypotheses which overlap with the tokens in their premises (Table 3).",
"We observe that the overlap percentage in SCINLI is much lower compared to the other datasets.",
"Therefore, our dataset has low surface-level lexical patterns revealing the relationship between sentences.",
"We evaluate our dataset by performing three sets of experiments.",
"First, we aim to understand the difficulty level of SCINLI compared to related datasets (4.1).",
"Second, we investigate a lexicalized classifier to test whether simple similarity based features can capture the particularities of our relations and potentially perform well on our dataset (4.2).",
"Third, we experiment with traditional machine learning models, neural network models and transformer based pre-trained language models to establish strong baselines (4.3).",
"To evaluate the difficulty of SCINLI, we compare the performance of a BiLSTM (Hochreiter and Schmidhuber, 1997) based classifier on our dataset and four related datasets: SICK, SNLI, MNLI and SCITAIL .",
"The architecture for this model is similar to the BiLSTM model used by Williams, Nangia, and Bowman (2018).",
"Precisely, the sentence level representations S 1 and S 2 are derived by sending the embedding vectors of the words in each of the sentences in a pair through two separate BiLSTM layers and averaging their hidden states.",
"The context vector S c is calculated using the following equation: S c = [ S 1 , S 2 , S 1 (cid:12) S 2 , S 1 S 2 ] (1) Here, the square brackets denote a concatenation operation of vectors and (cid:12) and are element-wise multiplication and subtraction operators, respectively.",
"S c is sent through a linear layer with Relu activation which is followed by a softmax layer to obtain the final output class.",
"Implementation details We pre-process the input sentences by tokenizing and stemming them using the NLTK tokenizer 3 and Porter stemmer, 4 respectively.",
"Any stemmed token which occurs less than two times in the training set is replaced with an [UNK] token.",
"We use 300D Glove embeddings (Pennington, Socher, and Manning, 2014) to represent the tokens which are allowed to be updated during training.",
"The hidden size for the BiLSTM models is 300.",
"The batch size is set at 64 and the models are trained for 30 epochs where we optimize a cross-entropy loss using Adam optimizer (Kingma and Ba, 2014) with an initial learning rate of 0 .",
"001 .",
"We employ early stopping with a patience size 10 where the Macro F1 score of the development set is used as the stopping criteria.",
"Since SICK does not have a development split, we randomly select 10% of its training examples to be used as the development set.",
"Similarly, since MNLI does not have a publicly available test split, we consider its development split as the test split and we randomly select 10 , 000 samples from the training set to be used as the development set.",
"We can see the performance of this model on different datasets in Table 4. We find the following: SCINLI is more challenging than other related datasets.",
"The BiLSTM model shows a much lower performance for SCINLI compared with the other datasets.",
"These results indicate that the task our dataset presents is more challenging compared to other datasets.",
"As we have seen in Table 3, there is a substantial amount of discrepancy in sentence lengths between SCINLI and the other datasets.",
"The longer sentences in our dataset make it harder for the models to retain long distance dependencies, which result in lower performance.",
"Furthermore, our dataset has low surface-level lexical cues and exhibits complex linguistic patterns that require a model to be less reliant on lexical cues but instead learn deep hidden semantics from text.",
"To verify that the examples in our dataset cannot be classified based only on syntactic and lexical similarities, we explore a simple lexicalized classifier similar to (Bowman et al., 2015).",
"We train a classifier using different combinations of the following features: (1) the second sentence's BLEU (Papineni et al., 2002) score with respect to the first sentence with an n-gram range of 1 to 4; (2) the difference in length between the two sentences in a pair; (3) overlap of all words, just nouns, verbs, adjectives, or adverbs both the actual number and the percentage over possible overlaps; and (4) un-igrams and bigrams from the second sentence as indicator features.",
"We compare the performance of these models on our dataset and the SICK dataset because given the small size of SICK, this is especially suitable for this kind of models.",
"The results can be seen in Table 5. We observe the following: Semantic understanding is required to perform well on SCINLI.",
"The lexicalized model fails to achieve satisfactory results on SCINLI even when all features are combined.",
"Both Macro F1 and accuracy are much lower for our dataset than SICK.",
"This means that without actually understanding the content in the sentences in SCINLI, a model cannot successfully predict their relationship.",
"To establish baselines on our dataset, we consider three types of models: a traditional machine learning model, neural network models, and pre-trained language models.",
"BiLSTM word embeddings are sent through a BiLSTM layer and the hidden states are averaged;",
"(b) CBOW word embedding vectors are summed;",
"(c) CNN 64 convolution filters of widths [3, 5, 9] on the word embeddings are applied, the outputs of which are mean pooled to get a single vector representation from the filters of each of the three widths.",
"These three vectors are then concatenated to get the sentence level representation.",
"For all three models, the sentence level representations are combined as in Eq.",
"1. The obtained representations are first sent through a linear layer with Relu activation followed by softmax for classification (i.e., project them with a weight matrix W R d 4 ).",
"The hyperparameters and other implementation details are the same as for the BiLSTM model described in 4.1.",
"Pre-trained Language Models We fine-tune four transformer based pre-trained language models:",
"(a) BERT (Devlin et al., 2019) pre-trained by masked language modeling (MLM) on BookCor-pus (Zhu et al., 2015) and Wikipedia;",
"(b) SciBERT (Beltagy, Lo, and Cohan, 2019) a variant of BERT pre-trained with a similar procedure but exclusively on scientific text;",
"(c) RoBERTa (Liu et al., 2019b) an extension of BERT which was pre-trained using dynamic masked language modeling, i.e., unlike BERT, different words were masked in each epoch during training.",
"It was also trained for a longer period of time on a larger amount of text compared with BERT; and",
"(d) XLNet (Yang et al., 2019) -pre-trained with a Permutation Language Mod-eling objective instead of MLM.",
"We employ the base variants of each of these models using the hug-gingface transformers library.",
"The input sequence for these models is derived by concatenating the two sentences in a pair with a [SEP] token in between.",
"The [CLS] token is then projected with a weight matrix W R d 4 by sending it as the input to a softmax layer to get the output class.",
"We fine-tune each transformer based model for 5 epochs where we minimize the cross-entropy loss using Adam optimizer (Kingma and Ba, 2014) with an initial learning rate of 2 e 5 .",
"Early stopping with a patience size 2 is employed.",
"The experiments are run on a single Tesla V10 GPU.",
"The transformer based models took approximately four hours to train and the traditional machine learning and neural network models were trained in less than one hour.",
"We run each experiment three times with different random seeds and 7405 CONTRASTINGREASONINGENTAILMENTNEUTRAL Macro F1 Acc Lexicalized 50 .",
"report the average and standard deviation of the F1 scores for each of the four classes, their Macro average and overall accuracy in Table 6. Our findings are discussed below.",
"Transformer based models consistently outperform the traditional models The transformer based models have a very high performance gap with the traditional lexicalized and neural models.",
"Their better performance can be attributed to their superior design for capturing the language semantics and their pre-training on large amounts of texts.",
"More sophisticated pre-training methods lead to better performance RoBERTa and XLNet are created by addressing different limitations of BERT.",
"Both of these models show a better performance than BERT on our dataset.",
"Therefore, the progress made in these two models for better NLU capability is reflected by the results on SCINLI.",
"This proves that SCINLI can be used as an additional resource for tracking the progress of NLU.",
"improve classification performance The results show that SciBERT consistently outperforms BERT on SCINLI.",
"This is because unlike BERT, SciBERT was pre-trained exclusively on scientific text.",
"Hence, it has a better capability to understand the text in the scientific domain.",
"We see that RoBERTa and XLNet show slightly better performances than SciBERT despite being pre-trained on non-scientific text, just like BERT.",
"However, it should be noted that these differences in performance are not statistically significant.",
"Moreover, both RoBERTa and XLNet were created by modifying the training procedure of BERT to further improve the performance, whereas SciBERT is just a plain BERT model pre-trained on scientific text.",
"Even without any modifications to the SCINLI Model F1 Acc BERT xxx BOTH SENTENCES 75 .",
"training procedure, SciBERT is able to perform similarly to these models proving the advantage of pre-training on domain specific text and suitability of our dataset for evaluating scientific NLI models.",
"Research has shown that some stylistic and annotation artifacts are present (only in the hypotheses) in NLI datasets created using crowdsource annotators (Gururangan et al., 2018).",
"To verify that the models do not learn similar spurious patterns in our dataset and predict the labels without understanding the semantic relation between the sentences, we start our analysis by experimenting with only the second sentence as the input to BERT and SciBERT models.",
"Next, to intuitively understand the errors made by the models, we perform a qualitative analysis of the predictions made by the SciBERT model on 100 randomly selected examples from our test set.",
"Finally, we show that the NEUTRAL examples extracted with FIRSTRAND and SECONDRAND approaches are harder to classify than the examples extracted with BOTHRAND .",
"Spuriosity Analysis A comparison between the only second sentence models and the models with both sentences concatenated as the input can be seen in Table 7. Clearly, as we can see from the 7406 First Sentence Second Sentence True Label Predicted Label Multiple studies of BERT concluded that it is considerably overparametrized.",
"table, there is a substantial amount of performance decrease when only the second sentence is used as input.",
"Therefore, in order to perform at the optimal level, both sentences are required for the models to make the correct inference by learning the semantic relation between them.",
"Qualitative Error Analysis We find that a ma-jor reason behind the wrong predictions is a lack of domain specific knowledge.",
"For example, in the first sentence pair in Table 8, without the domain knowledge that the number of parameters in a model affects the performance, one will not be able to make the correct inference.",
"We also find that the model is prone to making mistakes for longer sentences.",
"This issue is exemplified by the second sentence pair in Table 8. Neutral Class Performance Analysis We can see a plot of the accuracy shown by SciBERT on NEUTRAL pairs of our test set extracted with different approaches in Figure 1. Indeed, the examples in which one sentence comes from one of the other three classes are harder to classify.",
"In this paper, we introduced SCINLI, the first natural language inference dataset on scientific text created with our novel data annotation method.",
"We manually annotated a large number of examples to create our benchmark test and development sets.",
"Our experiments suggest that SCINLI is harder to classify than existing NLI datasets and deep semantic understanding is necessary for a model to perform well.",
"We establish strong baselines and show that our dataset can be used as a challenging benchmark to evaluate the progress of NLU models.",
"In the future, we will leverage knowledge bases to improve the models' ability to understand scientific text.",
"We make our code and the SCINLI dataset available to further research in scientific NLI.",
"This research is supported by NSF CAREER award 1802358 and NSF CRI award 1823292 to Cornelia Caragea.",
"Any opinions, findings, and conclusions expressed here are those of the authors and do not necessarily reflect the views of NSF.",
"We thank AWS for computing resources.",
"We also thank our anonymous reviewers for their constructive feedback, which helped improve our paper."
] | [
"abstain",
"method",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"other",
"method",
"abstain",
"result",
"method",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"result",
"result",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Unsupervised machine translation (MT) has recently achieved impressive results with monolingual corpora only.",
"However, it is still challenging to associate source-target sentences in the latent space.",
"As people speak different languages biologically share similar visual systems, the potential of achieving better alignment through visual content is promising yet under-explored in unsupervised multimodal MT (MMT).",
"In this paper, we investigate how to utilize visual content for disambiguation and promoting latent space alignment in unsupervised MMT.",
"Our model employs multimodal back-translation and features pseudo visual pivoting in which we learn a shared multilingual visual-semantic embedding space and incorporate visually-pivoted captioning as additional weak supervision.",
"The experimental results on the widely used Multi30K dataset show that the proposed model significantly improves over the state-of-the-art methods and generalizes well when images are not available at the testing time.",
"Neural machine translation (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014) has achieved near human-level performance (Wu et al., 2016).",
"However, its effectiveness strongly relies on the availability of large-scale parallel corpora.",
"Unfortunately, preparing the parallel data remains a challenge as there are more than 6,500 languages in the world, and recruiting translators with bilingual or multilingual knowledge to cover all those languages is impractical.",
"As a result, developing methods alleviating the need of well-annotated large parallel corpora has recently attracted increasing attention in the community.",
"These methods fall into two broad categories.",
"The first type of methods use a third language as the pivot (Firat et al., 2016; Chen et al., 2017; Cheng et al., 2017; Johnson et al., 2017) to enable zero-resource translation.",
"Although the progress is encouraging, pivoting with a third language still demands bilingual knowledge for collecting large-scale parallel source-pivot and pivot-target corpora.",
"The second type of methods explore unsupervised approaches (Conneau et al., 2018a; Artetxe et al., 2018; Lample et al., 2018a) have recently achieved impressive translation quality.",
"These methods rely only on monolingual data and back-translation (Sennrich et al., 2016a).",
"However, as discussed in (Lample et al., 2018b), the alignment of source-target sentences is uncertain and highly subject to proper initialization.",
"Using visual content for unsupervised MT (Chen et al., 2018; Su et al., 2019) is a promising solution for pivoting and alignment based on its availability and feasibility.",
"Abundant multimodal content in various languages are available online ( e . g . Instagram and YouTube).",
"It is also easier to recruit monolingual annotators to describe an image than to find multilingual translators to translate sentences.",
"Importantly, visual content is eligible to improve the alignments in the language latent spaces since the physical visual perception is similar among people speaking different languages ( e . g . similar blue car for a German and a French).",
"Based on these insights, we propose a novel unsupervised multimodal MT framework incorporating images as pseudo pivots promoting latent space alignment.",
"In addition to use features of visual objects for multimodal back-translation, we align a shared multilingual visual-semantic embedding (VSE) space via leveraging disjoint image-sentence pairs in different languages.",
"As illustrated in Figure 2, for sentences approximately pivoted by similar images (srcimg -tgt), drawing embeddings of corresponding image-sentence pairs closer results in better alignments of semantically equivalent sentences in the language latent spaces.",
"Inspired by back-translation, we further explore another pseudo pivoting strategy, which approximates multilingual sentence pairs ( src -imgtgt ) conditioned on a real image via captioning.",
"Instead of using annotation of images for pivoting as in (Chen et al., 2018), we generate sentences in two languages pivoted on the real image, and then approximately pairing them as weak supervision for training unsupervised MT system.",
"This approach is analogous to a cross-modal version of back-translation.",
"We make the following contributions: (1) Building a unified view of employing visual content for pseudo pivoting.",
"(2) We learn and improve the alignments in the shared multilingual multimodal embedding space for unsupervised MMT with disjoint image-text pairs in different languages.",
"(3) Our model achieves state of the art on Multi30K and generalizes well to the text-only scenario.",
"Neural Machine Translation Typical NMT models are based on the encoder-decoder framework with attention (Bahdanau et al., 2015).",
"Let x = ( x 1 , , x N ) denotes a source sentence and y = ( y 1 , , y M ) denotes a target sentence, where ( x , y ) ( X , Y ) .",
"The encoder-decoder model learns to estimate the following likelihood from the source sentence to the target sentence: p x y ( y | x ) = M (cid:89) i =1 p ( y i | y <i , x ) (1) When a parallel corpus is available, the maximum likelihood estimation (MLE) is usually adopted to optimize the (source to target language) NMT model by minimizing the following loss: L MTx y = E ( x , y ) ( X , Y ) [ log p x y ( y | x )] (2) Among all encoder-decoder models, the Transformer (Vaswani et al., 2017) architecture recently achieves state-of-the-art translation quality.",
"Instead of using recurrent or convolutional operations, it facilitates multi-head self-attention (Lin et al., 2017).",
"In this paper, we choose the Transformer as the underlying architecture for both the translation and the captioning modules.",
"Unsupervised Machine Translation While conventional MT systems rely on the availability of a large parallel corpus, translation with zero-resource (unsupervised MT) (Lample et al., 2018a; Artetxe et al., 2018; Lample et al., 2018b) has drawn increasing research attention.",
"Only monolingual sentences are presented at the training and validation phase, i .",
"e",
"., only x X and y Y are available.",
"Successful unsupervised MT systems share several common principles.",
"First, they require the pre-training step to initialize the model and establish strong monolingual language models properly.",
"For example, XLM (Conneau and Lample, 2019) utilizes the masked language model objective in BERT (Devlin et al., 2019).",
"MASS (Song et al., 2019) utilizes a span-based sequence-to-sequence masking objective for language model pre-training.",
"Second, these systems transform the unsupervised problem into a weakly or self-supervised one by automatically generating pseudo sentence pairs via back-translation (Sennrich et al., 2016a).",
"The idea behind can be analogous to the cycle-consistency objective in CycleGAN (Zhu et al., 2017) for image-image translation with unpaired data.",
"Specifically, let us denote by h ( y ) = ( x 1 , , x N ) the sentence in the source language inferred from y Y such that h ( y ) = argmax p y x ( x | y ) .",
"Similarly, let us denote by g ( x ) = ( y 1 , , y M ) the sentence in the target language inferred from x X such that g ( x ) = argmax p x y ( y | x ) .",
"Then the pseudo parallel sentences ( h ( y ) , y ) and ( x , g ( x )) can be further used to train two two MT models ( X Y and Y X ) by minimizing the following back-translation loss: L BTx y = E x X [ log p y x ( x | g ( x ))] + E y Y [ log p x y ( y | h ( y ))] (3) Although reinforcement learning-based approaches (He et al., 2016a) and Gumbel-softmax reparametrization (Maddison et al., 2017) have been used to handle back-propagation thorough non-differentiable argmax predictions.",
"in this paper, we do not back-propagate through h ( y ) and g ( x ) to simplify the training process.",
"As illustrated in Figure 1, our model is composed of seven modules: Two encoder-decoder pairs for translation, two decoders for captioning, and one shared visual encoder.",
"In this section, we first detail our basic MMT model architecture and the unsupervised setup.",
"Then we introduce pseudo visual pivoting: learning multilingual VSE and pivoted captioning.",
"Multimodal machine translation (Specia et al., 2016) (MMT) considers additional images as a",
"complementary information source for MT. An image z and the description in two languages form a triplet ( x , y , z ) ( X , Y , Z ) .",
"The Transformer encoder reads the source sentence and encodes it with hierarchical self-attention into h x = { h x 1 , , h xN } , h xi R d , where d is the dimension of the embedding space.",
"The visual encoder encodes the image into h z = { h z 1 , , h zK } , h zi R d , K max = 36 .",
"Most previous work (Chen et al., 2018; Su et al., 2019) use 2D ( K = 14 14 ) feature maps of ImageNet pre-trained ResNet (He et al., 2016b).",
"In contrast, we utilize the regional features of K salient visual objects in an image extracted by Faster-RCNN (Ren et al., 2015) and a 1-layer MLP as the encoder to encode visual objects.",
"Various attention strategies for sequence-to-sequence learning have been addressed in (Li-bovicky and Helcl, 2017).",
"Our model employs the hierarchical multi-head multimodal attention for decoding.",
"For decoding at time stamp i , the textual attention Attn ( h yi , h x ) computes the context vector c i = (cid:80) j j h xj via a attention-based alignment j = Align ( h yi , h xj ) , where (cid:80) j j = 1 and h yi is the decoder state.",
"Essentially, the one-head attention in Transformer is implemented as c i = softmax ( Q i ( K x ) (cid:62) / d ) V x where { Q , K x , V x } are the packed d -dimensional Query, Key, Value vectors, which are the mapped and packed version of { h yi , h x , h x } .",
"For decoding with encoded visual and textual inputs, we utilize multimodal attention to compute the context vector c i : c xi = Attn ( h yi 1 , h x ) + v Attn ( h yi 1 , h z ) (4) In practice we set v = 1 .",
"0 .",
"Our multimodal decoder models the likelihood to predict the next token as: p ( y i | y <i , x , z ) = softmax ( f ( c i , y i 1 , h y i 1 ) , (5) where f ( . ) denotes the aggregated non-linear feature mapping in Transformer.",
"Unsupervised multimodal MT (Nakayama and Nishida, 2017; Chen et al., 2018; Su et al., 2019) poses a new yet challenging problem.",
"On both the source and target sides, only non-overlapping monolingual multimodal data are presented for training and validation.",
"Specifically, the data available are: ( x , z x ) ( X , Z ) , ( y , z y ) ( Y , Z ) , such that { x } { y } = , { z x } { z y } = .",
"Note that there are no parallel translation pairs available (un-supervised), and the images are mutually exclusive for different languages.",
"For multimodal back-translation, the generated pseudo target sentence conditioned on the source sentence and image can be re-written as g ( x , z x ) = argmax p xz y ( y | x , z x ) , where p xz y ( y | x , z ) = (cid:81) Mi =1 p ( y i | y <i , x , z ) .",
"Similar for p yz x ( x | y , z ) and h ( y , z y ) .",
"For unsupervised multimodal MT, the multimodal back-translation objective can be extended as: L MBTx y = E ( x , z x ) (cid:104) -log p yz x ( x | g ( x , z x ) , z x ) (cid:105) + E ( y , z y ) (cid:104) -log p xz y (cid:0) y | h ( y , z y ) , z y ) (cid:1)(cid:105) (6) We simplify the notation of expectation for clarity.",
"Aligning the latent spaces of the source and target languages without supervision is challenging, as discussed in (Lample et al., 2018b).",
"However, as people speak different languages biologically share similar visual systems, we envision that the shared visual space can serve as the pivot for alignment.",
"Unlike most previous work (Chen et al., 2018; Su et al., 2019) treating images merely as a feature, we propose two visual pivoting approaches: (1) Aligning the multilingual VSE space; (2) Image pseudo pivoting via captioning.",
"As illustrated in Figure 2, for (1), we use images as the approximate pivots connecting real non-parallel sentences.",
"(srcimg -tgt.)",
"In (2), for each pivoting real image, we genera dog running in a field einhundluftin einerwiese a biker with a white helmet is in midair.",
"ate captions in both languages to construct pseudo source-target sentence pairs.",
"( src -imgtgt ), where the italic item is pseudo.",
"We collectively term the proposed approach pseudo visual pivoting .",
"We posit that for X , Y , Z , the two language spaces X , Y could be properly associated by respectively aligning two monolingual VSE spaces X Z and Y Z .",
"We leverage the contrastive objective in cross-modal retrieval (Kiros et al., 2014; Huang et al., 2019b) for aligning multimodal inputs in the shared VSE space where the embeddings are close if they are semantically associated or paired.",
"Specifically, we generalize the fine-grained (object-level and token-level), monolingual textual-to-visual, and visual-to-textual attention (Lee et al., 2018; Huang et al., 2019c) into the multilingual setup.",
"For fine-grained image-sentence alignment, let s ij = cos ( h xi , h zj ) denotes the cosine similarity between the i -th encoded token and the j -th encoded visual object.",
"The image-sentence similarity can be measured by averaging the cosine similarities between the visually-attend sentence embeddings and the visual embeddings of the objects.",
"The visually-attended sentence embeddings h zx are the weighted combination of the encoded tokens h x .",
"Precisely, we compute h zxj = (cid:80) Ni =1 ij h xi , where j = 1 K and ij = softmax i ( s ij ) .",
"Let us denote by S ( x , z ) = 12 K (cid:80) Kj =1 cos ( h zxj , h zj ) + 12 N (cid:80) Ni =1 cos ( h xzi , h xi ) as the image-sentence similarity, the contrastive triplet loss encouraging image-sentence alignment in the VSE space can be written as: L c ( x , z ) = max x (cid:2) S ( x , z ) + S ( x , z ) (cid:3) + + max z (cid:2) S ( x , z ) + S ( x , z ) (cid:3) + , (7) where [ . ] + is the hinge function, and x and z are the non-paired (negative) instances for x and z .",
"Intuitively, when the loss decreases, the matched images and sentences will be drawn closer down to a margin than the hardest non-paired ones.",
"Formally, we minimizing the following objective for cross-modal alignments in the two VSE spaces: LV SE x,y,z = E ( x , z x ) (cid:104) L c ( x , z x ) (cid:105) + E ( y , z y ) (cid:104) L c ( y , z y ) (cid:105) (8) 3.4 Image Captioning for Pseudo Pivoting Inspired by back-translation with monolingual corpora, we propose a novel cross-modal approach to generate weakly-supervised pairs to guide language space alignment for unsupervised MMT.",
"Precisely, we leverage image captioning to synthesize pseudo sentence pairs (pivoted and conditioned on the image) for back-translation and paired-translation.",
"Image Captioning Image captioning models are akin to MT models besides the non-sequential visual encoder.",
"For example, an image-to-source captioning model estimates the likelihood as p z x ( x | z ) = (cid:81) Ni =1 p ( x i | x <i , z ) , where z is the encoded image.",
"Essentially, the captioning model learns to minimize the following loss: L CAPz x = E ( z x , x ) [ log p z x ( x | z x )] (9) As illustrated in Figure 2, we incorporate two captioning models Z X and Z Y to generate additional pseudo parallel sentences pivoted on the image as additional weak supervision to better align language latent spaces in unsupervised MMT.",
"For example, with Image English and Image German, the generated pseudo (English, German) pair is then pivoted on the Image.",
"Learning captioning models is practical as it is easier to collect large-scale image-text pairs than translation pairs.",
"We pre-train these captioning models and use them to generate sentences in two languages depicting the same image, i .",
"e",
"., c x ( z x ) = argmax p z x ( x | z x ) and c y ( z x ) = argmax p z y ( y | z x ) .",
"The pivoted captions then enable the following two objectives: Pivoted Captioning for Back-Translation We utilize the synthetic multilingual captions ( i .",
"e",
"., c x ( z x ) , c y ( z x ) from the source images and c x ( z y ) , c y ( z y ) from the target images) to reversely reconstruct the synthetic captions from their translations in both directions.",
"Formally, we compute the following caption-based back-translation loss: L CBTx y = E z x (cid:104) -log p yz x (cid:0) c x ( z x ) | g ( c x ( z x ) , z x ) , z x (cid:1) -log p xz y (cid:0) c y ( z x ) | g ( c y ( z x ) , z x ) , z x (cid:1)(cid:105) + E z y (cid:104) -log p yz x (cid:0) c x ( z y ) | h ( c x ( z y ) , z y ) , z y (cid:1) -log p xz y (cid:0) c y ( z y ) | h ( c y ( z y ) , z y ) , z y (cid:1)(cid:105) (10) Pivoted Captioning for Paired-Translation With the synthetic pseudo paired (source, target) captions pivoted on a image ( e .",
"g .",
"( c y ( z x ) , c x ( z x ) ), the caption-based paired-translation loss is defined as: L CPTx y = E z x (cid:104) -log p xz y ( c y ( z x ) | c x ( z x ) , z x ) (cid:105) + E z y (cid:104) -log p yz x ( c x ( z y ) | c y ( z y ) , z y ) (cid:105) (11) Note that similar to the text back-translation, for L CPTx y and L CBTx y , we do not back-prop through the captioning step.",
"For optimization, we sample mini-batches and minimizing the following loss: L = L MBTx y + LV SE x,y,z + L CBTx y + L CPTx y (12) Here we drop the weights w of each loss for clarity.",
"In practice, all the weights are set to 1.0 except for w CPT where we employ a decreasing learning scheduler specified in the next section.",
"We first describe the implementation details and the experimental setup.",
"Then we compare our approach with baselines with detailed analysis.",
"multimodal MT. It contains 29K training, 1K validation, and 1K testing images.",
"Each image has three descriptions in English/German/French, which are translations of each other.",
"To ensure the model never learn from parallel sentences, we randomly split Multi30K training and validation sets in half for one language and use the complementary half for the other.",
"The resulting M30k-half are two corpora with non-overlapping 14,500 training and 507 validation image-sentence pairs, respectively.",
"For text pre-processing, we use Moses (Koehn et al., 2007) scripts for tokenization and apply the Byte Pair Encoding (BPE) (Sennrich et al., 2016b) from XLM.",
"To identify and extract features of visual objects in images, we use the Faster-RCNN (Ren et al., 2015) model in (Anderson et al., 2018) to detect up to 36 salient visual objects per image and extract their corresponding 2048-dim regional features.",
"We use Transformer as the underlying architecture for the translation and captioning modules.",
"Each encoder/decoder of the translator is with 6-layer stacked Transformer network, 8 heads, 1024 hidden units, and 4096 feed-forward filter size.",
"The captioner is a 6-layer Transformer decoder with the same configuration.",
"The visual encoder is a 1-layer MLP which maps visual feature to the shared 1,024-dim embedding space then adds the positional encoding to encode spatial locations (nor-malized top-left and bottom-right coordinates) of visual objects.",
"Our implementation is based on the codebase of XLM and MASS. 4.3 Experimental Details We respectively conduct unsupervised MMT experiments on Multi30K-half for two language pairs: English-French and English-German.",
"Pre-Training Pre-training is a critical step for unsupervised MT. We follow the setup in UMMT (Su et al., 2019) for a fair comparison.",
"For each language, we create a text-only pre-training set by combining the shuffled first 10 million sentences of the WMT News Crawl datasets from 2007 to 2017 with 10 times of M30k-half, resulting in a text-only dataset with 10.145 million unparalleled sentences in English, French, German respectively.",
"For text pre-training, we leverage the script and the masked seq-to-seq objective proposed in MASS, which randomly masks a span in a sentence then encourages the model to decode and reconstruct the masked sequence as the monolingual language model pre-training.",
"More details can be found in the original paper.",
"Note that there is no fine-tuning (back-translation) on WMT for a fair comparison with other baselines.",
"For multimodal pre-training of the captioning modules, we use the out-of-domain MS-COCO (Lin et al., 2014) dataset.",
"We randomly split the training set into two disjoint subsets.",
"Each set contains 56,643 images and 283,215 sentences.",
"We use the translate-train strategy as in XNLI (Con-neau et al., 2018b).",
"We leverage Google Translate to translate one set of English sentences into French and German.",
"We pre-train the captioning modules with Eq.",
"9 and fix them during fine-tuning to avoid overfitting.",
"Note that the captioning modules are trained on non-parallel sentences with disjoint image subsets, which implies no overlap between English-German or English-French sentences.",
"Fine-tuning on Multi30K-half We fine-tune on the training set of Multi30K-half for 18 epochs.",
"We train our model with the Adam optimizer (Kingma and Ba, 2014) with a linear warm-up and a learning rate varying from 10 7 to 10 5 .",
"We apply a linearly decreasing weight from 1.0 to 0.1 at 10-th epoch for w CPT as we empirically observe that the generated captions are relatively too noisy to serve as good pseudo pairs in the later stage of training.",
"The margin in VSE is set to 0.1.",
"Other hyper-parameters in Transformer follow the default setting in MASS.",
"We use 4 Titan Xp GPUs with 1,000 tokens in each mini-batch for training.",
"Evaluation and Model selection For evaluation, we report BLEU scores by multi-bleu.pl 1 in Moses and METEOR 2 scorea on the Multi30K testing set.",
"For model selection without a parallel validation corpus, we consider the unsupervised criterion proposed in (Lample et al., 2018a) based on the BLEU scores of round-trip translations (source target source and target source target) which have been empirically shown to correlate well with the testing metrics.",
"We compare recent unsupervised text-only and multimodal MT baselines listed in the following: (1) MUSE (Conneau et al., 2018a) is a word-to-word",
"MT model with pre-trained Wikipedia embeddings.",
"(2) UNMT (Lample et al., 2018a) sets the tone of using denoising autoencoder and back-translation for unsupervised MT. (3) XLM (Conneau and Lample, 2019) deploys masked language model from BERT.",
"(4) MASS (Song et al., 2019) uses a masked seq-to-seq pre-training objective, achieves the current state-of-the-art performance in text-only unsupervised MT. (5) Game-MMT (Chen et al., 2018) is a reinforcement learning-based unsupervised MMT.",
"(6) UMMT (Su et al., 2019) use visual feature for denoising autoencoder and back-translation.",
"UMMT is the current state of the art in unsupervised MMT.",
"We either use the reported scores in the original papers or use their best scripts with their pre-trained language models publicly available for fine-tuning on Multi30K-half.",
"Table 1 presents the benchmark results with other state-of-the-art unsupervised MT and MMT models on the Multi30K testing set.",
"The first four rows show the results of the recent text-only MT models.",
"Game-MMT and UMMT are MMT models using both image and text inputs.",
"Our full model (T+V+VSE+CBT+CPT) yields new state-of-the-art performance in BLEU and METEOR, outperforming the text-only and multimodal baseline model by a large margin.",
"Notably, our full model outperforms UMMT by +5.5 12.5 BLEU scores, sets a new state of the art in unsupervised MMT.",
"Although pre-training plays a vital role in unsupervised MT, comparing Ours-Text only and Ours-Full, the results suggest that multimodal content can further boost the performance for unsupervised MT. Images provide +2.7 3.7 BLEU score improvement across four tasks.",
"Note that our model uses different monolingual pre-training corpora to MASS and XLM for the fair comparison with UMMT.",
"With a similar pre-training objective, our text-only model is worse than MASS, while Ours-Full outperforms MASS by +2.3 3.7 in BLEU.",
"Comparing the multimodal models trained with and without visual content (UMMT-T vs. UMMT-Full and Ours-T vs. Ours-Full), our model achieves +2.5 3.7 improvements in BLEU while +1.4 2.5 for UMMT.",
"The results imply that, even with a higher text-only baseline ( e . g . 49.5 vs. 37.2 in en fr), the proposed model incorporates visual en fr fr en en de de en Model BLEU METEOR BLEU METEOR BLEU METEOR BLEU METEORMUSE (Conneau et al., 2018a) 8.5 -16.8 -15.7 -5.4 UNMT (Lample et al., 2018a) 32.8 -32.1 -22.7 -26.3 XLM (Conneau and Lample, 2019) 46.3 64.3 42.0 38.1 27.4 48.7 30.7 31.0 MASS (Song et al., 2019) 49.8 65.8 43.7 38.7 30.2 51.3 32.5 33.4 Game-MMT (Chen et al., 2018) --16.6 -19.6 -UMMT-T (Su et al., 2019) 37.2 33.7 * 38.5 36.4 21.0 25.4 * 25.0 28.4 UMMT-Full (Su et al., 2019) 39.8 35.5 * 40.5 37.2 23.5 26.1 * 26.4 29.7 Ours-Text only 49.5 65.7 43.5 38.5 30.1 51.5 32.4 33.0 Ours-Full 52.3 67.6 46.0 39.8 33.9 54.1 36.1 34.7 Table 1: Results on unsupervised MT .",
"In Figure 3, we provide some qualitative results on the Multi30K testing set.",
"We observe a consistent improvement of unsupervised translation quality with our full model to the text-only one.",
"Without parallel translation pairs as the vital supervision, the proposed pseudo visual pivoting successfully disambiguates the word semantics in the similar syntactic category and results in improved cross-lingual word alignment; for instance, cafe vs. soda machine in the third French example, and felsigen (rocky) vs. verschneiten (snowy) in the first German example.",
"To quantify module-wise contribution in pseudo visual pivoting, we summarize our ablation studies in Table 2.",
"Comparing the performance improvement from text-only to the model with regional visual features (T+V), the features of salient visual objects contribute +0.6 0.9 BLEU score over a much higher text-only baseline compared to UMMT.",
"In pseudo visual pivoting, +VSE promotes the alignments in the monolingual VSE spaces and results in an additional +1.3 2.0 gain in BLEU.",
"This improvement validates our hypothesis that the visual space can effectively serve as the bridge connecting the source and target language latent spaces.",
"Also, synthesizing image-pivoted pseudo caption pairs effectively provides weak supervision for aligning the cross-lingual latent space in unsupervised MMT.",
"We observe that the pivoted captions for paired translation (CPT) is more effective than treating them as back-translation pairs (CBT).",
"Utilizing generated image-pivoted captions is shown to be a promising approach for weakly supervised Model (Ours) en fr fr en en de de en Text only 49.52 43.48 30.10 32.35 T+V 50.43 44.10 31.01 32.95 T+V+VSE 51.72 45.73 32.67 34.94 T+V+CPT 51.64 45.55 33.04 35.02 T+V+CBT 51.23 45.21 32.51 33.87 T+V+VSE+CBT 51.81 45.83 33.01 34.38 T+V+CPT+CBT 51.85 45.65 33.61 35.85 T+V+VSE+CPT 52.19 46.10 33.73 35.60 Full Model 52.29 45.98 33.85 36.07 Table 2: Ablation studies.",
"or unsupervised MMT.",
"The full model which employs VSE, CBT, and CPT achieves +1.9 3.1 improvements compared to our multimodal baseline (row two, visual feature only).",
"How does our unsupervised MMT model generalize when images are not available at the testing time?",
"Table 3 shows the testing results without images.",
"As can be observed, our model generalizes well.",
"The differences are mostly less than 1.0 in BLEU.",
"As our model, when being tested without visual content, still outperforms other unsupervised text-only or multimodal MT models listed in Table 1, the minor drop in BLEU implies that the improved cross-lingual latent space alignment via pseudo visual pivoting is likely to be more critical than using images as an input feature for decoding.",
"Luckily, such alignment is already preserved in the training phase with the proposed approach.",
"An interesting question is: How much does the visual content (as a feature) contribute?",
"As in leave-one-feature-out cross-validation, we compare T: un jeunegaron se tientsur un chariot de vtements.",
"the difference of performance between inferencing with and without images.",
"The larger the difference (the subscripts in Table 3) implies a model better utilizes visual content.",
"Compared with UMMT, our model has better utilization.",
"We observe that the key to such difference is the VSE objective.",
"Our model trained without the VSE objective results in worse utilization (smaller difference at the testing time), possibly because the source text-image pairs are distant in the multilingual VSE space.",
"Will our model benefit from real pivoting (src-img 1 , img 1 -tgt, overall src-img 1 -tgt)?",
"We train our models with overlapped images while leaving sentences in the source and target languages unparalleled (use no translation pairs).",
"From the first three rows in Table 4, the performance is improved when training with the overlapped images and their corresponding sentences.",
"Comparing the improvement from 0% to 100% of the text-only model and the full model, a larger gain is observed with the proposed pseudo visual pivoting which aligns and reduces uncertainty in the language latent spaces.",
"Furthermore, under the low-resource setting (3.0K non-parallel data, row six and seven), a substantial improvement over the text-only model is still observed.",
"These results suggest that the proposed pseudo visual pivoting is likely to generalize to the semi-supervised and the low-resource setting, which we consider as our future work.",
"Although the proposed pseudo visual pivoting targets unsupervised MMT, we are also interested in its performance under the fully supervised setup.",
"To gain insights, we conduct supervised MMT experiments by changing the back-translation objective for unsupervised MT (Eq. 6) to the supervised MT objective (Eq. 2) with additional visual inputs.",
"We benchmark with recent supervised MMT models, including Imagination (Elliott and Kadar, 2017), LIUM-CVC (Caglayan et al., 2017), and VAG (Zhou et al., 2018) on Multi30K.",
"Table 5 shows the testing results.",
"Our model significantly outperforms other baselines and achieves state-of-the-art performance.",
"Comparing to the unsupervised model trained with full Multi30K (Table 4,100% (29K/29K)), the direct supervision from parallel translation pairs results in a +6.5 7.1 gain in BLEU.",
"Notably, images provide a minor improvement with full supervision from translation pairs.",
"This result implies that, compared to serving as a complementary feature, visual information likely contributes more to improving crosslingual alignment via pseudo visual pivoting for MMT with limited supervision.",
"Unsupervised MT For pivoting with a third language, Firat et al. (2016) pre-train a multi-way multilingual model to generate pseudo pairs to improve zero-shot translation.",
"Chen et al. (2017) use a teacher-student framework and assume parallel sentences share a similar likelihood for generating sentences in the third language while Cheng et al. (2017) maximize the expected likelihood.",
"Our model does not rely on a third language.",
"Our framework is along the line of research in (Lample et al., 2018a,b; Conneau and Lample, 2019), which aims at learning an aligned latent space between the two languages to translate by reconstruction.",
"Nevertheless, we focus on the multimodal setup where the visual space is dissimilar to the language spaces with challenging asymmetric interactions between modalities.",
"Supervised MMT Supervised MMT is introduced in (Specia et al., 2016) as a multi-encoder single-decoder framework with additional image inputs.",
"Huang et al. (2016) encode word sequences with regional visual objects while Calixto and Liu (2017) leverage global visual feature.",
"LIUM-CVC (Caglayan et al., 2017) uses element-wise multiplication to model the image-text interaction.",
"Imagination (Elliott and Kadar, 2017) and VAG (Zhou et al., 2018) learns with the auxiliary image reconstruction and source-image-target triplet alignment tasks, respectively.",
"While these methods achieve improvements, their advantage over the text-only models is still minor under the supervised scenario.",
"As analyzed in (Caglayan et al., 2019), visual content is more critical when the textual content is limited or uncertain in MMT.",
"We study the more challenging unsupervised MMT.",
"Unsupervised MMT To our best knowledge, three recent works have generalized MMT to the unsupervised setting.",
"Nakayama and Nishida (2017) learn modal-agnostic fixed length image/sentence embeddings.",
"In contrast, our model promotes fine-grained (object-token) varying-length embedding, which better aligns VSE space.",
"Game-MMT (Chen et al., 2018) use a captioning and a translation model maximizing the likelihood of translated captions to original sentences.",
"We synthesize captions for symmetric back-translation and considers no ground truth image annotation in the loop.",
"Empirically, it is preferred to separate real and generated captions.",
"UMMT (Su et al., 2019) uses Transformers, autoencoder loss, and multimodal back-translation.",
"We do not use autoencoder.",
"Our model leverages object detection for multimodal back-translation and equips pseudo visual pivoting.",
"Image Captioning and VSE Our method draws inspiration from captioning and cross-modal retrieval.",
"Recent progress in captioning aims at using reinforcement learning to improve diversity (Dai et al., 2017) or maximize metric (Rennie et al., 2017).",
"We use a vanilla MLE objective.",
"For learning VSE, we leverage the contrastive loss (Kiros et al., 2014) from cross-modal retrieval, which is shown more robust than maximizing canonical correlation among modalities as in (Andrew et al., 2013; Huang et al., 2018).",
"For encoding image and text, we generalize the cross-modality attention from SCAN (Lee et al., 2018) to the multilingual scenario for learning a multilingual VSE space (Gella et al., 2017; Huang et al., 2019a).",
"We have presented a novel approach: pseudo visual pivoting for unsupervised multimodal MT. Beyond features, we use visual content to improve the crosslingual alignments in the shared latent space.",
"Precisely, our model utilizes the visual space as the approximate pivot for aligning the multilingual multimodal embedding space.",
"Besides, it synthesizes image-pivoted pseudo sentences in two languages and pairs them to translate by reconstruction without parallel corpora.",
"The experiments on Multi30K show that the proposed model generalizes well and yields new state-of-the-art performance.",
"This work is supported by the DARPA grants funded under the AIDA program (FA8750-18-2-0018), the LWLL program (FA8750-18-2-0501), and the GAILA program (award HR00111990063).",
"Xiaojun Chang is supported by Australian Research Council Discovery Early Career Award (DE190100626).",
"The authors would like to thank the anonymous reviewers for their suggestions and Google Cloud for providing the research credits."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"method",
"abstain",
"objective",
"result",
"result",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"objective",
"other",
"method",
"other",
"other",
"method",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Beam search optimization (Wiseman and Rush, 2016) resolves many issues in neural machine translation.",
"However, this method lacks principled stopping criteria and does not learn how to stop during training, and the model naturally prefers longer hypotheses during the testing time in practice since they use the raw score instead of the probability-based score.",
"We propose a novel ranking method which enables an optimal beam search stopping criteria.",
"We further introduce a structured prediction loss function which penalizes suboptimal finished candidates produced by beam search during training.",
"Experiments of neural machine translation on both synthetic data and real languages (German English and Chinese English) demonstrate our proposed methods lead to better length and BLEU score.",
"Sequence-to-sequence (seq2seq) models based on RNNs (Sutskever et al., 2014; Bahdanau et al., 2014), CNNs (Gehring et al., 2017) and self-attention (Vaswani et al., 2017) have achieved great successes in Neural Machine Translation (NMT).",
"The above family of models encode the source sentence and predict the next word in an autoregressive fashion at each decoding time step.",
"The classical cross-entropy training objective of seq2seq models is to maximize the likelihood of each word in the translation reference given the source sentence and all previous words in that reference.",
"This word-level loss ensures efficient and scalable training of seq2seq models.",
"However, this word-level training objective suffers from a few crucial limitations, namely the label bias , the exposure bias , and the loss-evaluation mismatch (Lafferty et al., 2001; Bengio et al., Equal contribution BSOBSO This work This work Figure 1: The BLEU score of BSO decreases after beam size 3 as results of increasing length ratio 1 in German English translation. Our model gets higher BLEU with larger beam. 2015a; Venkatraman et al., 2015).",
"In addition, more importantly, at decoding time, beam search is universally adopted to improve the search quality, while training is fundamentally local and greedy.",
"Several researchers have proposed different approaches to alleviate above problems, such as reinforcement learning-based methods (Ran-zato et al., 2016; Rennie et al., 2017; Zheng et al., 2018b), training with alternative references (Shen et al., 2016; Zheng et al., 2018a).",
"Recently, Wiseman and Rush (2016) attempt to address these issues with a structured training method, Beam Search Optimization (BSO).",
"While BSO outperforms other proposed methods on German-to-English translation, it also brings a different set of problems as partially discussed in (Ma, 2018) which we present with details below.",
"1 There are two types of length ratios in this paper:",
"(a) target to reference ratio ( | y | / | y | ), which is used in BLEU, and",
"(b) target to source ratio ( | y | / | x | ).",
"By default, the term length ratio in this paper refers to the former.",
"BSO relies on unnormalized raw scores instead of locally-normalized probabilities to get rid of the label bias problem.",
"However, since the raw score can be either positive or negative, the optimal stopping criteria (Huang et al., 2017) no longer holds, e.g., one extra decoding step would increase the entire unfinished hypothesis's model score when we have positive word score.",
"This leads to two consequences: we do not know when to stop the beam search and it could return overlength translations (Fig. 1) or underlength translations (Fig. 3) in practice.",
"As shown in Fig. 1, the BLEU score of BSO drops significantly when beam size gets larger as a result of overlong translations (as evidenced by length ratios larger than 1).",
"Furthermore, BSO performs poorly (shown in Section 4) on hard translation pairs, e.g., Chinese English (Zh En) translation, when the target / source ratio is more diverse (Table 1).",
"To overcome the above issues, we propose to use the sigmoid function instead of the raw score at each time step to rank candidates.",
"In this way, the model still has probability properties to hold optimal stopping criteria without label bias effects.",
"Moreover, we also encourage the model to generate the hypothesis which is more similar to gold reference in length.",
"Compared with length reward-based methods (Huang et al., 2017; Yang et al., 2018), our model does not need to tune the predicted length and per-word reward.",
"Experiments on both synthetic and real language translations (De En and Zh En) demonstrate significant improvements in BLEU score over strong baselines and other methods.",
"Here we briefly review the conventional NMT and BSO (Wiseman and Rush, 2016) to set up the notations.",
"For simplicity, we choose to use RNN-based model but our methods can be easily applied to other designs of seq2seq model as well.",
"Regardless of the particular design of different seq2seq models, generally speaking, the decoder always has the following form: p ( y | x ) = (cid:81) | y | t =1 p ( y t | x , y <t ) (1) where x RN D represents the D -dimension hidden states from encoder with N words and y <t denotes the gold prefix ( y 1 , ..., y ( t 1) ) before t.",
"The conventional NMT model is locally trained to maximize the above probability.",
"Instead of maximizing each gold word's probability, BSO tries to promote the non-probabilistic scores of gold sequence within a certain beam size b .",
"BSO removes the softmax layer and directly uses the raw score after hidden-to-vocabulary layer, and the non-probabilistic scoring function f x ( y t | y <t ) represents the score of word y t given gold prefix y <t and x .",
"Similarly, f x ( y bt | y b<t ) is the b th sequence with beam size b at time step t .",
"Then, we have the following loss function to penalize the b th candidate and promote gold sequence: L = | y | (cid:88) t =1 ( y b t )(1 + f x ( y bt | y b<t ) f x ( y t | y <t )) + (2) where ( y b t ) is defined as (1 BLEU ( y b t , y t )) which scales the loss according to BLEU score between gold and b th hypothesis in the beam.",
"The notation ( ) + represents a max function between any value and 0 , i.e., z + = max (0 , z ) .",
"When Eq.",
"1 equals to 0 at time step t , then the gold sequence's score is higher than the last hypothesis in the beam by 1, and a positive number otherwise.",
"Finally, at the end of beam search ( t = | y | ), BSO requires the score of y exceed the score of the highest incorrect hypothesis by 1 .",
"Note that the above non-probabilistic score function f x ( ) is not bounded as probabilistic score in conventional NMT.",
"In practice, when we have positive word score, then the unfinished candidates always get higher model scores with one extra decoding step and the optimal stopping criteria 2 (Huang et al., 2017) is no longer hold.",
"BSO implements a similar shrinking beam strategy which duplicates top unfinished candidate to replace finished hypotheses and terminates the beam search when there are only </eos> in the beam.",
"Non-probabilistic score function works well in parsing and Statical MT where we know when to stop beam search.",
"However, in the NMT scenario, without optimal stopping criteria, we don't know when to stop beam search.",
"As mentioned in Section 2, BSO relies on raw score function to eliminate label bias effects.",
"2 Beam search stops when the score of the top unfinished hypothesis is lower than any finished hypothesis, or the </eos> is the highest score candidate in the beam.",
"However, without using locally-normalized score does not mean that we should stop using the probabilistic value function.",
"Similar with multi-label classification in (Ma et al., 2017), instead of using locally normalized softmax-based score and non-probabilistic raw scores, we propose to use another form of probabilistic scoring function, sigmoid function, which is defined as follows: g x ( y t | y <t ) = (1 + e w f x ( y t | y <t ) ) 1 (3) where w is a trainable scalar parameter which shifts the return value of f x ( y t | y <t ) into a nonsaturated value region of sigmoid function.",
"Eq.",
"3 measures the probability of each word independently which is different from locally-normalized softmax function.",
"Similar to the scenario in multi-label classification, g x ( y t | y <t ) only promotes the words which are preferred by gold reference and does not degrade other words.",
"Eq.",
"3 enables the model to keep the probability nature of scoring function without introducing label bias effects.",
"After the model regain probability-based scoring function, the optimal stopping criteria can be used in testing time decoding.",
"Similar to Eq.",
"1, testing time decoder multiplies the new word's probabilistic score with prefix's score when there is a new word appends to an unfinished hypothesis.",
"Though the new word's probabilistic score is upper bounded by 1, in practice, the score usually far less than one.",
"As described in (Huang et al., 2017; Yang et al., 2018), decoder always prefers short sentence when we use the probabilistic score function.",
"To overcome the above so-called beam search curse , we propose to penalize early-stopped hypothesis within the beam during training.",
"The procedure during training is illustrated in Fig.",
"2. Data Split | x | ( | x | ) ( | y | | x | ) ( | y | | x | ) # sents Synthetic Train 9.47 5.45 3.0 0.52 5K Valid 9.54 5.42 3.0 0.53 1K Test 9.51 5.49 3.0 0.52 1K De En Train 17.53 9.93 1.07 0.16 153K Valid 17.55 9.97 1.07 0.16 7K Test 18.89 12.82 1.06 0.16 6.5K Zh En Train 23.21 13.44 1.30 0.33 1M Valid 29.53 16.62 1.34 0.22 0.6K Test 26.53 15.99 1.4 0.24 0.7K Table 1: Dataset statistics of source sentence length and the ratio between target and source sentences.",
"Different from BSO, to penalize the underlength finished translation hypotheses, we include additional violations when there is an </eos> within the beam before the gold reference finishes and we force the score of that </eos> lower than the b + 1 candidate by a margin.",
"This underlength translation violation is formally defined as follows: L s = | y | (cid:88) t =1 b (cid:88) j =1 1 ( y jt = </eos> ) Q ( y jt , y b +1 t ) , Q ( y jt , y b +1 t ) = (1 + f x ( y jt | y j<t ) f x ( y b +1 t | y b +1 <t )) + (4) where notation 1 is identification function which only equals to 1 when i th candidate in beam y jt is </eos> , e.g. in Fig.",
"2. We only have non-zero loss when the model score of underlength translation candidates are greater than the b + 1 candidate by a margin.",
"In this way, we penalize all the short hypotheses during training time.",
"Note that during both training and testing time, the decoder stops beam search when it satisfies the optimal stopping criteria (Huang et al., 2017).",
"Therefore, we do not need to penalize the overlength translations since we have already promoted the gold reference to the top of the beam at time step | y | during training.",
"We showcase the performance comparisons over three different datasets.",
"We implement seq2seq model, BSO and our proposed model based on PyTorch-based OpenNMT (Klein et al., 2017).",
"We use a two-layer bidirectional LSTM as the encoder and a two layer LSTM as the decoder.",
"We train Seq2seq model for 20 epochs to minimize perplexity on the training dataset, with a batch size of 64, word embedding size of 512, the learning rate of 0.1, learning rate decay of 0.5 and dropout rate of 0.2.",
"Following Wiseman and Rush (2016), we then train BSO and our model based on the previous Seq2seq model with the learning rate of 1 3 5 7 9 Beam Size 0 1 2 3 L e n g t h R a t i o Seq2seq BSO This work Figure 3: Length ratio on synthetic test dataset.",
"0.01 and learning rate decay of 0.75, batch size of 40.",
"Note that our pretrained model is softmax-based, and we only replace the softmax layer with the sigmoid layer for later training for simplicity.",
"The performance will have another boost when our pretrained model is sigmoid-based.",
"We use Adagrad (Duchi et al., 2011) as the optimizer.",
"In Zh En task, we employ BPE (Sennrich et al., 2015) which reduces the source and target language vocabulary sizes to 18k and 10k.",
"Following BSO, we set the decoding beam size smaller than the training beam size by",
"1. 4.1 Synthetic Task Table 1 shows the statistics of source sentence length and the ratio between target and source sentences.",
"The synthetic dataset is a simple translation task which generates target sentences from this grammar: { a x, b x x, c x x x, d x x x x, e x x x x x } .",
"For example:",
"1. source sentence [ b c a ] will generate the target sentence [ x x x x x x ] (2 x from b , 3 x from c and 1 x from a ).",
"2. source sentence [ a, b, c, d, e ] will be translated into [ x x x x x x x x x x x x x x x ] in target side (1 x from a , 2 x from b , 3 x from c , 4 x from d and 5 x from e ).",
"This dataset is designed to evaluate the length prediction ability of different models.",
"Fig. 3 shows the length ratio of different models on the test set.",
"Only our model can predict target sentence length correctly with all beam sizes which shows a better ability to learn target length.",
"The De En dataset is previously used in BSO and MIXER (Ranzato et al., 2016), which is from IWSLT 2014 machine translation evaluation campaign (Cettolo et al., 2014) 3 .",
"3 The test set of De En involves some mismatched source-reference pairs.",
"We have cleaned this test set and report the statistics based on the cleaned version.",
"Table 2 shows the BLEU score and length ratio of different models on dev-set.",
"Similar to seq2seq, our proposed model achieves better BLEU score with larger beam size and outperforms the best BSO b = 4 model with 0.76 BLEU.",
"The ablation study in Table 3 shows that the model produces shorter sentence without scale augment (term ( y b t ) in Eq.",
"2) and early stopping loss.",
"The model also performs worse when replacing softmax to sigmoid because of the label bias problem.",
"Fig. 4 shows BLEU score and length ratio of BSO and our models trained with beam size b = 6 with different decoding beam size.",
"Compared with BSO, whose BLEU score degrades dramatically when increasing beam size, our model performs much more stable.",
"Moreover, BSO achieves much better BLEU score with decoding beam b = 3 while trained with b = 6 because of a better Model Original Test Set Cleaned Test Set BLEU Len.",
"length ratio, this is inconsistent with their claim that decoding beam size should smaller than training beam size by",
"1. Table 4 shows better accuracy of our proposed model than not only published test results of BSO (Wiseman and Rush, 2016), DAD (Bengio et al., 2015b) and MIXER (Ranzato et al., 2016), but also our implemented seq2seq and BSO model.",
"4.3 Zh En Translation Model Train Decode BLEU Len.",
"translation dataset.",
"We use the NIST 06 and 08 dataset with 4 references as the validation and test set respectively.",
"Table 1 shows that the characteristic of Zh En translation is very different from De En in source length and variance in tar-get/source length ratio.",
"We compare our model with seq2seq, BSO and seq2seq with length reward (Huang et al., 2017) which involves hyper-parameter to solve neural model's tendency for shorter hypotheses (our proposed method does not require tuning of hyper-parameter).",
"Fig. 5 shows that BSO prefers overlength hypotheses in short source sentences and underlength hypotheses when the source sentences are long.",
"This phenomenon degrades the BLEU score in dev-set from Table 5.",
"Our proposed model comparatively achieves better length ratio on almost all source sentence length in dev-set.",
"Our proposed methods are general techniques which also can be applied to the Transformer (Vaswani et al., 2017).",
"As part of our future works, we plan to adapt our techniques to the Transformer to further evaluate our model's performance.",
"There are some scenarios that decoding time beam search is not applicable, such as the simultaneous translation system proposed by Ma et al. (2018) which does not allow for adjusting the committed words, the training time beam search still will be helpful to the greedy decoding performance.",
"We plan to further investigate the performance of testing time greedy decoding with beam search optimization during training.",
"We propose two modifications to BSO to provide better scoring function and under-translation penalties, which improves the accuracy in De-En and Zh-En by 0.8 and 3.7 in BLEU respectively.",
"This work was supported in part by DARPA grant N66001-17-2-4030, and NSF grants IIS-1817231 and IIS-1656051."
] | [
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"other"
] |
[
"We present an effective end-to-end memory network model that jointly ( i ) predicts whether a given document can be considered as relevant evidence for a given claim, and ( ii ) extracts snippets of evidence that can be used to reason about the factuality of the target claim.",
"Our model combines the advantages of convolutional and recurrent neural networks as part of a memory network.",
"We further introduce a similarity matrix at the inference level of the memory network in order to extract snippets of evidence for input claims more accurately.",
"Our experiments on a public benchmark dataset, FakeNewsChallenge, demonstrate the effectiveness of our approach.",
"Recently, an unprecedented amount of false information has been flooding the Internet with aims ranging from affecting individual people's beliefs and decisions (Mihaylov et al., 2015; Mihaylov and Nakov, 2016) to influencing major events such as political elections (Vosoughi et al., 2018).",
"Consequently, manual fact checking has emerged with the promise to support accurate and unbiased analysis of rumors spreading in social medias, as well as of claims made by public figures or news media.",
"As manual fact checking is a very tedious task, automatic fact checking has been proposed as a possible alternative.",
"This is often broken into intermediate steps in order to alleviate the task complexity.",
"One such step is stance detection , which is also useful for human experts as a stand-alone task.",
"The task aims to identify the relative perspective of a piece of text with respect to a claim, typically modeled using labels such as agree , disagree , discuss , and unrelated ; Figure 1 gives some examples.",
"Work conducted while these authors were at QCRI.",
"Claim: Robert Plant Ripped up $800M Led Zeppelin Reunion Contract.",
"Here, we address the problem of stance detection using a novel model based on end-to-end memory networks (Sukhbaatar et al., 2015), which incorporates convolutional and recurrent neural networks, as well as a similarity matrix.",
"Our model jointly addresses the problems of predicting the stance of a text with respect to a given claim, and of extracting relevant text snippets as support for the prediction of the model.",
"We further introduce a similarity matrix, which we use at inference time in order to improve the extraction of relevant snippets.",
"The experimental results on the Fake News Challenge benchmark dataset show that our model, which is very feature-light, performs close to the state of the art.",
"Our contributions can be summarized as follows: ( i ) We apply a novel memory network model enhanced with CNN and LSTM networks for stance detection.",
"( ii )",
"We further propose a novel extension of the general architecture based on a similarity matrix, which we use at inference time, and we show that this extension offers sizable performance gains.",
"( iii )",
"Finally, we show that our model is capable of extracting meaningful snippets from a given text document, which is useful not only for stance detection, but more importantly can support human experts who need to decide on the factuality of a given claim.",
"Long-term memory is necessary to determine the stance of a long document with respect to a claim, as relevant parts of a documentparagraphs or text snippetscan indicate the perspective of a document with respect to a claim.",
"Memory networks have been designed to remember past information (Sukhbaatar et al., 2015) and they can be particularly well-suited for stance detection since they can use a variety of inference strategies alongside their memory component.",
"In this section, we present a novel memory network for stance detection.",
"It contains a new inference component that incorporates a similarity matrix to extract, with better accuracy, textual snippets that are relevant to the input claims.",
"A memory network is a 5-tuple { M, I, G, O, R } , where the memory M is a sequence of objects or representations, the input I is a component that maps the input to its representation, the generalization component G (Sukhbaatar et al., 2015) updates the memory with respect to new input, the output O generates an output for each new input and the current memory state, and finally, the response R converts the output into a desired response format, e.g., a textual response or an action.",
"These components can potentially use many different machine learning models.",
"Our new memory network for stance detection is a 6-tuple { M, I, F, G, O, R } , where F represents the new inference component.",
"It takes an input document d as evidence and a textual statement s as a claim and converts them into their corresponding representations in the input I .",
"Then, it passes them to the memory M .",
"Next, the relevant parts of the input are identified in F , and afterwards they are used by G to update the memory.",
"Finally, O generates an output from the updated memory, and converts it to a desired response format with R .",
"The network architecture is depicted in Figure 2. We describe the components below.",
"The input to the stance detection algorithm is a document d as evidence and a textual statement s as a claim, (see lines 2 and 3 in Table 1).",
"Each d is segmented into paragraphs x j of varied lengths, where each x j is considered as a piece of evidence for stance detection.",
"Outputs: (1) predicting the relative perspective (or stance) of ( d,s to a claim as agree , disagree , discuss , unrelated .",
"Memory Network Model: 1. Input memory representation (I): d ( X,W,E ) ( X,W,E ) TimeDistributed ( LSTM ) { m 1 ,...,m n } ( X,W,E ) TimeDistributed ( CNN ) { c 1",
"Indeed, a paragraph usually presents a coherent argument, unified under one or more inter-related topics.",
"The input component in our model converts each d into a set of pieces of evidence in a three dimensional (3D) tensor space as shown below (see line 11 in Table 1): d = ( X, W, E ) (1) where X = { x 1 , ..., x n } is a set of paragraphs considered as pieces of evidence; each x j is represented by a set of words W = { w 1 , ..., w v } drawn from a global vocabulary of size v and a set of neural representations E = { e 1 , ..., e v } for words in W .",
"This 3D space is illustrated as a cube in Figure 2. Each x j is encoded from the 3D space into a semantic representation at the input component using a Long Short-Term Memory (LSTM) network.",
"The lower left component in Figure 2 shows our LSTM network, which operates on our input as follows (see also line 12 in Table 1): ( X, W, E ) TimeDistributed ( LSTM ) { m 1 , ..., m n } (2) 768 Figure 2: The architecture of our memory network model for stance detection.",
"where m j is the LSTM representation of x j , and TimeDistributed() indicates a wrapper that enables training the LSTM over all pieces of evidence by applying the same LSTM model to each time-step of a 3D input tensor, i.e., ( X, W, E ) .",
"While LSTM networks were designed to effectively capture and memorize their inputs (Tan et al., 2016), Convolutional Neural Networks (CNNs) emphasize the local interaction between the individual words in the input word sequence, which is important for obtaining an effective representation.",
"Here, we use a CNN in order to encode each x j into its representation c j as shown below (see line 13 in Table 1).",
"( X, W, E )",
"TimeDistributed ( CNN ) { c 1 ,",
".., c n } (3) As shown in the left-top corner of Figure 2, this representation is passed as a new input to the component M of our memory network model.",
"Moreover, we keep track of the computed n -grams from the CNN so that we can use them later in the inference and in the response components (see sections 2.3 and 2.6).",
"For this purpose, we use a Maxout layer (Goodfellow et al., 2013) to take the maximum across k affine feature maps computed by the CNN, i.e., pooling across channels.",
"Previous work investigated the combination of convolutional and recurrent representations, which were fed to the other network as input (Tan et al., 2016; Donahue et al., 2015; Zuo et al., 2015; Sainath et al., 2015).",
"In contrast, we feed individual outputs into our memory network separately, and we let it decide which representation better helps the target task.",
"We demonstrate the effectiveness of this choice in our experiments.",
"Furthermore, we convert each input claim s into its representation using the corresponding LSTM and CNN networks as follows: s LSTM,CNN s lstm , s cnn (4) where s lstm and s cnn are the representations of s computed using LST M and CNN networks, respectively.",
"Note that these are separate networks with different parameters from those used to encode the pieces of evidence.",
"Lines 1014 of Table 1 describe the above steps in representing I in our memory network.",
"This component encodes each input document d into a set of pieces of evidence { x j } j : it computes LSTM and CNN representations, m j and c j , respectively, for each x j , and LSTM and CNN representations, s lstm and s cnn , for each claim s .",
"The resulting representations can serve to compute semantic similarity between claims and pieces of evidence.",
"We define the similarity P j lstm between s and x j as follows (see line 17 in Table 1): P j lstm = s lstm | M m j , j (5) where s lstm R q and m j R d are the LSTM representations of s and x j , respectively, and M R q d is a similarity matrix capturing their similarity.",
"For this purpose, M maps s and x j into the same space as shown in Figure 3. M is a set of q d parameters of the network, which are optimized during the training.",
"In a similar fashion, we compute the similarity P j cnn between x j and s using the CNN representations as follows (see line 19 in Table 1): P j cnn = s cnn | M 0 c j , j (6) 769 s : ( s l s t m , s c nn ) M s' x j : ( m j , c j ) sim ( s lstm , m j ) or sim ( s cnn , c j ) Figure 3: Matching a claim s and a piece of evidence x j using a similarity matrix M .",
"where s cnn R q 0 and c j R d 0 are the representations of s and x j obtained with CNN, respectively.",
"The similarity matrix M 0 R q 0 d 0 is a set of q 0 d 0 parameters of the network and is optimized during the training.",
"P j lstm and P j cnn indicate the claim-evidence similarity vectors computed based on the LSTM and on the CNN representations of s and x j , respectively.",
"The rationale behind using the similarity matrix is that in our memory network model, as Figure 3 shows, we seek a transformation of the input claim such that s 0 = M s in order to obtain the closest facts to the claim.",
"In fact, the relevant parts of the input document with respect to the input claim can be captured at a different level, e.g., using M 0 for the n -gram level or using the claim-evidence P j lstm or P j cnn , j at the paragraph level.",
"We note that ( i ) P j lstm uses LSTM to take the word order and long-length dependencies into account, and ( ii ) P j cnn uses CNN to take n -grams and local dependencies into account, as explained in sections 2.2 and 2.3.",
"Additionally, we compute another semantic similarity vector, P j tfidf , by applying a cosine similarity between the TF.IDF (Sprck Jones, 2004) representation of x j and s .",
"This is particularly useful for stance detection as it can help detect unrelated pieces of evidence.",
"The information flow and updates in the memory is as follows: first, the representation vector { m j } j is passed to the memory and updated using the claim-evidence similarity vector { P j tfidf } :",
"The reason for this weighting is to filter out most unrelated evidence with respect to the claim.",
"The updated m j in conjunction with s lstm are used by the inference componentcomponent F to compute { P j lstm } as explained in Section 2.3.",
"Then, { P j lstm } is used to update the new input set { c j } j to the memory: c j = c j (cid:12) P j lstm , j (8) Finally, the updated c j in conjunction with s cnn are used to compute P j cnn as explained in Sec. 2.3.",
"In memory networks, the memory output depends on the final goal, which, in our case, is to detect the relative perspective of a document to a claim.",
"For this purpose, we apply the following equation: o = h mean ( { c j } ); (cid:2) max( { P j cnn } ); mean ( { P j cnn } ) (cid:3) ; (cid:2) max( { P j lstm } ); mean ( { P j lstm } ) (cid:3) ; (cid:2) max( { P j tfidf } ); mean ( { P j tfidf } ) (cid:3)i (9) where mean ( { c j } ) is the average vector of c j representations.",
"Furthermore, we compute the maximum and the average similarity between each piece of evidence and the claim using P j tfidf , P j lstm and P j cnn , which are computed for each evidence and claim in the inference component F .",
"The maximum similarity identifies the part of document x j that is most similar to the claim, while the average similarity measures the overall similarity between the document and the claim.",
"This component computes the final stance of a document with respect to a claim.",
"For this purpose, the concatenation of vectors o , s lstm and s cnn is fed into a Multi-Layer Perceptron (MLP), where a softmax predicts the stance of the document with respect to the claim, as shown below (see also lines 2223 in Table 1): [ o ; s lstm ; s cnn ] MLP (10) where is a softmax function.",
"In addition to the resulting stance, we extract snippets from the input document that best indicate the perspective of the document with respect to the claim.",
"For this purpose, we use P j cnn and M 0 as explained in Section 2.3 (see also lines 2426 in Table 1).",
"The overall model is shown in Figure 2 and a summary of the model is presented in Table 1. All the model parameters, including those of ( i ) CNN and LSTM in I , ( ii ) the similarity matrices M and M 0 in F , and ( iii ) the MLP in R , are jointly learned during the training process.",
"We use the dataset provided by the Fake News Challenge, 1 where each example consists of a claimdocument pair with the following possible relations between them: agree (the document agrees with the claim), disagree (the document disagrees with the claim), discuss (the document discusses the same topic as the claim, but does not take a stance with respect to the claim), unrelated (the document discusses a different topic than the topic of the claim).",
"The data includes a total of 75.4K claimdocument pairs, which link 2.5K unique articles with 2.5K unique claims, i.e., each claim is associated with 29.8 articles on average.",
"We use 100-dimensional word embeddings from GloVe (Pennington et al., 2014), which were trained on two billion tweets.",
"We further use Adam as an optimizer and categorical cross-entropy as a loss.",
"We use 100-dimensional units for the LSTM embeddings, and 100 feature maps with filter width of 5 for the CNN.",
"We consider the first p = 9 paragraphs for each document, where p is the median of the number of paragraphs.",
"We optimize the hyper-parameters of the models using a validation dataset (20% of the training data).",
"Finally, as the data is largely imbalanced towards the unrelated class, during training, we randomly select an equal number of instances from each class at each epoch.",
"We use the following evaluation measures: Accuracy : The number of correctly classified examples divided by the total number of examples.",
"It is equivalent to micro-averaged F 1 .",
"Macro-F1 : We calculate F 1 for each class, and then we average across all classes.",
"Weighted Accuracy : This is a weighted, two-level scoring scheme, which is applied to each test example.",
"First, if the example is from the unrelated class and the model correctly predicts it, the score is incremented by 0.25; otherwise, if the example is related and the model predicts agree , disagree , or discuss , the score is incremented by 0.25.",
"Second, there is a further increment by 0.75 for each related example if the model predicts the correct label: agree , disagree , or discuss .",
"1 Available at www.fakenewschallenge.org Finally, the score is normalized by dividing it by the total number of test examples.",
"The rationale behind this metric is that the binary re-lated/unrelated classification task is expected to be much easier, while also being arguably less relevant to fake news detection than the stance detection task, which aims to further classify relevant instances as agree , disagree , or discuss .",
"Therefore, the former task is given less weight and the latter task is given more weight through the weighted accuracy metric.",
"Given the imbalanced nature of our data, we use two baselines in which we label all testing examples with the same label: ( i ) unrelated and ( ii ) discuss .",
"The former is the majority class baseline, which is a reasonable baseline for Accuracy and macro-F 1 , while the latter is a potentially better baseline for Weighted Accuracy .",
"We further use CNN and LSTM, and combinations thereof as baselines, since they form components of our model, and also because they yield state-of-the-art results for text, image, and video classification (Tan et al., 2016; Donahue et al., 2015; Zuo et al., 2015; Sainath et al., 2015).",
"Finally, we include the official baseline from the challenge, which is a Gradient Boosting classifier with word and n -gram overlap features, as well as indicators for refutation and polarity.",
"sMemNN : This is our model presented in Figure 2. Note that unlike the CNN+LSTM and the LSTM+CNN baselines above, which feed the output of one network into the other one, the sMemNN model feeds the individual outputs of both the CNN and the LSTM networks into the memory network, and lets it decide how much to rely on each of them.",
"This consideration also facilitates reasoning and explaining model predictions, as we will discuss in more detail below.",
"sMemNN (dotProduct) : This is a version of sMemNN, where the similarity matrices are replaced by the dot product between the representation of the claims and of the evidence.",
"For this purpose, we first project the claim representation to a dense layer that has the same size as the representation of each piece of evidence, and then we compute the dot product between the resulting representation and the representation of the evidence.",
"sMemNN (with TF) : Since our LSTM and CNN networks use a limited number of starting paragraphs 2 for an input document, we enrich our model with the BOW representation of documents and claims as well as their TF.IDF-based cosine similarity.",
"We concatenate these vectors with the memory outputs (section 2.5) and pass them to the R component (section 2.6) of sMemNN.",
"We expect these BOW vectors provide useful information that are not initially incorporated into the sMemNN model.",
"Table 2 reports the performance of all models on the test dataset.",
"The Allunrelated and the All-discuss baselines perform poorly across the evaluation measures, except for Allunrelated , which achieves high accuracy, which is due to unrelated being by far the dominant class in the dataset.",
"Next, we can see that the LSTM model consistently outperforms the CNN across all evaluation measures.",
"Although the larger number of parameters of the LSTM can play a role, we believe that its superiority comes from it being able to remember previously observed relevant pieces of text.",
"Next, we see systematic improvements for the combinations of the CNN and the LSTM models: CNN+LSTM is better than CNN alone, and LSTM+CNN is better than LSTM alone.",
"Better performance is achieved by the LSTM+CNN model, where claims and evidence are first processed by a LSTM and then fed into a CNN.",
"The Gradient Boosting model achieves sizable improvement over the above baseline neural models.",
"However, we should note that these neural models do not have the rich hand-crafted features that were used in the Gradient Boosting model.",
"Row 9 shows the results for our memory network model (sMemNN), which consistently outperforms all other baseline models across all evaluation metrics, achieving 10.62 and 3.77 points of absolute improvement in terms of Macro-F1 and Weighted Accuracy, respectively, over the best baseline (Gradient Boosting).",
"We believe this is due to the memory network being able to capture good text snippets.",
"As we will see below, these snippets are also useful for explaining the model's predictions.",
"Comparing row 9 to row 8, we can see the importance of our proposed similarity matrix: replacing that matrix by a simple dot product hurts the performance of the model considerably across all evaluation measures, thus lowering it to the level of the Gradient Boosting model.",
"Finally, row 10 shows the results for our memory network model enriched by BOW representation.",
"As we expected, it improves the performance of sMemNN perhaps by capturing useful information from paragraphs beyond the starting few.",
"To put the results of sMemNN in perspective, we should mention that the best system at the Fake News Challenge (Baird et al., 2017) achieved a macro-F1 of 57.79, which is not significantly different from our results at the 0.05 significance level (p-value=0.53).",
"Yet, they have an ensemble combining the feature-rich Gradient Boosting system with neural networks.",
"In contrast, we only use raw text as input and no ensembles, and our main goal is to study a new memory network model and its explainability component.",
"Further analysis of the outputs (namely, the confusion matrices) of the different models we experimented with reveals the following general trends: ( i ) The unrelated examples are easy to detect, and most models show high performance for this class.",
"( ii )",
"The agree and the disagree examples are often misclassified as discuss by the baseline models.",
"This is mainly because the document that discusses a claim often shares the same topic with the claim, but then it does not take a stance with respect to that claim.",
"( iii )",
"The disagree examples are the most difficult ones for all models, probably because they represent by far the smallest class.",
"As discussed previously, we balance the data at each training iteration by randomly selecting z instances from each of the four target classes, where z is the size of the class with the minimum number of training instances.",
"Here, we investigate the amount of training data that gets actually used.",
"For this purpose, at each training iteration, we report the proportion of the training instances from each class that have been used for training so far, either at the current or at any of the previous iterations.",
"As Figure 4 shows, our random data sampling procedure eventually covers almost all training instances.",
"Since the disagree class is the smallest, its instances remain fully covered throughout the process.",
"Moreover, almost all other related instances, i.e., agree and discuss , are observed during training, as well as a large fraction of the dominating unrelated examples.",
"Note that the model achieves its best (lowest) loss on the validation dataset at iteration 31, when almost all related training instances are observed.",
"This happens while the corresponding fraction for the unrelated pairs is around 50%, i.e., a considerable number of the unrelated instances are not required to be used for training.",
"One of the main advantages of our memory network model, compared to the baselines and to related work in general, is that it has the capacity to explain its predictions by extracting snippets from each piece of evidence that supports its prediction.",
"As we explained in Section 2.3, our inference component predicts the similarity between each piece of evidence x j and the claim s at the n grams level using the similarity matrix M 0 and the claim-evidence similarity vector P jcnn .",
"Below, we explore our model's explainability in more detail.",
"Table 3 shows examples of two claims and the snippets extracted as evidence.",
"Column P jcnn shows the overall similarity between the evidence and the corresponding claim as computed by the inference component of our model.",
"The highlighted texts are the snippets with the highest similarity to the claim as extracted by the same component.",
"The values on the snippets' top-right show the claim-snippet similarity values obtained by the inference component.",
"Note that all snippets are fixed-length, namely 5-grams; however, in case there are several consecutive n -grams with similar scores, for better illustration, we combine them into a single snippet and we report their average values (see the snippet for evidence 2069-3).",
"As these examples show, our model can accurately predict the stance of these pieces of evidence against their corresponding claims.",
"Also, claim 2 and its corresponding evidence are shown at the second row of Table 3. As this example shows, the similarity values associated with snippets are either too small or negative, e.g., see the similarity value for the snippet biologist has killed off claims .",
"We can see that these help the model to make accurate predictions.",
"We conduct the following experiment to quantify the performance of our memory network at explaining its predictions: we randomly sample 100 agree / disagree claimdocument examples from our gold data, and we manually evaluate the top five pieces of evidence that our model provides to support/oppose the corresponding claims.",
"3 3 In 76 cases, our model correctly classified the agree / disagree examples when the evaluation was conducted, and it further provided arguably adequate snippets.",
"Figure",
"5(a) shows the precision of our memory network model at explaining its predictions when each supporting/opposing piece of evidence is an n -gram snippet of fixed length ( n = 5 ) for the agree and the disagree classes, and their combinations at the topk ranks, k = { 1 , . . . , 5 } .",
"We can see in the figure that the model achieves precision of 0.28, 0.32, 0.35, 0.25, and 0.33 at ranks 15.",
"Moreover, we find that it can accurately identify useful key phrases such as officials declared the video , according to previous reports , believed will come , president in his tweets as supporting pieces of evidence, and proved a hoax , shot down a cnn report , would be skeptical as opposing pieces of evidence.",
"Note that this relatively low precision of our memory network model at explaining its agree / disagree predictions is mainly due to the unsupervised nature of this task as no gold snippets justifying the document's gold stance with respect to the target claim are available in the Fake News Challenge dataset.",
"4 Furthermore, our evaluation setup is at the n gram level in Figure",
"5(a).",
"However, if we conduct a more coarse-grained evaluation where we combine consecutive n -grams with similar scores into a single snippet, the precision for these new snippets will improve to 0.40, 0.38, 0.42, 0.38, and 0.42 at ranks 15, as Figure",
"5(b) shows.",
"If we further extend the evaluation to the sentence level, the precision will jump to 0.60, 0.58, 0.55, 0.62, and 0.57 at ranks 15, as we can see in Figure",
"5(c).",
"4 Some other recent datasets, to be presented at this same HLT-NAACL'2018 conference, do have such gold evidence annotations (Baly et al., 2018; Thorne et al., 2018).",
"While stance detection is an interesting task in its own right, e.g., for media monitoring, it is also an important component for fact checking and veracity inference.",
"5 Automatic fact checking was envisioned by Vlachos and Riedel (2014) as a multi-step process that ( i ) identifies check-worthy statements (Hassan et al., 2015; Gencheva et al., 2017; Jaradat et al., 2018), ( ii ) generates questions to be asked about these statements (Karadzhov et al., 2017), ( iii ) retrieves relevant information to create a knowledge base (Shiralkar et al., 2017), and ( iv ) infers the veracity of these statements, e.g., using text analysis (Castillo et al., 2011; Rashkin et al., 2017) or information from external sources (Mihaylova et al., 2018; Karadzhov et al., 2017; Popat et al., 2017).",
"There have been some nuances in the way researchers have defined the stance detection task.",
"SemEval-2016 Task 6 (Mohammad et al., 2016) targets stances with respect to some target proposition, e.g., entities, concepts or events, as in-favor , against , or neither .",
"The winning model in the task was based on transfer learning: a Recurrent Neural Network trained on a large Twitter corpus was used to predict task-relevant hashtags and to initialize a second recurrent neural network trained on the provided dataset for stance prediction (Zarrella and Marsh, 2016).",
"Subsequently, Zubiaga et al. (2016) detected the stance of tweets toward rumors and hot topics using linear-chain conditional random fields (CRFs) and tree CRFs that analyze tweets based on their position in treelike conversational threads.",
"Most commonly, stance detection is defined with respect to a claim , e.g., as in the 2017 Fake News Challenge.",
"The best system in the challenge was an ensemble of gradient-boosted decision trees with rich features (e.g., sentiment , word2vec , singular value decomposition ( SVD ) and TF.IDF features,",
"etc.) and a deep convolutional neural network to address the stance detection problem (Baird et al., 2017).",
"Unlike the above work, we use a feature-light memory network that jointly infers the stance and highlights relevant snippets of evidence with respect to a given claim.",
"5 Yet, stance detection and fact checking are typically supported by separate datasets.",
"Two notable upcoming exceptions, both appearing in this HLT-NAACL'2018, are (Thorne et al., 2018) for English and (Baly et al., 2018) for Arabic.",
"We studied the problem of stance detection, which aims to predict whether a given document supports, challenges, or just discusses a given claim.",
"The nature of the task requires a machine learning model to focus on the relevant paragraphs of the evidence.",
"Moreover, in order to understand whether a paragraph supports a claim, there is a need to refer to information in other paragraphs.",
"CNNs or LSTMs are not well-suited for this task as they cannot model complex dependencies such as semantic relationships with respect to entire previous paragraphs.",
"In contrast, memory networks are exactly designed to remember previous information.",
"However, given the large size of documents and paragraphs, basic memory networks do not handle well irrelevant and noisy information, which we confirmed in our experiments.",
"Thus, we proposed a novel extension of general memory networks based on a similarity matrix and a stance filtering component, which we apply at the inference level, and we have shown that this extension offers sizable performance gains making memory networks competitive.",
"Moreover, our model can extract meaningful snippets from documents that can explain the stance of a given claim.",
"In future work, we plan to extend the inference component to select an optimal set of explanations for each prediction, and to explain the model as a whole, not only at the instance level.",
"This research was carried out in collaboration between the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and the HBKU Qatar Computing Research Institute (QCRI)."
] | [
"method",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"objective",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"method",
"method",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"objective",
"other"
] |
[
"Igor Malioutov Bloomberg L.P. [email protected]",
"Abstract",
"While the predictive performance of modern statistical dependency parsers relies heavily on the availability of expensive expert-annotated treebank data, not all annotations contribute equally to the training of the parsers.",
"In this paper, we attempt to reduce the number of labeled examples needed to train a strong dependency parser using batch active learning (AL).",
"In particular, we investigate whether enforcing diversity in the sampled batches, using determinantal point processes (DPPs), can improve over their diversity-agnostic counterparts.",
"Simulation experiments on an English newswire corpus show that selecting diverse batches with DPPs is superior to strong selection strategies that do not enforce batch diversity, especially during the initial stages of the learning process.",
"Additionally, our diversity-aware strategy is robust under a corpus duplication setting, where diversity-agnostic sampling strategies exhibit significant degradation.",
"Though critical to parser training, data annotations for dependency parsing are both expensive and time-consuming to obtain.",
"Syntactic analysis requires linguistic expertise and even after extensive training, data annotation can still be burdensome.",
"The Penn Treebank project (Marcus et al., 1993) reports that after two months of training, the annotators average 750 tokens per hour on the bracketing task; the Prague Dependency Treebank (Bohmova et al., 2003) cost over $ 600 , 000 and required 5 years to annotate roughly 90 , 000 sentences (over $ 5 per sentence).",
"These high annotation costs present a significant challenge to developing accurate dependency parsers for under-resourced languages and domains.",
"Active learning (AL; Settles, 2009) is a promising technique to reduce the annotation effort required to train a strong dependency parser by intelWork done during an internship at Bloomberg L.P. ligently selecting samples to annotate such that the return of each annotator hour is as high as possible.",
"Popular selection strategies, such as uncertainty sampling, associate each instance with a quality measure based on the uncertainty or confidence level of the current parser, and higher-quality instances are selected for annotation.",
"We focus on batch mode AL, since it is generally more efficient for annotators to label in bulk.",
"While early work in AL for parsing (Tang et al., 2002; Hwa, 2000, 2004) cautions against using individually-computed quality measures in the batch setting, more recent work demonstrates empirical success (e.g., Li et al., 2016) without explicitly handling intra-batch diversity .",
"In this paper, we explore whether a diversity-aware approach can improve the state of the art in AL for dependency parsing.",
"Specifically, we consider samples drawn from determinantal point processes (DPPs) as a query strategy to select batches of high-quality, yet dissimilar instances (Kulesza and Taskar, 2012).",
"In this paper, we (1) propose a diversity-aware batch AL query strategy for dependency parsing compatible with existing selection strategies, (2) empirically study three AL strategies with and without diversity factors, and (3) find that diversity-aware selection strategies are superior to their diversity-agnostic counterparts, especially during the early stages of the learning process, in simulation experiments on an English newswire corpus.",
"This is critical in low-budget AL settings, which we further confirm in a corpus duplication setting.",
"1 2 Active Learning for Dependency Parsing 2.1 Dependency Parsing Dependency parsing (Kubler et al., 2008) aims to find the syntactic dependency structure, y , given a lengthn input sentence x = x 1 , x 2 , . . . , x n , where 1 Our code is publicly available at https://github.com/ tzshi/dpp-al-parsing-naacl21 .",
"y is a set of n arcs over the tokens and the dummy root symbol x 0 , and each arc ( h, m ) y specifies the head, h , and modifier word, m .",
"2 In this work, we adopt the conceptually-simple edge-factored deep biaffine dependency parser (Dozat and Manning, 2017), which is competitive with the state of the art in terms of accuracy, The parser assigns a locally-normalized attachment probability P att ( head ( m ) = h | x ) to each attachment candidate pair ( h, m ) based on a biaffine scoring function.",
"Refer to Appendix A for architecture details.",
"We define the score of the candidate parse tree s ( y | x ) as (cid:80) ( h,m ) y log P att ( head ( m ) = h | x ) .",
"The decoder finds the best scoring y among all valid trees Y ( x ) : y = arg max y Y ( x ) s ( y | x ) .",
"We consider the pool-based batch AL scenario where we assume a large collection of unlabeled instances U from which we sample a small subset at a time to annotate after each round to form an expanding labeled training set L (Lewis and Gale, 1994).",
"We use the superscript i to denote the pool of instances U i and L i after the i -th round.",
"L 0 is a small set of seed labeled instances to initiate the process.",
"Each iteration starts with training a model M i based on L i .",
"Next, all unlabeled data instances in U i are parsed by M i and we select a batch U (cid:48) to annotate based on some criterion U (cid:48) = C ( M i , U i ) .",
"The resulting labeled subset L (cid:48) is added to L i +1 = L i (cid:83) L (cid:48) and U i +1 = U i U (cid:48) .",
"The definition of the selection criterion C is critical.",
"A typical strategy associates each unlabeled instance U i with a quality measure q i based on, for example, the model uncertainty level when parsing U i .",
"A diversity-agnostic criterion sorts all unlabeled instances by their quality measures and takes the topk as U (cid:48) for a budget k .",
"We consider three commonly-used quality measures adapted to the task of dependency parsing, including uncertainty sampling, Bayesian active learning, and a representativeness-based strategy.",
"( h,m ) y and P ( y | x ) = exp( s ( y | x )) (cid:80) y (cid:48)Y ( x ) exp( s ( y (cid:48) | x )) .",
"The marginal probabilities can be derived efficiently using Kirch-hoff's theorem (Tutte, 1984; Koo et al., 2007).",
"Bayesian Active Learning by Disagreement (BALD) measures the mutual information between the model parameters and the predictions.",
"We adopt the Monte Carlo dropout-based variant (Gal et al., 2017; Siddhant and Lipton, 2018) and measure the disagreement among predictions from a neural model with K different dropout masks, which has been applied to active learning in NLP.",
"We adapt BALD to dependency parsing by aggregating disagreement at a token level: BALD = 1 1 n (cid:80) m count ( mode ( h 1 m ,...,h Km )) K , where h km denotes that ( h km , m ) appears in the prediction given by the k -th model.",
"Information Density (ID) mitigates the tendency of uncertainty sampling to favor outliers by weighing examples by how representative they are of the entire dataset (Settles and Craven, 2008): ID = AMP (cid:16) 1 |U| (cid:80) x (cid:48) U sim cos ( x, x (cid:48) ) (cid:17) , where cosine similarity is computed from the averaged contextualized features (3.2).",
"We follow Li et al. (2016) and select tokens to annotate their heads instead of annotating full sentences.",
"We first pick the most informative sentences and then choose p % tokens from them based on the token-level versions of the quality measures (e.g., marginal probability instead of AMP).",
"Near-duplicate examples are common in real-world data (Broder et al., 1997; Manku et al., 2007), but they provide overlapping utility to model training.",
"In the extreme case, with a diversity-agnostic strategy for active learning, identical examples will be selected/excluded at the same time (Hwa, 2004).",
"To address this issue and to best utilize the annotation budget, it is important to consider diversity.",
"We adapt Byk et al. (2019) to explicitly model diversity using determinantal point processes (DPPs).",
"A DPP defines a probability distribution over subsets of some ground set of elements (Kulesza, 2012).",
"In AL, the ground set is the unlabeled pool U and a subset corresponds to a batch of instances U (cid:48) drawn from U .",
"DPPs provide an explicit mechanism to ensure high-quality yet diverse sample selection by modeling both the quality measures and the similarities among examples.",
"We adopt the L -ensemble representation of DPPs using the quality-diversity decomposition (Kulesza and Taskar, 2012) and parameterize the matrix L as L ij = q i i Tj q j , where each q i R is the quality measure for U i and each i R 1 d is a d -dimensional vector representation of U i , which we refer to as U i 's diversity features .",
"3 The probability of selecting a batch B is given by P ( B U ) det( LB ) , where det( ) calculates the determinant and LB is the submatrix of L indexed by elements in B .",
"DPPs place high probability on diverse subsets of high-quality items.",
"Intuitively, the determinant of LB corresponds to the volume spanned by the set of vectors { q i i | i B } , and subsets with larger q values and orthogonal vectors span larger volumes than those with smaller q values or similar vectors.",
"We follow Kulesza (2012) and adapt their greedy algorithm for finding the approximate mode arg max BP ( B U ) .",
"This algorithm is reproduced in Algorithm E1 in the appendix.",
"Averaged Contextualized Features are defined as 1 n (cid:80) i x i , where x i is a contextualized vector of x i from the feature extractor used by the parser.",
"By this definition, we consider the instances to be similar to each other when the neural feature extractor returns similar features such that the parser is likely to predict similar structures for these instances.",
"Predicted Subgraph Counts explicitly represent the predicted tree structure.",
"To balance richness and sparsity, we count the labeled but unlex-icalized subgraph formed by the grandparent, the parent and the token itself.",
"Specifically, for each token m , we can extract a subgraph denoted by ( r 1 , r 2 ) , assuming the predicted dependency relation between its grandparent g and its parent h is r 1 , and the relation between h and m is r 2 .",
"The parse tree for a lengthn sentence contains n such subgraphs.",
"We apply tf-idf weighting to discount 3 Although certain applications of DPPs may learn q and representations from supervision, we define q and a priori , since acquiring supervision in AL is, by definition, expensive.",
"Dataset We use the Revised English News Text Treebank 4 (Bies et al., 2015) converted to Universal Dependencies 2.0 using the conversion tool included in Stanford Parser (Manning et al., 2014) version 4.0.0.",
"We use sections 02-21 for training, 22 for development and 23 for test.",
"Setting We perform experiments by simulating the annotation process using treebank data.",
"We sample 128 sentences uniformly for the initial labeled pool and each following round selects 500 tokens for partial annotation.",
"We run each setting five times using different random initializations and report the means and standard deviations of the labeled attachment scores (LAS).",
"Appendix B has unlabeled attachment score (UAS) results.",
"Baselines While we construct our own baselines for self-contained comparisons, the diversity-agnostic AMP (w/o DPP) largely replicates the state-of-the-art selection strategy of Li et al. (2016).",
"Implementation We finetune a pretrained multilingual XLM-RoBERTa base model (Conneau et al., 2020) as our feature extractor.",
"5 See Appendix E for implementation details.",
"Main Results Table 1 compares LAS after 5 and 10 rounds of annotation.",
"Our dependency parser reaches 95 .",
"64 UAS and 94 .",
"06 LAS, when trained with the full dataset (more than one million tokens).",
"Training data collected from 30 annotation rounds ( 17 , 500 tokens) correspond to roughly 2% of the full dataset, but already support an LAS of up to 92 through AL.",
"We find that diversity-aware strategies generally improve over their diversity-agnostic counterparts.",
"Even for a random selection strategy, ensuring diversity with a DPP is superior 4 https://catalog.ldc.upenn.edu/LDC2015T13 5 To construct the averaged contextualized features, we also use the fine-tuned feature extractor.",
"In our preliminary experiments, we have tried freezing the feature extractors, but this variant did not perform as well.",
"to simple random selection.",
"With AMP and BALD, our diversity-aware strategy sees a larger improvement earlier in the learning process.",
"ID models representativeness of instances, and our diversity-aware strategy adds less utility compared with other quality measures, although we do notice a large improvement after the first annotation round for ID: 82 .",
"40 .",
"48 vs. 83 .",
"36 .",
"54 (w/ DPP) a similar trend to AMP and BALD, but at an earlier stage of AL.",
"Experiments with Different Diversity Features Figure 1 compares our two definitions of diversity features, and we find that predicted subgraph counts provide stronger performance than that of averaged contextualized features.",
"We hypothesize this is due to the fact that the subgraph counts represent structures more explicitly, thus they are more useful in maintaining structural diversity in AL.",
"Intra-Batch Diversity To quantify intra-batch diversity among the set of sentences B picked by the selection strategies, we adapt the measures used by Chen et al. (2018) and define intra-batch average distance (IBAD) and intra-batch minimal distance (IBMD) as follows: IBAD = mean i,j B,i (cid:54) = j (1 sim cos ( i, j )) , IBMD = mean i B min j B,i (cid:54) = j (1 sim cos ( i, j )) .",
"A higher value on these measures indicates better intra-batch diversity.",
"Figure 2 compares diversity-agnostic and diversity-aware sampling strategies using the two different diversity features.",
"We confirm that DPPs indeed promote diverse samples in the selected batches, while intra-batch diversity naturally increases even for the diversity-agnostic strategies.",
"Additionally, we observe that the bene-fits of DPPs are more prominent when using pre-5 10 15 20 0 0 .",
"dicted subgraph counts compared with averaged contextualized features.",
"This can help explain the relative success of the former diversity features.",
"Corpus Duplication Setting In our qualitative analysis (Appendix C), we find that diversity-agnostic selection strategies tend to select near-duplicate sentences.",
"To examine this phenomenon in isolation, we repeat the training corpus twice and observe the effect of diversity-aware strategies.",
"The corpus duplication technique has been previously used to probe semantic models (Schofield et al., 2017).",
"Figure 3 shows learning curves for strategies under the original and corpus duplication settings.",
"As expected, diversity-aware strategies consistently outperform their diversity-agnostic counterparts across both settings, while some diversity-agnostic strategies (e.g., AMP) even underperform uniform random selection in the duplicated setting.",
"Interpreting the Effectiveness of Diversity-Ag-nostic Models Figure 4 visualizes the density distributions of the top 200 data instances by AMP over the diversity feature space reduced to two dimensions through t-SNE (van der Maaten and Hinton, 2008).",
"During the initial stage of active learning, data with the highest quality measures are con-10 20 30 82 84 86 88 90 92 LAS Random AMP AMPw/DPP AMP(dup) AMPw/DPP(dup) 10 20 30 82 84 86 88 90 92 Random BALD BALDw/DPP BALD(dup) BALDw/DPP(dup) 10 20 30 82 84 86 88 90 92 Random ID IDw/DPP ID(dup) IDw/DPP(dup) Figure 3: Learning curves of different sampling strategies based on AMP (left), BALD (middle) and ID (right), comparing diversity-aware (w/ DPP) and diversity-agnostic variants using the original and duplicated corpus (dup).",
"60 40 20 0 20 40 60 60 40 20 0 20 40 60 60 40 20 0 20 40 60 10 20 30 40 50 60 70 80 counts Figure 4: t-SNE visualization of the distributions of the 200 highest-quality unlabeled sentences over the diversity feature space after the 1 st (left) and the 10 th (right) annotation rounds using AMP without DPPs.",
"Darker region indicates more data points residing in that diversity feature neighborhood.",
"The left figure contains a dense region, while the data in the right figure are spread out in the feature space.",
"centrated within a small neighborhood.",
"A diversity-agnostic strategy will sample similar examples for annotation.",
"After a few rounds of annotation and model training, the distribution of high quality examples spreads out, and an AMP selection strategy is likely to sample a diverse set of examples without explicitly modeling diversity.",
"Our analysis corroborates previous findings (Thompson et al., 1999) that small annotation batches are effective early in uncertainty sampling, avoiding selecting many near-duplicate examples when intra-batch diversity is low, but a larger batch size is more efficient later in training once intra-batch diversity increases.",
"Modeling diversity in batch-mode AL (Brinker, 2003) has recently attracted attention in the machine learning community.",
"Kirsch et al. (2019) introduce a Bayesian batch-mode selection strategy by estimating the mutual information between a set of samples and the model parameters.",
"Ash et al. (2020) present a diversity-inducing sampling method using gradient embeddings.",
"Most related to our work, Byk et al. (2019) first apply DPPs to batch-mode AL.",
"Building on their approach, we flesh out a DPP treatment for AL for a structured prediction task, dependency parsing.",
"Previously, Shen et al. (2018) consider named entity recognition but they report negative results for a diversity-inducing variant of their sampling method.",
"Due to the high annotation cost, AL is a popular technique for parsing and parse selection (Osborne and Baldridge, 2004).",
"Recent advances focus on reducing full-sentence annotations to a subset of tokens within a sentence (Sassano and Kurohashi, 2010; Mirroshandel and Nasr, 2011; Majidi and Crane, 2013; Flannery and Mori, 2015; Li et al., 2016).",
"We show that AL for parsing can further benefit from diversity-aware sampling strategies.",
"DPPs have previously been successfully applied to the tasks of extractive text summarization (Cho et al., 2019a,b) and modeling phoneme inventories (Cotterell and Eisner, 2017).",
"In this work, we show that DPPs also provide a useful framework for understanding and modeling quality and diversity in active learning for NLP tasks.",
"We show that compared with their diversity-agnostic counterparts, diversity-aware sampling strategies not only lead to higher data efficiency, but are also more robust under corpus duplication settings.",
"Our work invites future research into methods, utility and success conditions for modeling diversity in active learning for NLP tasks.",
"We thank the anonymous reviewers for their insightful reviews, and Prabhanjan Kambadur, Chen-Tse Tsai, and Minjie Xu for discussion and comments.",
"Tianze Shi acknowledges support from Bloomberg's Data Science Ph.D.",
"Fellowship."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"objective",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"result",
"abstain",
"other",
"other",
"other"
] |
[
"We present a new neural model for text summarization that first extracts sentences from a document and then compresses them.",
"The proposed model offers a balance that sidesteps the difficulties in abstractive methods while generating more concise summaries than extractive methods.",
"In addition, our model dynamically determines the length of the output summary based on the gold summaries it observes during training, and does not require length constraints typical to extractive summarization.",
"The model achieves state-of-the-art results on the CNN/DailyMail and Newsroom datasets, improving over current extractive and abstractive methods.",
"Human evaluations demonstrate that our model generates concise and informative summaries.",
"We also make available a new dataset of oracle compressive summaries derived automatically from the CNN/DailyMail reference summaries.",
"1 1 Introduction Text summarization is an important NLP problem with a wide range of applications in data-driven industries (e.g., news, health, and defense).",
"Single document summarizationthe task of generating a short summary of a document preserving its informative content (Sparck Jones, 2007)has been a highly studied research topic in recent years (Nallapati et al., 2016b; See et al., 2017; Fan et al., 2018; Pasunuru and Bansal, 2018).",
"(EXCONSUMM Extractive) (CNN) A top al Qaeda in the Arabian Peninsula leaderwho a few years ago was in a U.S. detention facilitywas among five killed in an airstrike in Yemen, the terror group said, showing the organization is vulnerable even as Yemen appears close to civil war.",
"Ibrahim al-Rubaish died Monday night in what AQAP's media wing, Al-Malahem Media, called a crusader airstrike. (EXCONSUMM Compressive) (CNN) A top al Qaeda in the Arabian Peninsula leaderwho a few years ago was in a U.S. detention facilitywas among five killed in an airstrike in Yemen , the terror group said, showing the organization is vulnerable even as Yemen appears close to civil war.",
"have primarily focused on two strategies: extractive and abstractive .",
"The former select a subset of the sentences to assemble a summary (Cheng and Lapata, 2016; Nallapati et al., 2017; Narayan et al., 2018a,c).",
"The latter generates sentences that do not appear in the original document (See et al., 2017; Narayan et al., 2018b; Paulus et al., 2018).",
"Both methods suffer from significant drawbacks: extractive systems are wasteful since they cannot trim the original sentences to fit into the summary, and they lack a mechanism to ensure overall coherence.",
"In contrast, abstractive systems require natural language generation and semantic representation, problems that are inherently harder to solve than just extracting sentences from the original document.",
"In this paper, we present a novel architecture that attempts to mitigate the problems above via a middle ground, compressive summarization (Martins and Smith, 2009).",
"Our model selects a set of sentences from the input document, and compresses them by removing unnecessary words, while keeping the summaries informative, concise and grammatical.",
"We achieve this by dynamically modeling the generated summary using a Long Short Term Memory (LSTM; Hochreiter and Schmidhuber, 1997) to produce summary state representations .",
"This state provides crucial information to iteratively increment summaries based on previously extracted information.",
"It also facilitates the generation of variable length summaries as opposed to fixed lengths, in previous extractive systems (Cheng and Lapata, 2016; Nallapati et al., 2017; Narayan et al., 2018c; Zhang et al., 2018).",
"Our model can be trained in both extractive (labeling sentences for extraction) or compressive (labeling words for extraction) settings.",
"Figure 1 shows a summary example generated by our model.",
"Our contributions in this paper are three-fold: we present the first end-to-end neural architecture for EXtractive and COmpressive Neural SUMMarization (dubbed EXCONSUMM , see 3), we validate this architecture on the CNN/DailyMail and the Newsroom datasets (Hermann et al., 2015; Grusky et al., 2018), showing that our model generates variable-length summaries which correlate well with gold summaries in length and are concise and informative (see 5), and we provide a new CNN/DailyMail dataset annotated with automatic compressions for each sentence, and a set of compressed oracle summaries (see 4).",
"Experimental results show that when evaluated automatically, both the extractive and compressive variants of our model provide state-of-the-art results.",
"Human evaluation further shows that our model is better than previous state-of-the-art systems at generating informative and concise summaries.",
"Recent work on neural summarization has mainly focused on sequence-to-sequence (seq2seq) architectures (Sutskever et al., 2014), a formulation particularly suited and initially employed for abstractive summarization (Rush et al., 2015).",
"However, state-of-the-art results have been achieved by RNN-based methods which are extractive.",
"They select sentences based on an LSTM classifier that predicts a binary label for each sentence (Cheng and Lapata, 2016), based on ranking using reinforcement learning (Narayan et al., 2018c), or even by training an extractive latent model (Zhang et al., 2018).",
"Other methods rely on an abstractive approach with strongly conditioned generation on the source document (See et al., 2017).",
"In fact, the best results for abstractive summarization have been achieved with models that are more extractive in nature than abstractive, since most of the words in the summary are copied from the document (Gehrmann et al., 2018).",
"Due to the lack of training corpora, there is almost no work on neural architectures for compressive summarization.",
"Most compressive summarization work has been applied to smaller datasets (Martins and Smith, 2009; Berg-Kirkpatrick et al., 2011; Almeida and Martins, 2013).",
"Other non-neural summarization systems apply this idea to select and compress the summary.",
"Dorr et al. (2003) introduced a method to first extract the first sentence of a news article and then use linguistically-motivated heuristics to iteratively trim parts of it.",
"Durrett et al. (2016) also learns a system that selects textual units to include in the summary and compresses them by deleting word spans guided by anaphoric constraints to improve coherence.",
"Recently, Zhang et al. (2018) trained an abstractive sentence compression model using attention-based sequence-to-sequence architecture (Rush et al., 2015) to map a sentence in the document selected by the extractive model to a sentence in the summary.",
"However, as the sentences in the document and in the summary are not aligned for compression, their compression model is significantly inferior to the extractive model.",
"In this paper, we propose a novel seq2seq architecture for compressive summarization and demonstrate that it avoids the over-extraction of existing extractive approaches (Cheng and Lapata, 2016; Dlikman and Last, 2016; Nallapati et al., 2016a).",
"Our model builds on recent approaches to neural extractive summarization as a sequence labeling problem, where sentences in the document are labeled to specify whether or not they should be included in the summary (Cheng and Lapata, 2016; Narayan et al., 2018a).",
"These models often condition their labeling decisions on the document representation only.",
"Nallapati et al. (2017) tries to model the summary as the average representation = 1 ... ... ... ...",
"of the positively labeled sentences.",
"However, as we show later, this strategy is not the most adequate to ensure summary coherence, as it does not take the order of the selected sentences into account.",
"Our approach addresses this problem by maintaining an LSTM cell to dynamically model the generated summary.",
"To the best of our knowledge, our work is the first to use a model that keeps a state of already generated summary to effectively model variable-length summaries in an extractive setting, and the first to learn a compressive summarizer with an end-to end approach.",
"Our model extracts sentences from a given document and further compresses these sentences by deleting words.",
"More formally, we denote a document D = ( s 1 , . . . , s M ) as a sequence of M sentences, and a sentence s i = ( w i 1 , . . . , w iN ) as a sequence of N words.",
"We denote by e ( w ij ) , e ( s i ) and e ( D ) the embedding of words, sentences and document in a continuous space.",
"We model document summarization as a sequence labeling problem where the labeler transitions between internal states.",
"Each state is dynamically computed based on the context, and it combines an extractive summarizer followed by a compressive one.",
"First, we encode a document in a multi-level approach, to extract the embeddings of words and sentences (Document Encoder).",
"Second, we decode these embeddings using a hierarchical De-cision Decoder.",
"The extractive summarizer labels each sentence s i with a label z i { 0 , 1 } where 1 indicates that the sentence should be included in the final summary and 0 otherwise.",
"An extractive summary is then assembled by selecting all sentences with the label 1 .",
"Analogously, the compressive summarizer labels each word w ij with a label y ij { 0 , 1 } , denoting whether the word j in sentence i is included in the summary or not.",
"The final summary is then assembled as the sequence of words w ij for each z i = 1 and y ij = 1 .",
"See Figures 2 and 3 for an overview of our model.",
"We next describe each of its components in more detail.",
"The document encoder is a two layer biLSTM, one layer encoding each sentence, and the second layer encoding the document.",
"The first layer takes as input the word embeddings e ( w ij ) for each word j in sentence s i , and outputs the hidden representa-SentStates Compressive decoder Extractive decoder WordStates Figure 3: Decision decoder architecture.",
"tion of each word h wij .",
"The hidden representation consist of the concatenation of a forward h wij and a backward h wij LSTM (WordEncoder in Figure 2).",
"This layer eventually outputs a representation for each sentence e ( s i ) = [ h wiN , h wi 1 ] that corresponds to the concatenation of the last forward and first backward LSTMs.",
"The second layer encodes information about the document and is also a biLSTM that runs at the sentence-level.",
"This biLSTM takes as input the sentence representation from the previous layer e ( s i ) and outputs the hidden representation for each sentence s i in the document as h si (SentEncoder in Figure 2).",
"We consider the output of the last forward LSTM over M sentences and first backward LSTM to be the final representation of the document e ( D ) = [ h sM , h s 1 ] .",
"The encoder returns two output vectors, d si = [ e ( D ) , e ( s i ) , h si ] associated with each sentence s i , and d wij = [ e ( D ) , e ( s i ) , e ( w ij ) , h si , h wij ] for each word j at the specific state of the encoder i .",
"Given that our model operates both at the sentence-level and at the word-level, the decision decoder maintains two state LSTMs denoted by SentStates and WordStates as in Figure 3.",
"For the sentence-level decoder sentences are selected and the state of the summary gets updated by SentStates .",
"For the word-level, all compressed word representations in a sentence are pushed to the word-level layer.",
"In the compressive decoder, words that get selected are pushed onto the WordStates , and once the decoder has reached the end of the sentence, it pushes the output representation of the last state onto the sentence-level layer for the next sentence.",
"Extractive Decoder The extractive decoder selects the sentences that should go to the summary.",
"For each sentence s i at time step i , the decoder takes a decision based on the encoder representation d si and the state of the summary o si , computed as follows: o si = SentStates ({ e ( c k )} k < i,z k = 1 ) .",
"where the o si is modeled by an LSTM taking as input the already selected and compressed sentences comprising the summary so far { e ( c k )} k < i,z k = 1 .",
"This way, at each point in time, we have a representation of the summary given by the SentStates LSTM that encodes the state of summary generated so far, based on the past sentences already processed by the compressive decoder e ( c i 1 ) (in WordStates ).",
"2 The summary representation at step i ( o si ) is then used to determine whether to keep or not the current sentence in the summary ( z i = 1 or 0 respectively).",
"The summarizer state subsumes information about the document, sentence and summary as: p i = tanh ( WE [ d si ; o si ] + b s ) , where WE is a model parameter, o si is the dynamic LSTM state, and b s is a bias term.",
"This modeling decision is crucial in order to generate variable length summaries.",
"It captures information about the sentences or words already present in the summary, helping in better understanding the true length of the summary given the document.",
"Finally, the summarizer state p i is used to compute the probability of the action at time i as: p ( z i p i ) = exp ( W z i p i + x z i ) z { 0 , 1 } exp ( W z p i + x z ) , 2 When using only the extractive model the summary state o si is generated from an LSTM whose inputs correspond to the sentence encoded embeddings { e ( s k )} k < i,z k = 1 instead of the previously generated compressed representations { e ( c k )} k < i,z k = 1 .",
"where W z is a model parameter and x z is a bias term for the summarizer action z .",
"We minimize the negative log-likelihood of the observed labels at training time (Dimitroff et al., 2013), where s 0 and s 1 represent the distribution of each class for the given sentences: 3 L ( s ) = c { 0 , 1 } sc M i = 1 1 z i = c i,z i = 0 log p ( z i p i ) , where 1 z i = c is the indicator function of class c and s represents all the training parameters of the sentence encode/decoder.",
"At test time, the model emits probability p ( z i p i ) , which is used as the soft prediction sequentially extracting the sentence i .",
"We admit sentences when p ( z i = 1 p i ) > 0 .",
"5 .",
"Compressive Decoder Our compressive decoder shares its architecture with the extractive decoder.",
"The compressive layer is triggered every time a sentence is selected in the summary and is responsible for selecting the words within each selected sentence.",
"In practice, WordStates LSTM (see Figure 3) is applied hierarchically after the sentence-level decoder, using as input the collected word embeddings so far: o wij = WordStates ({ e ( w ik )} k j,y ik = 1 ) .",
"After making the selection decision for all words pertaining to a sentence, the final state of the WordStates , e ( c i ) = o wiN is fed back to SentStates of the extractive level decoder for the consecutive sentence, as depicted in Figure 3.",
"The word-level summarizer state representation depends on the encoding of words, document and sentence d wij , on the dynamic LSTM encoding for the summary based on the selected words ( WordStates ) o wij and sentences ( SentStates ) o si : q ij = tanh ( WC [ d wij ; o si ; o wij ] + b w ) , where WC is a model parameter and b w is a bias term.",
"3 If M Mi = 1 z i = 0 or Mi = 1 z i = 0 , we simply consider the whole term to be zero.",
"Here M represents the number of sentences in the document.",
"with parameter W y ij and bias x y ij .",
"The final loss for the compressive layer is L ( w ) = M i = 1 z i ( i w ) , where w represents the set of all the training parameters of the word-level encoder/decoder, ( i ) is the compressive layer loss over N words: ( i w ) = c { 0 , 1 } wc M i = 1 1 y ij = c i,z i = 0 log p ( y ij q ij ) .",
"The total final loss is then given by the sum of the extractive and compressive counterparts, L ( ) = L ( s ) + L ( w ) .",
"We mainly used the CNN/DailyMail corpus (Hermann et al., 2015) to evaluate our models.",
"We used the standard splits of Hermann et al. (2015) for training, validation, and testing (90,266/1,220/1,093 documents for CNN and 196,961/12,148/10,397 for DailyMail).",
"To evaluate the flexibility of our model, we also evaluated our models on the Newsroom dataset (Grusky et al., 2018), which includes articles form a diverse collection of sources (38 publishers) with different summary style subsets: extractive (Ext.), mixed (Mixed) and abstractive (Abs.).",
"We used the standard splits of Grusky et al. (2018) for training, validation, and testing (331,778/36,332/36,122 documents for Ext., 328,634/35,879/36,006 for Mixed and 332,554/36,380/36,522 for Abs.).",
"We did not anonymize entities or lower case tokens.",
"Datasets for training extractive summarization systems do not naturally contain sentence/word-level labels.",
"Instead, they are typically accompanied by abstractive summaries from which extraction labels are extrapolated.",
"We create extractive and compressive summaries prior to training using two types of oracles .",
"We used an extractive oracle to identify the set of sentences which collectively gives the highest ROUGE (Lin and Hovy, 2003) with respect to the gold summary (Narayan et al., 2018c).",
"To build a compressive oracle , we trained a supervised sentence labeling classifier, adapted from Oracle R1 R2 RL Extractive Oracle 54.67 30.37 50.81 Compressive Oracle 57.12 32.59 53.27 Table 1: Oracle scores obtained for the CNN and DailyMail testsets.",
"the Transition-Based Chunking Model (Lample et al., 2016), to annotate spans in every sentence that can be dropped in the final summary.",
"We used the publicly released set of 10,000 sentence-compression pairs from the Google sentence compression dataset (Filippova and Altun, 2013; Filippova et al., 2015) for training.",
"After tagging all sentences in the CNN and DailyMail corpora using this compression model, we generated oracle compressive summaries based on the best average of ROUGE-1 (R1) and ROUGE-2 (R2) F 1 scores from the combination of all possible sentences and all removals of the marked compression chunks.",
"To verify the adequacy of our proposed oracles, we show in Table 1 a comparison of their scores.",
"Our compressive oracle achieves much better scores than the extractive oracle, because of its capability to make summaries concise.",
"Moreover, the linguistic quality of these oracles was preserved due to the tagging of the entire span by the sentence compressor trained on the sentence compression dataset.",
"4 We believe that our dataset with oracle compression labels will be of significant interest to the sentence compression and summarization community.",
"The parameters for the loss at the sentence-level were s 0 = 2 and s 1 = 1 and at the word-level, w 0 = 1 and w 1 = 0 .",
"5 .",
"We used LSTMs with d = 512 for all hidden layers.",
"We performed mini-batch negative log-likelihood training with a batch size of 2 documents for 5 training epochs.We observed the convergence of the model between the 2nd and the 3rd epochs.",
"It took around 12 hrs on a single GTX 1080 GPU to train.",
"We evaluated our model on the validation set after every 5,000 batches.",
"We trained with Adam (Kingma and Ba, 2015) with an initial learning rate of 0 .",
"001 .",
"Our system was implemented using DyNet (Neubig et al., 2017).",
"We evaluated summarization quality using F 1 ROUGE (Lin and Hovy, 2003).",
"We report results 4 We show examples of both oracles in Appendix A.1.",
"in terms of unigram and bigram overlap (R1) and (R2) as a means of assessing informativeness, and the longest common subsequence (RL) as a means of assessing fluency.",
"5 In addition to ROUGE, which can be misleading when used as the only means to assess summaries (Schluter, 2017), we also conducted a question-answering based human evaluation to assess the informativeness of our summaries in their ability to preserve key information from the document (Narayan et al., 2018c).",
"6 First, questions are written using the gold summary, we then examined how many questions participants were able to answer by reading system summaries alone, without access to the article.",
"7 Figure 5 shows a set of candidate summaries along with questions used for this evaluation.",
"We evaluated our model EXCONSUMM in two settings: Extractive (selects sentences to assemble the summary) and Compressive (selects sentences and compresses them by removing unnecessary spans of words).",
"We compared our models against a baseline (LEAD) that selects the first m leading sentences from each document, 8 three neural extractive models, and various abstractive models.",
"For the extractive models, we used SUMMARUNNER (Nallapati et al., 2017), since it shares some similarity to our model, REFRESH (Narayan et al., 2018c) trained with reinforcement learning and LATENT (Zhang et al., 2018) a neural architecture that makes use of latent variable to avoid creating oracle summaries.",
"We further compare against LATENT +C OMPRESS (Zhang et al., 2018), an extension of the LATENT model that learns to map extracted sentences to final summaries using an attention-based seq2seq model (Rush et al., 2015).",
"All models, unlike ours, extract a fixed number of sentences to assemble their summaries.",
"For abstractive models, we compare against the state-of-the art models of POINTER +C OVERAGE (See et al., 2017), ML+RL (Paulus et al., 2018), and Tan et al. (2017) among others.",
"See Appendix A.2 for more details.",
"8 We follow Narayan et al. (2018c) and set m = 3 for CNN and 4 for DailyMail.",
"We follow Grusky et al. (2018) and set m = 2 for Newsroom.",
"Comparison with Extractive Systems.",
"EXCONSUMM Compressive performs best on the CNN dataset and EXCONSUMM Extractive on the DailyMail dataset, probably due to the fact that the CNN dataset is less biased towards extractive methods than the DailyMail dataset (Narayan et al., 2018b).",
"We report similar results on the Newsroom dataset.",
"EXCONSUMM Compressive tends to perform better for mixed (Mixed) and abstractive (Abs.) subsets, while EXCONSUMM Extractive performs better for the extractive (Ext.) subset.",
"Our experiments demonstrate that our compressive model tends to perform better on the dataset which promotes abstractive summaries.",
"We find that EXCONSUMM Extractive consistently performs better on all metrics when compared to any of the other extractive models, except for the single case where it is narrowly behind LATENT on R2 (18.6 vs 18.8) for the CNN/DailyMail combined test set.",
"It even outperforms REFRESH , which is trained with reinforcement learning.",
"We hypothesize that its superior performance stems from the ability to generate variable length summaries.",
"REFRESH or LATENT , on the other hand, always produces a fixed length summary.",
"EXCONSUMM Compressive reports superior performance compared to LATENT +C OMPRESS (+4.2 for R1, +2.6 for R2 and +3.1 for RL).",
"Our results demonstrate that our compressive system is more suitable for document summarization.",
"It first selects sentences and then compresses them by removing irrelevant spans of words.",
"It makes use of an advance oracle sentence compressor trained on a dedicated sentence compression dataset (Sec. 4.1).",
"In contrast, LATENT +C OMPRESS naively trains a sequence-to-sequence compressor to map a sentence in the document to a sentence in the summary.",
"Comparison with Abstractive Systems.",
"Both EXCONSUMM Extractive and Compressive outperform most of the abstractive systems including Pointer+Coverage (See et al., 2017).",
"When comparing with more recent methods (Pasunuru and Bansal, 2018; Gehrmann et al., 2018), our model has comparable performance.",
"Summary Versatility.",
"We evaluate the ability of our model to generate variable length summaries.",
"Table 4 show the Pearson correlation coefficient between the lengths of the human generated summaries against each unbounded model.",
"Our compressive approach obtains the best results, with a Pearson correlation coefficient of 0.72 ( p < 0 . 001 ).",
"Figure 4 also shows the distribution of words Models Bounded Unbounded Human QA ROUGE Human QA ROUGE Pearson score rank R1 R2 RL score rank R1 R2 RL r LEAD 25.50 4 rd 30.9 11.9 29.1 36.33 5 th 31.6 13.5 29.3 0.40 REFRESH 20.88 6 th 37.4 17.3 34.8 66.34 1 st 43.8 25.8 41.6 0.60 LATENT 38.45 2 nd 38.9 19.6 36.4 53.38 4 th 40.7 22.0 38.1 -0.02 EXCONSUMM Extractive 36.34 3 rd 38.4 18.5 35.9 54.93 3 rd 40.8 21.0 38.2 0.68 EXCONSUMM Compressive 39.44 1 ST 38.8 19.0 37.0 57.32 2 nd 41.4 22.6 39.1 0.72 Pointer+Coverage 24.51 5 th 38.4 19.7 36.7 28.73 6 th 40.2 21.4 38.0 0.30 Table 4: QA evaluations: limited length (Bounded) and full length (Unbounded) summaries.",
"per summary for the models where predictions were available.",
"Interestingly, both EXCONSUMM Extractive and Compressive follow the human distribution much better than other extractive systems (LEAD , REFRESH and LATENT ), since they are able to generate variable-length summaries depending on the input text.",
"Our compressive model generates a word distribution much closer to the abstractive Pointer+Coverage model but achieves better compression ratio; the summaries generated by Pointer+Coverage contain 59.8 words, while those generated by EXCONSUMM Compressive have 54.3 words on average.",
"Table 4 shows results from our question answering based human evaluation.",
"We elicited human judgements in two settings: the Unbounded, where participants were shown the full system produced summaries; and the Bounded, where participants were shown summaries that were limited to the same size as the gold summaries.",
"For the Unbounded setting, the output summaries produced by REFRESH were able to answer most of the questions correctly, our Compressive and Extractive systems were placed at the 2nd and 3rd places respectively.",
"9 We observed that our systems were able to produce more concise summaries than those produced by REFRESH (avg. length in words: 76.0 for REFRESH , 56.2 for EXCONSUMM Extractive and 54.3 for EXCONSUMM Compressive; see Figure 4).",
"REFRESH is prone to generating verbose summaries, consequently it has an advantage of accumulating more information.",
"In the Bounded setting, we aim to reduce this unfair advantage.",
"Scores are overall lower since the summary sizes are truncated to gold size.",
"The EXCONSUMM Compressive summaries rank first and can answer 39.44% of questions correctly.",
"EXCONSUMM Extractive retains its 3rd place answering 36.34% of questions correctly.",
"10 These results demonstrate that our models generate concise and informative summaries that correlate well with the human summary lengths.",
"11 5.3 Summary State Representation Next, we performed an ablation study to investigate the importance of the summary state representation o si w.r.t. the quality of the overall sum-9 We carried out pairwise comparisons between all models to assess whether system differences are statistically significant.",
"We found that there is no statistically significant difference between REFRESH and EXCONSUMM Compressive.",
"We use a one-way ANOVA with posthoc Tukey HSD tests with p < 0 .",
"01 .",
"The differences among LATENT and both variants of EXCONSUMM , and between LEAD and Pointer+Coverage are also statistically insignificant.",
"All other differences are statistically significant.",
"10 The differences among both variants of EXCONSUMM and LATENT , and among LEAD , REFRESH and Pointer+Coverage are statistically insignificant.",
"All other differences are statistically significant.",
"We use a one-way ANOVA with posthoc Tukey HSD tests with p < 0 .",
"01 .",
"11 App.",
"A.2 shows more examples of our summaries.",
"LEAD (CNN) A top al Qaeda in the Arabian Peninsula leaderwho a few years ago was in a U.S. detention facilitywas among five killed in an airstrike in Yemen, the terror group said, showing the organization is vulnerable even as Yemen appears close to civil war.",
"Ibrahim al-Rubaish died Monday night in what AQAP's media wing, Al-Malahem Media, called a crusader airstrike. The Al-Malahem Media obituary characterized al-Rubaish as a religious scholar and combat commander.",
"REFRESH (CNN) A top al Qaeda in the Arabian Peninsula leaderwho a few years ago was in a U.S. detention facilitywas among five killed in an airstrike in Yemen, the terror group said, showing the organization is vulnerable even as Yemen appears close to civil war.",
"Ibrahim al-Rubaish died Monday night in what AQAP's media wing, Al-Malahem Media, called a crusader airstrike. Al-Rubaish was once held by the U.S. government at its detention facility in Guantanamo Bay, Cuba.",
"LATENT (CNN) A top al Qaeda in the Arabian Peninsula leaderwho a few years ago was in a U.S. detention facilitywas among five killed in an airstrike in Yemen, the terror group said, showing the organization is vulnerable even as Yemen appears close to civil war.",
"Ibrahim al-Rubaish died Monday night in what AQAP's media wing, Al-Malahem Media, called a crusader airstrike.",
"The Al-Malahem Media obituary characterized al-Rubaish as a religious scholar and combat commander.",
"A Yemeni Defense Ministry official and two Yemeni national security officials not authorized to speak on record confirmed that al-Rubaish had been killed, but could not specify how he died.",
"EXCONSUMM Extractive (CNN) A top al Qaeda in the Arabian Peninsula leaderwho a few years ago was in a U.S. detention facilitywas among five killed in an airstrike in Yemen, the terror group said, showing the organization is vulnerable even as Yemen appears close to civil war.",
"Ibrahim al-Rubaish died Monday night in what AQAP's media wing, Al-Malahem Media, called a crusader airstrike.",
"EXCONSUMM Compressive A top al Qaeda in the Arabian Peninsula leaderwho a few years ago was in a U.S. detention facilitywas among five killed in an airstrike in Yemen.",
"Ibrahim al-Rubaish died in what AQAP's media wing, Al-Malahem Media, called a crusader airstrike.",
"Pointer+Coverage Ibrahim al-Rubaish was among a number of detainees who sued the administration of then-president George W. Bush to challenge the legality of their confinement in Gitmo.",
"alRubaish was once held by the U.S. government at its detention facility in Guantanamo bay, Cuba.",
"GOLD AQAP says a crusader airstrike killed Ibrahim al-Rubaish Al-Rubaish was once detained by the United States in Guantanamo Question-Answer Pairs Who said that an airstrike killed Ibrahim al-Rubaish?",
"mary.",
"We tested against a STATE AVERAGING variant, where we replace o si by a weighted average, analogous to Nallapati et al. (2017), o avg s i = j 1 i = 1 e ( s i ) p ( z i p avgi ) , where p avgi has the same State ROUGE R1 R2 RL EXCONSUMM Extractive 32.5 12.6 28.5 STATE AVERAGING 30.0 12.3 26.9 EXCONSUMM Compressive 32.5 12.7 29.2 EXCONSUMM Ext+Comp oracle 25.5 9.3 23.7 Table 5: Summary state ablation for the CNN dataset.",
"form as p i but depends recursively on the previous summary state o avg s i 1 .",
"Table 5 shows that using an LSTM state o si to model the current sentences in the summary is very important.",
"The other ablation study shows how learning to extract and compress in a disjoint approach (EXCONSUMM Ext+Comp oracle) performs against a joint learning approach (EXCONSUMM Compressive).",
"We compared summaries generated from our best extractive model and compressed them with a compressive oracle.",
"Our joint learning model achieves the best performance in all metrics compared with the other ablations, suggesting that joint learning and using a summary state representation is bene-ficial for summarization.",
"We developed EXCONSUMM , a novel summarization model to generate variable length extractive and compressive summaries.",
"Experimental results show that the ability of our model to learn a dynamic representation of the summary produces summaries that are informative, concise, and correlate well with human generated summary lengths.",
"Our model outperforms state-of-the-art extractive and most of abstractive systems on the CNN and DailyMail datasets, when evaluated automatically, and through human evaluation for the bounded scenario.",
"We further obtain state-of-the-art results on Newsroom, a more abstractive summary dataset.",
"This work is supported by the EU H2020 SUMMA project (grant agreement N o 688139), by Lisbon Regional Operational Programme (Lis-boa 2020), under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund (ERDF), within project INSIGHT (N o 033869), by the European Research Council (ERC StG DeepSPIN 758969), and by the Fundacao para a Ciencia e Tecnolo-gia through contracts UID/EEA/50008/2019 and CMUPERI/TIC/0046/2014 (GoLocal)."
] | [
"objective",
"abstain",
"result",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"method",
"result",
"other",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"result",
"result",
"result",
"other"
] |
[
"Generating fluent and informative responses is of critical importance for task-oriented dialogue systems.",
"Existing pipeline approaches generally predict multiple dialogue acts first and use them to assist response generation.",
"There are at least two shortcomings with such approaches.",
"First, the inherent structures of multi-domain dialogue acts are neglected.",
"Second, the semantic associations between acts and responses are not taken into account for response generation.",
"To address these issues, we propose a neural co-generation model that generates dialogue acts and responses concurrently.",
"Unlike those pipeline approaches, our act generation module preserves the semantic structures of multi-domain dialogue acts and our response generation module dynamically attends to different acts as needed.",
"We train the two modules jointly using an uncertainty loss to adjust their task weights adaptively.",
"Extensive experiments are conducted on the large-scale MultiWOZ dataset and the results show that our model achieves very favorable improvement over several state-of-the-art models in both automatic and human evaluations.",
"Task-oriented dialogue systems aim to facilitate people with such services as hotel reservation and ticket booking through natural language conversations.",
"Recent years have seen a rapid proliferation of interests in this task from both academia and industry (Bordes et al., 2017; Budzianowski et al., 2018; Wu et al., 2019).",
"A standard architecture of these systems generally decomposes this task into several subtasks, including natural language understanding (Gupta et al., 2018), dialogue state tracking (Zhong et al., 2018) and natural language Xiaojun Quan is the corresponding author of this paper.",
"Most of this work was done when Kai Wang was working as an intern at Alibaba DAMO Academy.",
"generation (Su et al., 2018).",
"They can be modeled separately and combined into a pipeline system.",
"Figure 1 shows a dialogue example, from which we can notice that the natural language generation subtask can be further divided into dialogue act prediction and response generation (Chen et al., 2019; Zhao et al., 2019; Wen et al., 2017).",
"While the former is intended to predict the next action(s) based on current conversational state and database information, response generation is used to produce a natural language response based on the action(s).",
"In order for dialogues to be natural and effective, responses should be fluent, informative, and relevant.",
"Nevertheless, current sequence-to-sequence models often generate uninformative responses like I don't know (Li et al., 2016a), hindering the dialogues to continue or even leading to a failure.",
"Some researchers (Pei et al., 2019; Mehri et al., area root hotel restaurant attraction name request inform phone domain action slot Dialog Act Graph Sequence Generation (Ours) Multiple Binary Classification (HDSA) hotel ... inform...phone reference < sos > rest.",
"2019) sought to combine multiple decoders into a stronger one to avoid such responses, while others (Chen et al., 2019; Wen et al., 2015; Zhao et al., 2019; Wen et al., 2017) represent dialogue acts in a global, static vector to assist response generation.",
"As pointed out by Chen et al. (2019), dialogue acts can be naturally organized in hierarchical structures, which has yet to be explored seriously.",
"Take two acts station-request-stars and restaurant-inform-address as an example.",
"While the first act rarely appears in real-world dialogues, the second is more often.",
"Moreover, there can be multiple dialogue acts mentioned in a single dialogue turn, which requires the model to attend to different acts for different sub-sequences.",
"Thus, a global vector is unable to capture the inter-relationships among acts, nor is it flexible for response generation especially when more than one act is mentioned.",
"To overcome the above issues, we treat dialogue act prediction as another sequence generation problem like response generation and propose a co-generation model to generate them concurrently.",
"Unlike those classification approaches, act sequence generation not only preserves the interrelationships among dialogue acts but also allows close interactions with response generation.",
"By attending to different acts, the response generation module can dynamically capture salient acts and produce higher-quality responses.",
"Figure 2 demonstrates the difference between the classification and the generation approaches for act prediction.",
"As for training, most joint learning models rely on hand-crafted or tunable weights on development sets (Liu and Lane, 2017; Mrksic et al., 2017; Rastogi et al., 2018).",
"The challenge here is to combine two sequence generators with varied vocabularies and sequence lengths.",
"The model is sensitive during training and nontrivial to generate an optimal weight.",
"To address this issue, we opt for an uncertainty loss (Kendall et al., 2018) to adaptively adjust the weight according to task-specific uncertainty.",
"We conduct extensive studies on a large-scale task-oriented dataset to evaluate the model.",
"The experimental results confirm the effectiveness of our model with very favorable performance over several state-of-the-art methods.",
"The contributions of this work include: We model dialogue act prediction as a sequence generation problem that allows to exploit act structures for the prediction.",
"We propose a co-generation model to generate act and response sequences jointly, with an uncertainty loss used for adaptive weighting.",
"Experiments on MultiWOZ verify that our model outperforms several state-of-the-art methods in automatic and human evaluations.",
"Dialogue act prediction and response generation are closely related in general in the research of dialogue systems (Chen et al., 2019; Zhao et al., 2019; Wen et al., 2017), where dialogue act prediction is first conducted and used for response generation.",
"Each dialogue act can be treated as a triple (domain-action-slot) and all acts together are represented in a one-hot vector (Wen et al., 2015; Budzianowski et al., 2018).",
"Such sparse representation makes the act space very large.",
"To overcome this issue, Chen et al. (2019) took into account act structures and proposed to represent the dialogue acts with level-specific one-hot vectors.",
"Each dimension of the vectors is predicted by a binary classifier.",
"To improve response generation, Pei et al. (2019) proposed to learn different expert decoders for different domains and acts, and combined them with a chair decoder.",
"Mehri et al. (2019) applied a cold-fusion method (Sriram et al., 2018) to combine their response decoder with a language model.",
"Zhao et al. (2019) treated dialogue acts as latent variables and used reinforcement learning to optimize them.",
"Reinforcement learning was also applied to find optimal dialogue policies in task-oriented dialogue systems (Su et al., 2017; 7127 Williams et al., 2017) or obtain higher dialog-level rewards in chatting (Li et al., 2016b; Serban et al., 2017).",
"Besides, Chen et al. (2019) proposed to predict the acts explicitly with a compact act graph representation and employed hierarchical disentangled self-attention to control response text generation.",
"Unlike those pipeline architectures, joint learning approaches try to explore the interactions between act prediction and response generation.",
"A large body of research in this direction uses a shared user utterance encoder and train natural language understanding jointly with dialogue state tracking (Mrksic et al., 2017; Rastogi et al., 2018).",
"Liu and Lane (2017) proposed to train a unified network for two subtasks of dialogue state tracking, i.e., knowledge base operation and response candidate selection.",
"Jiang et al. (2019) showed that joint learning of dialogue act and response benefits representation learning.",
"These works generally demonstrate that joint learning of the subtasks of dialogue systems is able to improve each other and the overall system performance.",
"Let T = { U 1 , R 1 , . . . , U t 1 , R t 1 , U t } denote the dialogue history in a multi-turn conversational setting, where U i and R i are the i -th user utterance and system response, respectively.",
"D = { d 1 , d 2 , . . . , d n } includes the attributes of related database records for current turn.",
"The objective of a dialogue system is to generate a natural language response R t = y 1 y 2 . . . y n of n words based on the current belief state and database attributes.",
"In our framework, dialogue acts and response are co-generated based on the transformer encoder-decoder architecture (Vaswani et al., 2017).",
"A standard transformer includes a multi-head attention layer that encodes a value V according to the attention weights from query Q to key K , followed by a position-wise feed-forward network ( G f ): O = V + G f ( MultiHead ( Q, K, V )) (1) where Q, K, V, O R n d .",
"Encoder We use E = Emb ([ T ; D ]) to represent the concatenated word embeddings of dialogue history T and database attributes D .",
"The transformer F ( Q, K, V ) is then used to encode E and output its hidden state H e : H e = F ( E, E, E ) (2) Decoder At each time step t of response generation, the decoder first computes a self-attention h rt over already-generated words y 1: t 1 : h rt = F ( e rt 1 , e r 1: t 1 , e r 1: t 1 ) (3) where e rt 1 is the embedding of the ( t 1) -th generated word and e r 1: t 1 is an embedding matrix of e r 1 to e rt 1 .",
"Cross-attention from h rt to dialogue history T is then executed: c rt = F ( h rt , H e , H e ) (4) The resulting vectors of Equations 3 and 4, h rt and c rt , are concatenated and mapped to a distribution of vocabulary size to predict next word: p ( y t | y 1: t 1 ) = softmax ( W r [ c rt ; h rt ]) (5) 4 The MARCO Approach Based on the above encoder-decoder architecture, our model is designed to consist of three components, namely, a shared encoder, a dialogue act generator, and a response generator.",
"As shown in Figure 3, instead of predicting each act token individually and separately from response generation, our model aims to generate act sequence and response concurrently in a joint model which is optimized by the uncertainty loss (Kendall et al., 2018).",
"Dialogue acts can be viewed as a semantic plan for response generation.",
"As shown in Figure 2, they can be naturally organized in hierarchical structures, including domain level, action level, and slot level.",
"Most existing methods treat dialogue acts as triples represented in one-hot vectors and predict the vector values with binary classifiers (Wen et al., 2015; Budzianowski et al., 2018).",
"Such representations ignore the inter-relationships and associations among acts, domains, actions and slots.",
"For example, the slot area may appear in more than one domain.",
"Unlike them, we model the prediction of acts as a sequence generation problem, which takes into consideration the structures of acts and generates each act token conditioned on its previously-generated tokens.",
"In this approach, different domains are allowed to share common slots and the search space of dialogue act is greatly reduced.",
"sequence is organized by domain, action and slot, while items at each level are arranged in dictionary order, where identical items are merged.",
"When decoding each act token, we first represent the current belief state with an embedding vector v b and add it to each act word embedding e at as: u at = W b v b + e at .",
"Finally, the decoder of Section 3.2 is used to generate hidden states H a and act tokens accordingly.",
"Dialogue acts and responses are closely related in dialogue systems.",
"On one hand, system responses are generated based on dialogue acts.",
"On the other, their shared information can improve each other through joint learning.",
"Shared Encoder Our dialogue act generator and response generator share one same encoder and input, but having different masking strategies for the input to focus on different information.",
"In particular, only the current utterance is kept for act generation, while the entire history utterances are used for response generation.",
"1 1 Empirical evidences show that act generation is more related to the current utterance, while response generation benefits more from long dialogue history.",
"Dynamic Act Attention A response usually corresponds to more than one dialogue act in multi-domain dialogue systems.",
"Nevertheless, existing methods mostly use a static act vector to represent all the acts, and add the vector to each response token representation.",
"They ignore the fact that different subsequences of a response may need to attend to different acts.",
"To address this issue, we compute dynamic act attention o rt from the response to acts when generating a response word: o rt = F ( h rt , H a , H a ) (7) where h rt is the current hidden state produced by Equation 3.",
"Then, we combine o rt and h rt with response-to-history attention c rt (by Equation",
"4) to estimate the probabilities of next word: p ( y t | y 1: t 1 ) = softmax ( W r [ h rt ; c rt ; o rt ]) (8) Uncertainty Loss The cross-entropy function is used to measure the generation losses, L a ( ) and L r ( ) , of dialogue acts and responses, respectively: L a ( ) = T a (cid:2) j =1 log p ( a ( i ) j | a ( i ) 1: j 1 , T, D, v b ) (9) L r ( ) = T r (cid:2) j =1 log p ( y ( i ) j | y ( i ) 1: j 1 , T, D, A ) (10) 7129 where the ground-truth tokens of acts and response of each turn are represented by A and Y , while the predicted tokens by A and Y .",
"To optimize the above functions jointly, a general approach is to compute a weighted sum like: L ( ) = L a ( ) + (1 ) L r ( ) (11) However, dialogue acts and responses vary seriously in sequence length and vocabulary size, making the weight unstable to tune.",
"Instead, we opt for an uncertainty loss (Kendall et al., 2018) to adjust it adaptively: L ( , 1 , 2 ) = 1 2 21 L a ( )+ 1 2 22 L r ( )+log 21 22 (12) where 1 and 2 are two learnable parameters.",
"The advantage of this uncertainty loss is that it models the homoscedastic uncertainty of each task and provides task-dependent weight for multi-task learning (Kendall et al., 2018).",
"Our experiments also confirm that it leads to more stable weighting than the traditional approach (Section 6.3).",
"MultiWOZ 2.0 (Budzianowski et al., 2018) is a large-scale multi-domain conversational datatset consisting of thousands of dialogues in seven domains.",
"For fair comparison, we use the same validation set and test set as previous studies (Chen et al., 2019; Zhao et al., 2019; Budzianowski et al., 2018), each set including 1000 dialogues.",
"2 We use the Inform Rate and Request Success metrics to evaluate dialog completion, with one measuring whether a system has provided an appropriate entity and the other assessing if it has answered all requested attributes.",
"Besides, we use BLEU (Papineni et al., 2002) to measure the fluency of generated responses.",
"To measure the overall system performance, we compute a combined score: ( Inform Rate + Request Success ) 0 .",
"5 + BLEU as before (Budzianowski et al., 2018; Mehri et al., 2019; Pei et al., 2019).",
"The implementation 3 is on a single Tesla P100 GPU with a batch size of 512.",
"The dimension of 2 There are only five domains ( restaurant , hotel , attract , taxi , train ) of dialogues in the test set as the other two ( hospital , police ) have insufficient dialogues.",
"word embeddings and hidden size are both set to 128.",
"We use a 3-layer transformer with 4 heads for the multi-head attention layer.",
"For decoding, we use a beam size of 2 to search for optimal results, and apply trigram avoidance (Paulus et al., 2018) to fight trigram-level repetition.",
"During training, we first train the act generator for 10 epochs for warmup and then optimize the uncertainty loss with the Adam optimizer (Kingma and Ba, 2015).",
"A few mainstream models are used as baselines for comparison with our neural co-generation model (MARCO ), being categorized into three categories:",
"Without Act .",
"Models in this category directly generate responses without act prediction, including LSTM (Budzianowski et al., 2018), Transformer (Vaswani et al., 2017), TokenMoE (Pei et al., 2019) and Structured Fusion (Mehri et al., 2019).",
"One-Hot Act .",
"In SC-LSTM (Wen et al., 2015), dialogue acts are treated as triples and information flow from acts to response generation is controlled by gates.",
"HDSA (Chen et al., 2019) is a strong two-stage model, which relies on BERT (Devlin et al., 2019) to predict a one-hot act vector for response generation.",
"Sequential Act .",
"Since our model does not rely on BERT, to make a fair comparison with HDSA, we design the experiments from two aspects to ensure they have the same dialogue act inputs for response generation.",
"First, the act sequences produced by our co-generation model are converted into one-hot vectors and fed to HDSA.",
"Second, the predicted one-hot act vectors by BERT are transformed into act sequences and passed to our model as inputs.",
"The overall results are shown in Table 1, in which HDSA (MARCO ) means HDSA using MARCO 's dialogue act information, and MARCO (BERT) means MARCO based on BERT's act prediction.",
"From the table we can notice that our co-generation model (MARCO ) outperforms all the baselines in Inform Rate , Request Success , and especially in combined score which is an overall metric.",
"By comparing the two HDSA models, we can find HDSA derives its main performance from the external BERT, which can also be used to improve our MARCO considerably (MARCO (BERT)).",
"These 7130 Dialog Act Model Inform Success BLEU Combined Score Without Act LSTM 71.29 60.96 18.80 84.93 Transformer 71.10 59.90 19.10 84.60 TokenMoE 75.30 59.70 16.81 84.31 Structured Fusion 82.70 72.10 16.34 93.74 One-hot Act SC-LSTM 74.50 62.50 20.50 89.00 HDSA (MARCO ) 76.50 62.30 21.85 91.25 HDSA 82.90 68.90 23.60 99.50 Sequential Act MARCO 90.30 75.20 19.45 102.20 MARCO (BERT) 92.30 78.60 20.02 105.47 Table 1: Overall results on the MultiWOZ 2.0 dataset.",
"results confirm the success of MARCO by modeling act prediction as a generation problem and training it jointly with response generation.",
"Another observation is that despite its strong overall performance, MARCO shows inferior BLEU performance to the two HDSA models.",
"The reason behind this is studied and analyzed in human evaluation (Section 7), showing that our model often generates responses inconsistent with references but favored by human judges.",
"The performance of our model across different domains is also compared against HDSA.",
"The average number of turns is 8.93 for single-domain dialogues and 15.39 for multi-domain dialogues (Budzianowski et al., 2018).",
"As in Figure 4, our model shows superior performance to HDSA across all domains.",
"The results suggest that MARCO is good at dealing with long dialogues.",
"which is an updated version of MultiWOZ 2.0.",
"As shown in Table 2, the overall results are consistent with that on MultiWOZ 2.0.",
"More thorough studies and analysis are conducted in this section, trying to answer three questions: (1) How is the performance of our act generator in comparison with existing classification methods?",
"(2) Can our joint model successfully build semantic associations between acts and responses?",
"(3) How does the uncertainty loss contribute to our co-generation model?",
"To evaluate the performance of our act generator, we compare it with several baseline methods mentioned in (Chen et al., 2019), including BiLSTM, Word-CNN, and 3-layer Transformer.",
"We use MARCO to represent our act generator which is trained jointly with the response generator, and use Transformer (GEN) to denote our act generator without joint training.",
"From Table 3, we notice that the separate generator, Transformer (GEN), performs much better than BiLSTM and Word-CNN, but comparable with Transformer.",
"But after trained jointly with the response generator, MARCO manages to show the best performance, confirming the effect of the co-generation.",
"To study the influence of the joint training and the dynamic act attention on response generation, we implement two pipeline approaches for comparison.",
"We first train our act generator separately from response generation.",
"Then, we keep its parameters fixed and train the response generator.",
"The first baseline is created by replacing the dynamic act attention (Equation",
"7) with an average of the act hidden states, while the second baseline uses the dynamic act attention.",
"As shown in Table 4, Pipeline 2 with dynamic act attention is superior to Pipeline 1 without it in all metrics, but inferior to the joint approach.",
"Our joint model also surpasses the currently state-of-the-art pipeline system HDSA, even HDSA uses BERT.",
"We find that by utilizing sequential acts, the dynamic act attention mechanism helps the response generator capture the local information by attending to different acts.",
"An illustrative example is shown in Figure 5, where the response generator can attend to the local information such as day and stay as needed when generating a response asking about picking a different day or shorter stay.",
"We reckon that by utilizing sequential acts, response generation benefits in two ways.",
"First, the dynamic act attention allows the generator to attend to different acts when S e q u e n c i a l A c t Response Sequence Figure 5: An illustrative example of the dynamic act attention mechanism.",
"generating a subsequence.",
"Second, the joint training makes the two stages interact with each other, easing error propagation of pipeline systems.",
"We opt for an uncertainty loss to optimize our joint model, rather than a traditional weighted-sum loss.",
"To illustrate their difference, we conduct an experiment on the development set.",
"For the traditional loss (Equation 11), we run for each weight from 0 to 1 stepped by 0.1.",
"Note that since the weights, 1 and 2 , in the uncertainty loss are not hyperparam-eters but learned internally to each batch, we only record the best score within each round without giving the values of 1 and 2 .",
"As shown in Figure 6, the uncertainty loss can learn adaptive weights with consistently superior performance.",
"We conduct a human study to evaluate our model by crowd-sourcing.",
"4 For this purpose we randomly selected 100 sample dialogues (742 turns in total) from the test dataset and constructed two groups of systems for comparison: MARCO vs. HDSA and 4 The annotation results are available at https: //github.com/InitialBug/MarCo-Dialog/tree/master/human_evaluation 7132 MARCOHDSA MARCO vs. HDSA Readability Completion 17 2 74 94 9 4 Win Tie Lose MARCO Human Response MARCO vs. Human Response Readability Completion 30 9 47 75 23 16 100 100 100 100 Figure 7: Results of human study in response quality.",
"MARCO vs. Human Response, where Human Response means the reference responses.",
"Responses generated by each group were randomly assigned in pairs to 3 judges, who ranked them according to their completion and readability (Chen et al., 2019; Zhang et al., 2019).",
"Completion measures if the response correctly answers a user query, including relevance and informativeness.",
"Readability reflects how fluent, natural and consistent the response is.",
"The results of this study are shown in Figure 7, where Win, Tie or Lose mean our MARCO system wins over, ties with or loses to its counterpart, respectively.",
"From the results we note that MARCO outperforms HDSA and Human Response in completion, and ties 94% with HDSA in readability while underperforming Human Response.",
"Overall speaking, MARCO is superior to HDSA and comparable with Human Response.",
"We further analyzed the bad cases of our model in readability and found that our model slightly suffers from token level repetition, a problem that can be solved by methods like the coverage mechanism (Mi et al., 2016; Tu et al., 2016).",
"In completion, our model can understand the users' need and tends to provides them more relevant information, so that they can finish their goals in shorter turns.",
"We present two examples in Figure 8.",
"In the first example, the user requests the hotel type while HDSA ignores it.",
"The user requests to book one ticket in the second example, yet both HDSA and Human Response ask about the number once again.",
"In contrast, our model directly answers the questions with correct information.",
"To sum up, MARCO successfully improves the dialogue system by generating relevant and informative responses.",
"In this paper, we presented a novel co-generation model for dialogue act prediction and response generation in task-oriented dialogue systems.",
"Unlike previous approaches, we modeled act prediction as a sequence generation problem to exploit the semantic structures of acts and trained it jointly with response generation via dynamic attention from response generation to act prediction.",
"To train this joint model, we applied an uncertainty loss for adaptive weighting of the two tasks.",
"Extensive studies were conducted on a large-scale task-oriented dataset to evaluate the proposed model, and the results confirm its effectiveness with very favorable performance over several state-of-the-art methods.",
"We thank the anonymous reviewers for their constructive reviews.",
"This work was partially supported by the Fundamental Research Funds for the Central Universities (No.19lgpy220 and No.19lgpy219), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No.2017ZT07X355), and the National Natural Science Foundation of China (No.61906217)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"other",
"other"
] |
[
"Contextual string embeddings are a recent type of contextualized word embedding that were shown to yield state-of-the-art results when utilized in a range of sequence labeling tasks.",
"They are based on character-level language models which treat text as distributions over characters and are capable of generating embeddings for any string of characters within any textual context.",
"However, such purely character-based approaches struggle to produce meaningful embeddings if a rare string is used in a underspecified context.",
"To address this drawback, we propose a method in which we dynamically aggregate contextualized embeddings of each unique string that we encounter.",
"We then use a pooling operation to distill a global word representation from all contextualized instances.",
"We evaluate these pooled contextualized embeddings on common named entity recognition (NER) tasks such as CoNLL-03 and WNUT and show that our approach significantly improves the state-of-the-art for NER.",
"We make all code and pre-trained models available to the research community for use and reproduction.",
"Word embeddings are a crucial component in many NLP approaches (Mikolov et al., 2013; Pennington et al., 2014) since they capture latent semantics of words and thus allow models to better train and generalize.",
"Recent work has moved away from the original one word, one embed-ding paradigm to investigate contextualized embedding models (Peters et al., 2017, 2018; Akbik et al., 2018).",
"Such approaches produce different embeddings for the same word depending on its context and are thus capable of capturing latent contextualized semantics of ambiguous words.",
"Recently, Akbik et al. (2018) proposed a character-level contextualized embeddings ap-Fung B-PER Permadi E-PER ( Taiwan S-LOC ) v Indra S-ORG Figure 1: Example sentence that provides underspecified context.",
"proach they refer to as contextual string embeddings .",
"They leverage pre-trained character-level language models from which they extract hidden states at the beginning and end character positions of each word to produce embeddings for any string of characters in a sentential context.",
"They showed these embeddings to yield state-of-the-art results when utilized in sequence labeling tasks such as named entity recognition (NER) or part-of-speech (PoS) tagging.",
"Underspecified contexts.",
"However, such contextualized character-level models suffer from an inherent weakness when encountering rare words in an underspecified context.",
"Consider the example text segment shown in Figure 1: Fung Permadi (Taiwan) v Indra , from the English CO NLL-03 test data split (Tjong Kim Sang and De Meulder, 2003).",
"If we consider the word Indra to be rare (meaning no prior occurrence in the corpus used to generate word embeddings), the underspecified context allows this word to be interpreted as either a person or an organization.",
"This leads to an underspecified embedding that ultimately causes an incorrect classification of Indra as an organization in a downstream NER task.",
"Pooled Contextual Embeddings.",
"In this paper, we present a simple but effective approach to address this issue.",
"We intuit that entities are normally only used in underspecified contexts if they are expected to be known to the reader.",
"That is, they are either more clearly introduced in an earlier sentence, or part of general in-domain knowl-2 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 NAACL-HLT 2019 Submission ***.",
"Indeed, the string Indra in the CO NLL-03 data also occurs in the earlier sentence Indra Wijaya (Indonesia) beat Ong Ewe Hock .",
"Based on this, we propose an approach in which we dynamically aggregate contextualized embeddings of each unique string that we encounter as we process a dataset.",
"We then call the pooling operation over all contextualized embeddings for this word in the memory (line 4) to compute the pooled contextualized embedding.",
"Finally, we concatenate the original contextual embedding together with the pooled representation, to ensure that both local and global interpretations are represented (line 5).",
"As Algorithm 2 shows, we reset the memory at the beginning of each pass over the training data (line emb context Indra 2 I n d r a W i j a y a b e a t O n g E w e emb context Indra 3 A n d I n d r a s a i d t h a t . . . memory Indra emb proposed F u n g P e r m a d i v I n d r a Character Language Model emb context Indra 1 pooling concatenation current sentence Figure 2: Example of how we generate our proposed embedding ( emb proposed ) for the word Indra in the example text segment Fung Permadi v Indra .",
"We extract a contextual string embedding ( emb context ) for this word and retrieve from the memory all embeddings that were produced for this string on previous sentences.",
"We pool and concatenate all local contextualized embeddings to produce the final embedding.",
"We then use a pooling operation to distill a global word representation from all contextualized instances that we use in combination with the current contextualized representation as new word embedding.",
"edge a reader is expected to have.",
"Indeed, the string Indra in the CO NLL-03 data also occurs in the earlier sentence Indra Wijaya (Indonesia) beat Ong Ewe Hock .",
"We evaluate our proposed embedding approach on the task of named entity recognition on the CO NLL-03 (English, German and Dutch) and WNUT datasets.",
"In all cases, we find that our approach outperforms previous approaches and yields new state-of-the-art scores.",
"We contribute our approach and all pre-trained models to the open source FLAIR 1 framework, to ensure reproducibility of these results.",
"2 Method Our proposed approach dynamically builds up a memory of contextualized embeddings and applies a pooling operation to distill a global contextualized embedding for each word.",
"Based on this, we propose an approach in which we dynamically aggregate contextualized embeddings of each unique string that we encounter as we process a dataset.",
"We then use a pooling operation to distill a global word representation from all contextualized instances that we use in combination with the current contextualized representation as new word embedding.",
"Our approach thus produces evolving word representations that change over time as more instances of the same word are observed in the data.",
"It requires an embed () function that produces a contextualized embedding for a given word in a sentence context (see Akbik et al. (2018)).",
"It also requires a memory that records for each unique word all previous contextual embeddings, and a pool () operation to pool embedding vectors.",
"We evaluate our proposed embedding approach on the task of named entity recognition on the CO NLL-03 (English, German and Dutch) and WNUT datasets.",
"In all cases, we find that our approach outperforms previous approaches and yields new state-of-the-art scores.",
"We contribute our approach and all pre-trained models to the open source FLAIR 1 framework (Akbik et al., 2019), to ensure reproducibility of these results.",
"It requires an embed () function that produces a contextualized embedding for a given word in a 1 https://github.com/zalandoresearch/flair sentence context (see Akbik et al. (2018)).",
"This means that the resulting pooled contextualized embedding has twice the dimensionality of the original embedding.",
"It also requires a memory that records for each unique word all previous contextual embeddings, and a pool () operation to pool embedding vectors.",
"Algorithm 1 Compute pooled embedding Input: sentence , memory 1: for word in sentence do 2: emb context embed( word ) within sentence 3: add emb context to memory [ word ] 4: emb pooled pool( memory [ word ] ) 5: word.embedding concat( emb pooled , emb context ) 6: end for Crucially, our approach expands the memory each time we embed a word.",
"Therefore, the same word in the same context may have different embeddings over time as the memory is built up.",
"This is illustrated in Algorithm 1: to embed a word (in a sentential context), we first call the embed () function (line 2) and add the resulting embedding to the memory for this word (line 3).",
"We then call the pooling operation over all contextualized embeddings for this word in the memory (line 4) to compute the pooled contextualized embedding.",
"Finally, we concatenate the original contextual embedding together with the pooled representation, to ensure that both local and global interpretations are represented (line 5).",
"This means that the resulting pooled contextualized embedding has twice the dimensionality of the original embedding.",
"Pooling operations.",
"We experiment with different pooling operations: mean pooling to average a word's contextualized embedding vectors, and min and max pooling to compute a vector of all Approach CO NLL-03 ENCO NLL-03 DECO NLL-03 NL WNUT-17 Pooled Contextualized Embeddings min 93.18 0.09 88.27 0.30 90.12 0.14 49.07 0.31 Pooled Contextualized Embeddings max 93.13 0.09 88.05 0.25 90.26 0.10 49.05 0.26 Pooled Contextualized Embeddings mean 93.10 0.11 87.69 0.27 90.44 0.20 49.59 0.41 Contextual String Emb.",
"Training downstream models.",
"When training downstream task models (such as for NER), we typically make many passes over the training data.",
"As Algorithm 2 shows, we reset the memory at the beginning of each pass over the training data (line 2), so that it is build up from scratch at each epoch.",
"This approach ensures that the downstream task model learns to leverage pooled embeddings that are built up (e.g. evolve ) over time.",
"It also ensures that pooled embeddings during training are only computed over training data.",
"After training, (i.e. during NER prediction), we do not reset embeddings and instead allow our approach to keep expanding the memory and evolve the embeddings.",
"We verify our proposed approach in four named entity recognition (NER) tasks: We use the English, German and Dutch evaluation setups of the CO NLL-03 shared task (Tjong Kim Sang and De Meulder, 2003) to evaluate our approach on classic newswire data, and the WNUT-17 task on emerging entity detection (Derczynski et al., 2017) to evaluate our approach in a noisy user-generated data setting with few repeated entity mentions.",
"We use the open source FLAIR framework in all our experiments.",
"It implements the standard BiLSTM-CRF sequence labeling architecture (Huang et al., 2015) and includes pre-trained contextual string embeddings for many languages.",
"To FLAIR , we add an implementation of our proposed pooled contextualized embeddings .",
"Hyperparameters.",
"For our experiments, we follow the training and evaluation procedure outlined in Akbik et al. (2018) and follow most hyperpa-rameter suggestions as given by the in-depth study presented in Reimers and Gurevych (2017).",
"That is, we use an LSTM with 256 hidden states and one layer (Hochreiter and Schmidhuber, 1997), a locked dropout value of 0.5, a word dropout of 0.05, and train using SGD with an annealing rate of 0.5 and a patience of 3.",
"We perform model selection over the learning rate { 0 .",
"01 , 0 .",
"05 , 0 .",
"1 } and mini-batch size { 8 , 16 , 32 } , choosing the model with the best F -measure on the validation set.",
"Following Peters et al. (2017), we then repeat the experiment 5 times with different random seeds, and train using both train and development set, reporting both average performance and standard deviation over these runs on the test set as final performance.",
"Standard word embeddings.",
"The default setup of Akbik et al. (2018) recommends contextual string embeddings to be used in combination with standard word embeddings.",
"We use GLOVE embeddings (Pennington et al., 2014) for the English tasks and FASTTEXT embeddings (Bojanowski et al., 2017) for all newswire tasks.",
"Baselines.",
"Our baseline are contextual string embeddings without pooling, i.e. the original setup proposed in Akbik et al. (2018) 2 .",
"By comparing against this baseline, we isolate the impact of our proposed pooled contextualized embeddings.",
"2 Our reproduced numbers are slightly lower than we reported in Akbik et al. (2018) where we used the official CO NLL-03 evaluation script over BILOES tagged entities.",
"This introduced errors since this script was not designed for S-tagged entities.",
"In addition, we list the best reported numbers for the four tasks.",
"This includes the recent BERT approach using bidirectional transformers by Devlin et al. (2018), the semi-supervised multitask learning approach by Clark et al. (2018), the ELMo word-level language modeling approach by Peters et al. (2018), and the best published numbers for WNUT-17 (Aguilar et al., 2018) and German and Dutch CO NLL-03 (Lample et al., 2016).",
"New state-of-the-art scores.",
"We find that our approach outperforms all previously published results, raising the state-of-the-art for CO NLL-03 on English to 93.18 F1-score ( 0.32 pp vs. previous best), German to 88.27 ( 0.86 pp) and Dutch to 90.44 ( 0.28 pp).",
"The consistent improvements against the contextual string embeddings baseline indicate that our approach is generally a viable option for embedding entities in sequence labeling.",
"Less pronounced impact on WNUT-17.",
"However, we also find no significant improvements on the WNUT-17 task on emerging entities.",
"Depending on the pooling operation, we find comparable results to the baseline.",
"This result is expected since most entities appear only few times in this dataset, giving our approach little evidence to aggregate and pool.",
"Nevertheless, since recent work has not yet experimented with contextual embeddings on WNUT, as side result we report a new state-of-the-art of 49.59 F1 vs. the previous best reported number of 45.55 (Aguilar et al., 2018).",
"Pooling operations.",
"Comparing the pooling operations discussed in Section 2, we generally find similar results.",
"As Table 1 shows, min pooling performs best for English and German CoNLL, while mean pooling is best for Dutch and WNUT.",
"To better isolate the impact of our proposed approach, we run experiments in which we do not use any classic word embeddings, but rather rely solely on contextual string embeddings.",
"As Table 2 shows, we observe more pronounced improvements of pooling vis-a-vis the baseline approach in this setup.",
"This indicates that pooled contextualized embeddings capture global semantics words similar in nature to classical word embeddings.",
"We presented a simple but effective approach that addresses the problem of embedding rare strings in underspecified contexts.",
"Our experimental evaluation shows that this approach improves the state-of-the-art across named entity recognition tasks, enabling us to report new state-of-the-art scores for CO NLL-03 NER and WNUT emerging entity detection.",
"These results indicate that our embedding approach is well suited for NER.",
"Evolving embeddings.",
"Our dynamic aggregation approach means that embeddings for the same words will change over time, even when used in exactly the same contexts.",
"Assuming that entity names are more often used in well-specified contexts, their pooled embeddings will improve as more data is processed.",
"The embedding model thus continues to learn from data even after the training of the downstream NER model is complete and it is used in prediction mode.",
"We consider this idea of constantly evolving representations a very promising research direction.",
"Future work.",
"Our pooling operation makes the conceptual simplification that all previous instances of a word are equally important.",
"However, we may find more recent mentions of a word such as words within the same document or news cycle to be more important for creating embeddings than mentions that belong to other documents or news cycles.",
"Future work will therefore examine methods to learn weighted poolings of previous mentions.",
"We will also investigate applicability of our proposed embeddings to tasks beside NER.",
"Public release.",
"We contribute our code to the FLAIR framework 3 .",
"This allows full reproduction of all experiments presented in this paper, and al-3 The proposed embedding is added to FLAIR in release 0.4.1.",
"as the PooledFlairEmbeddings class (see Akbik et al. (2019) for more details).",
"We would like to thank the anonymous reviewers for their helpful comments.",
"This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no 732328 (FashionBrain)."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"objective",
"method",
"method",
"other",
"method",
"method",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"method",
"objective",
"result",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"other",
"other"
] |
[
"Humans use language to refer to entities in the external world.",
"Motivated by this, in recent years several models that incorporate a bias towards learning entity representations have been proposed.",
"Such entity-centric models have shown empirical success, but we still know little about why.",
"In this paper we analyze the behavior of two recently proposed entity-centric models in a referential task, Entity Linking in Multi-party Dialogue (SemEval 2018 Task 4).",
"We show that these models outperform the state of the art on this task, and that they do better on lower frequency entities than a counterpart model that is not entity-centric, with the same model size.",
"We argue that making models entity-centric naturally fosters good architectural decisions.",
"However, we also show that these models do not really build entity representations and that they make poor use of linguistic context.",
"These negative results underscore the need for model analysis, to test whether the motivations for particular architectures are borne out in how models behave when deployed.",
"Modeling reference to entities is arguably crucial for language understanding, as humans use language to talk about things in the world.",
"A hypothesis in recent work on referential tasks such as co-reference resolution and entity linking (Haghighi and Klein, 2010; Clark and Manning, 2016; Henaff et al., 2017; Aina et al., 2018; Clark et al., 2018) is that encouraging models to learn and use entity representations will help them better carry out referential tasks.",
"To illustrate, creating an entity representation with the relevant information upon reading a woman should make it easier to denotes equal contribution.",
"resolve a pronoun mention like she .",
"1 In the mentioned work, several models have been proposed that incorporate an explicit bias towards entity representations.",
"Such entity-centric models have shown empirical success, but we still know little about what it is that they effectively learn to model.",
"In this analysis paper, we adapt two previous entity-centric models (Henaff et al., 2017; Aina et al., 2018) for a recently proposed referential task and show that, despite their strengths, they are still very far from modeling entities.",
"2 The task is character identification on multiparty dialogue as posed in SemEval 2018 Task 4 (Choi and Chen, 2018).",
"3 Models are given dialogues from the TV show Friends and asked to link entity mentions (nominal expressions like I , she or the woman ) to the characters to which they refer in each case.",
"Figure 1 shows an example, where the mentions Ross and you are linked to entity 335, mention I to entity 183, etc.",
"Since the TV series revolves around a set of entities that recur over many scenes and episodes, it is a good benchmark to analyze whether entity-centric models learn and use entity representations for referential tasks.",
"Our contributions are three-fold: First, we adapt two previous entity-centric models and show that they do better on lower frequency entities 1 Note the analogy with traditional models in formal linguistics like Discourse Representation Theory (Kamp and Reyle, 2013).",
"2 Source code for our model, the training procedure and the new dataset is published on https://github.com/ amore-upf/analysis-entity-centric-nns .",
"(a significant challenge for current data-hungry models) than a counterpart model that is not entity-centric, with the same model size.",
"Second, through analysis we provide insights into how they achieve these improvements, and argue that making models entity-centric fosters architectural decisions that result in good inductive biases.",
"Third, we create a dataset and task to evaluate the models' ability to encode entity information such as gender, and show that models fail at it.",
"More generally, our paper underscores the need for the analysis of model behavior, not only through ablation studies, but also through the targeted probing of model representations (Linzen et al., 2016; Conneau et al., 2018).",
"Modeling.",
"Various memory architectures have been proposed that are not specifically for entity-centric models, but could in principle be employed in them (Graves et al., 2014; Sukhbaatar et al., 2015; Joulin and Mikolov, 2015; Bansal et al., 2017).",
"The two models we base our results on (Henaff et al., 2017; Aina et al., 2018) were explicitly motivated as entity-centric.",
"We show that our adaptations yield good results and provide a closer analysis of their behavior.",
"Tasks.",
"The task of entity linking has been formalized as resolving entity mentions to referential entity entries in a knowledge repository, mostly Wikipedia (Bunescu and Pasca, 2006; Mihalcea and Csomai, 2007 and much subsequent work; for recent approaches see Francis-Landau et al., 2016; Chen et al., 2018).",
"In the present entity linking task, only a list of entities is given, without associated encyclopedic entries, and information about the entities needs to be acquired from scratch through the task; note the analogy to how a human audience might get familiar with the TV show characters by watching it.",
"Moreover, it addresses multiparty dialogue (as opposed to, typically, narrative text), where speaker information is crucial.",
"A task closely related to entity linking is coreference resolution , i.e., predicting which portions of a text refer to the same entity (e.g., Marie Curie and the scientist ).",
"This typically requires clustering mentions that refer to the same entity (Pradhan et al., 2011).",
"Mention clusters essentially correspond to entities, and recent work on coreference and language modeling has started exploiting an explicit notion of entity (Haghighi and Klein, 2010; Clark and Manning, 2016; Yang et al., 2017).",
"Previous work both on entity linking and on coreference resolution (cited above, as well as Wiseman et al., 2016) often presents more complex models that incorporate e.g. hand-engineered features.",
"In contrast, we keep our underlying model basic since we want to systematically analyze how certain architectural decisions affect performance.",
"For the same reason we deviate from previous work to entity linking that uses a specialized coreference resolution module (e.g., Chen et al., 2017).",
"Analysis of Neural Network Models.",
"Our work joins a recent strand in NLP that systematically analyzes what different neural network models learn about language (Linzen et al., 2016; Kadar et al., 2017; Conneau et al., 2018; Gulordava et al., 2018b; Nematzadeh et al., 2018, a.o.).",
"This work, like ours, has yielded both positive and negative results: There is evidence that they learn complex linguistic phenomena of morphological and syntactic nature, like long distance agreement (Gulor-dava et al., 2018b; Giulianelli et al., 2018), but less evidence that they learn how language relates to situations; for instance, Nematzadeh et al. (2018) show that memory-augmented neural models fail on tasks that require keeping track of inconsistent states of the world.",
"We approach character identification as a classification task, and compare a baseline LSTM (Hochreiter and Schmidhuber, 1997) with two models that enrich the LSTM with a memory module designed to learn and use entity representations.",
"LSTMs are the workhorse for text processing, and thus a good baseline to assess the contribution of this module.",
"The LSTM processes text of dialogue scenes one token at a time, and the output is a probability distribution over the entities (the set of entity IDs are given).",
"The BILSTM model is depicted in Figure 2.",
"It is a standard bidirectional LSTM (Graves et al., 2005), with the difference with most uses of LSTMs in NLP that we incorporate speaker information in addition to the linguistic content of utterances.",
"The model is given chunks of dialogue (see Appendix for hyperparameter settings such as the chunk size).",
"At each time step i , one-hot vectors for token t i and speaker entities s i are embedded Joey you ... softmax W t W e Joey: think Joey: you Joey: love { ... ... ...",
"via two distinct matrices W t and W e and concatenated to form a vector x i (Eq.",
"1, where (cid:107) denotes concatenation; note that in case of multiple simultaneous speakers S i their embeddings are summed).",
"The vector x i is fed through the nonlinear activation function tanh and input to a bidirectional LSTM.",
"The hidden state h i of a uni directional LSTM for the i th input is recursively defined as a combination of that input with the LSTM's previous hidden state h i 1 .",
"For a bi directional LSTM, the hidden state h i is the concatenation of the hidden states h i and h i of two unidirectional LSTMs which process the data in opposite directions (Eqs. 2-4).",
"h i = LSTM ( tanh ( x i ) , h i 1 ) (2) h i = LSTM ( tanh ( x i ) , h i +1 ) (3) h i = h i (cid:107) h i (4) For every entity mention t i (i.e., every token 4 that is tagged as referring to an entity), we obtain a distribution over all entities, o i [0 , 1] 1 N , by applying a linear transformation to its hidden state h i (Eq. 5), and feeding the resulting g i to a softmax classifier (Eq. 6).",
"Eq.",
"5 is where the other models will diverge.",
"The ENTLIB model (Figure",
"3) is an adaptation of our previous work in Aina et al. (2018), which was the winner of the SemEval 2018 Task 4 competition.",
"This model adds a simple memory module that is expected to represent entities because its vectors are tied to the output classes (accordingly, Aina et al., 2018, call this module entity library ).",
"We call this memory static', since it is updated only during training, after which it remains fixed.",
"Where BILSTM maps the hidden state h i to class scores o i with a single transformation (plus softmax), ENTLIB instead takes two steps: It first transforms h i into a query' vector q i (Eq.",
"7) that it will then use to query the entity library.",
"As we will see, this mechanism helps dividing the labor between representing the context (hidden layer) and doing the prediction task (query layer).",
"A weight matrix W e is used as the entity library, which is the same as the speaker embedding in Eq.",
"1: the query vector q i R 1 k is compared to each vector in W e (cosine), and a gate vector g i is obtained by applying the ReLU function to the cosine similarity scores (Eq. 8).",
"5 Thus, the query extracted from the LSTM's hidden state is used as a soft pointer over the model's representation of the entities.",
"As before, a softmax over g i then yields the distribution over entities (Eq. 6).",
"So, in the ENTLIB 4 For multi-word mentions this is done only for the last token in the mention.",
"5 In Aina et al. (2018), the gate did not include the ReLU nonlinear activation function.",
"Adding it improved results.",
"Our implementation differs from Aina et al. (2018) in one important point that we will show to be relevant to model less frequent entities (training also differs, see Section 4): The original model did not do parameter sharing between speakers and referents, but used two distinct weight matrices.",
"Note that the contents of the entity library in ENTLIB do not change during forward propagation of activations, but only during backpropagation of errors, i.e., during training, when the weights of W e are updated.",
"If anything, they will encode permanent properties of entities, not properties that change within a scene or between scenes or episodes, which should be useful for reference.",
"The next model attempts to overcome this limitation.",
"ENTNET is an adaptation of Recurrent Entity Networks (Henaff et al., 2017, Figure",
"4) to the task.",
"Instead of representing each entity by a single vector, as in ENTLIB , here each entity is represented jointly by a context-invariant or static' key and a context-dependent or dynamic' value .",
"For the keys the entity embedding W e is used, just like the entity library of ENTLIB .",
"But the values V i can be dynamically updated throughout a scene.",
"As before, an entity query q i is first obtained from the BILSTM (Eq. 7).",
"Then, ENTNET computes gate values g i by estimating the query's similarity to both keys and values, as in Eq.",
"9 (replacing Eq. 8 of ENTLIB ).",
"6 Output scores o i are computed 6 Two small changes with respect to the original model (motivated by empirical results in the hyperparameter search) as in the previous models (Eq. 6).",
"The values V i are initialized at the start of every scene ( i = 0 ) as being identical to the keys ( V 0 = W e ).",
"After processing the i th token, new information can be added to the values.",
"Eq.",
"10 computes this new information V i,j , for the j th entity, where Q , R and S are learned linear transformations and PReLU denotes the parameterized rectified linear unit (He et al., 2015): V i,j = PReLU ( QW ej + RV i,j + Sq i ) (10) This information V i,j , multiplied by the respective gate g i,j , is added to the values to be used when processing the next ( i + 1 th ) token (Eq. 11), and the result is normalized (Eq. 12): V i +1 ,j = V j + g i,j V i,j (11) V i +1 ,j = V i +1 ,j (cid:107) V i +1 ,j (cid:107) (12) Our adaptation of the Recurrent Entity Network involves two changes.",
"First, we use a biLSTM to process the linguistic utterance, while Henaff et al. (2017) used a simple multiplicative mask (we have natural dialogue, while their main evaluation was on bAbI, a synthetic dataset).",
"Second, in the original model the gates were used to retrieve and output information about the query, whereas we use them directly as output scores because our task is referential.",
"This also allows us to tie the keys to the characters of the Friends series as in the previous model, and thus have them represent entities (in the original model, the keys represented entity types, not instances).",
"The training and test data for the task span the first two seasons of Friends , divided into scenes and episodes, which were in turn divided into utterances (and tokens) annotated with speaker identity.",
"7 The set of all possible entities to refer to is given, as well as the set of mentions to resolve.",
"Only the dialogues and speaker information are available (e.g., no video or descriptive text).",
"Indeed, are that we compute the gate using cosine similarity instead of dot product, and the obtained similarities are fed through a ReLU nonlinearity instead of sigmoid.",
"7 The dataset also includes automatic linguistic annotations, e.g., PoS tags, which our models do not use.",
"one of the most interesting aspects of the SemEval data is the fact that it is dialogue (even if scripted), which allows us to explore the role of speaker information, one of the aspects of the extralinguistic context of utterance that is crucial for reference.",
"We additionally used the publicly available 300-dimensional word vectors that were pre-trained on a Google News corpus with the word2vec Skip-gram model (Mikolov et al., 2013a) to represent the input tokens.",
"Entity (speaker/referent) embeddings were randomly initialized.",
"We train the models with backpropagation, using the standard negative log-likelihood loss function.",
"For each of the three model architectures we performed a random search ( > 1500 models) over the hyperparameters using cross-validation (see Appendix for details), and report the results of the best settings after retraining without cross-validation.",
"The findings we report are representative of the model populations.",
"Results.",
"We follow the evaluation defined in the SemEval task.",
"Metrics are macro-average F 1 -score (which computes the F 1 -score for each entity separately and then averages these over all entities) and accuracy, in two conditions: All entities , with 78 classes (77 for entities that are mentioned in both training and test set of the SemEval Task, and one grouping all others), and main entities , with 7 classes (6 for the main characters and one for all the others).",
"Macro-average F 1 -score on all entities, the most stringent, was the criterion to define the leaderboard.",
"Table 1 gives our results in the two evaluations, comparing the models described in Section 3 to the best performing models in the SemEval 2018 Task 4 competition (Aina et al., 2018; Park et al., Figure 5: Accuracy on entities with high ( > 1000), medium (201000), and low ( < 20) frequency.",
"2018).",
"Recall that our goal in this paper is not to optimize performance, but to understand model behavior; however, results show that these models are worth analyzing, as that they outperform the state of the art.",
"All models perform on a par on main entities, but entity-centric models outperform BILSTM by a substantial margin when all characters are to be predicted (the difference between ENTLIB and ENTNET is not significant).",
"The architectures of ENTLIB and ENTNET help with lower frequency characters, while not hurting performance on main characters.",
"Indeed, Figure 5 shows that the accuracy of BILSTM rapidly deteriorates for less frequent entities, whereas ENTLIB and ENTNET degrade more gracefully.",
"Deep learning approaches are data-hungry, and entity mentions follow the Zipfian distribution typical of language, with very few high frequency and many lower-frequency items, such that this is a welcome result.",
"Moreover, these improvements do not come at the cost of model complexity in terms of number of parameters, since all models have roughly the same number of parameters ( 3 . 3 3 . 4 million).",
"8 Given these results and the motivations for the model architectures, it would be tempting to conclude that encouraging models to learn and use entity representations helps in this referential task.",
"However, a closer look at the models' behavior reveals a much more nuanced picture.",
"Figure 6 suggests that: (1) models are quite good at using speaker information, as the best performance is for first person pronouns and determiners ( I , my , etc.); (2) instead, models do not seem to be very good at handling other contextual information or entity-specific properties, as the worst 8 See Appendix for a computation of the models' parameters.",
"performance is for third person mentions and common nouns, which require both; 9 (3) ENTLIB and ENTNET behave quite similarly, with performance boosts in (1) and smaller but consistent improvements in (2).",
"Our analyses in the next two sections confirm this picture and relate it to the models' architectures.",
"We examine how the entity-centric architectures improve over the BILSTM baseline on the reference task, then move to entity representations (Sec-tion 6).",
"Shared speaker/referent representation.",
"We found that an important advantage of the entity-centric models, in particular for handling low-frequency entities, lies in the integrated representations they enable of entities both in their role of speakers and in their role of referents.",
"This explains the boost in first person pronoun and proper noun mentions, as follows.",
"Recall that the integrated representation is achieved by parameter sharing, using the same weight matrix W e as speaker embedding and as entity library/keys.",
"This enables entity-centric models to learn the linguistic rule a first person pronoun ( I, me , etc.) refers to the speaker regardless of whether they have a meaningful representation of this particular entity: It is enough that speaker representations are distinct, and they are because they have been randomly initialized.",
"In contrast, the 9 1st person: I , me , my , myself , mine ; 2nd person: you , your , yourself , yours ; 3rd person: she , her , herself , hers , he , him , himself , his , it , itself , its .",
"simple BILSTM baseline needs to independently learn the mapping between speaker embedding and output entities, and so it can only learn to resolve even first-person pronouns for entities for which it",
"has enough data.",
"For proper nouns (character names), entity-centric models learn to align the token embeddings with the entity representations (identical to the speaker embeddings).",
"We show this by using Representation Similarity Analysis (RSA) (Kriegesko-rte et al., 2008), which measures how topologically similar two different spaces are as the Spearman correlation between the pair-wise similarities of points in each space (this is necessary because entities and tokens are in different spaces).",
"For instance, if the two spaces are topologically similar, the relationship of entities 183 and 335 in the entity library will be analogous to the relationship between the names Joey and Ross in the token space.",
"Table 2 shows the topological similarities between the two spaces, for the different model types.",
"10 This reveals that in entity-centric models the space of speaker/referent embeddings is topologically very similar to the space of token embeddings restricted to the entities' names, and more so than in the BILSTM baseline.",
"We hypothesize that entity-centric models can do the alignment better because referent (and hence speaker) embeddings are closer to the error signal, and thus backpropagation is more effective (this again helps with lower-frequency entities).",
"Further analysis revealed that in entity-centric models the beneficial effect of weight sharing between the speaker embedding and the entity representations (both W e ) is actually restricted to first-person pronouns.",
"For other expressions, having 10 As an entity's name we here take the proper noun that is most frequently used to refer to the entity in the training data.",
"Note that for the all entities condition the absolute values are lower, but the space is much larger (over 22K pairs).",
"Also note that this is an instance of slow learning; models are not encoding the fact that a proper noun like Rachel can refer to different people.",
"two distinct matrices yielded almost the same performance as having one (but still higher than the BILSTM, thanks to the other architectural advantage that we discuss below).",
"In the case of first-person pronouns, the speaker embedding given as input corresponds to the target entity.",
"This information is already accessible in the hidden state of the LSTM.",
"Therefore, mentions cluster into entities already at the hidden layer h i , with no real difference with the query layer q i (see Figure 7).",
"Advantage of query layer.",
"The entity querying mechanism described above entails having an extra transformation after the hidden layer, with the query layer q .",
"Part of the improved performance of entity-centric models, compared to the BILSTM baseline, is due not to their bias towards entity representations' per se, but due to the presence of this extra layer.",
"Recall that the BILSTM baseline maps the LSTM's hidden state h i to output scores o i with a single transformation.",
"Gulordava et al. (2018a) observe in the context of Language Modeling that this creates a tension between two con-flicting requirements for the LSTM: keeping track of contextual information across time steps, and encoding information useful for prediction in the current timestep.",
"The intermediate query layer q in entity-centric models alleviates this tension.",
"This explains the improvements in context-dependent mentions like common nouns or second and third pronouns.",
"We show this effect in two ways.",
"First, we compare the average mean similarity s of mention pairs T e = { ( t k , t k (cid:48) ) | t k e k (cid:54) = k (cid:48) } referring to the same entity e in the hidden layer (Eq. 13) and the BILSTM ENTLIBENTNET h i h i q i h i q i 0.34 0.24 0.48 0.27 0.60 Table 3: Average cosine similarity of mentions with the same referent.",
"Table 3 shows that, in entity-centric models, this similarity is lower in the hidden layer h i than in the case of the BILSTM baseline, but in the query layer q i it is instead much higher.",
"The hidden layer thus is representing other information than referent-specific knowledge, and the query layer can be seen as extracting referent-specific information from the hidden layer.",
"Figure 8 visually illustrates the divi-sion of labor between the hidden and query layers.",
"Second, we compared the models to variants where the cosine-similarity comparison is replaced by an ordinary dot-product transformation, which converts the querying mechanism into a simple further layer.",
"These variants performed almost as well on the reference task, albeit with a slight but consistent edge for the models using cosine similarity.",
"No dynamic updates in ENTNET .",
"A surprising negative finding is that ENTNET is not using its dynamic potential on the referential task.",
"We con-firmed this in two ways.",
"First, we tracked the values V i of the entity representations and found that the pointwise difference in V i at any two adjacent time steps i tended to zero.",
"Second, we simply switched off the update mechanism during testing and did not observe any score decrease on the reference task.",
"ENTNET is thus only using the part of the entity memory that it shares with ENTLIB , i.e., the keys W e , which explains their similar performance.",
"This finding is markedly different from Henaff et al. (2017), where for instance the BaBI tasks could be solved only by dynamically updating the entity representations.",
"This may reflect our different language modules: since our LSTM module already has a form of dynamic memory, unlike the simpler sentence processing module in Henaff et al. (2017), it may be that the LSTM takes this burden off of the entity module.",
"An alternative is that it is due to differences in the datasets.",
"11 For the query layer, Eq.",
"13 is equivalent, with cos ( q t k , q t k (cid:48) ) .",
"dataset for information extraction as entity linking.",
"We leave an empirical comparison of these potential explanations for future work, and focus in Section 6 on the static entity representations W e that ENTNET essentially shares with ENTLIB .",
"The foregoing demonstrates that entity-centric architectures help in a reference task, but not that the induced representations in fact contain meaningful entity information.",
"In this section we deploy these representations on a new dataset, showing that they do notnot even for basic information about entities such as gender.",
"Method.",
"We evaluate entity representations with an information extraction task including attributes and relations, using information from an indepen-dent, unstructured knowledge basethe Friends Central Wikia.",
"12 To be able to use the models as is, we set up the task in terms of entity linking, asking models to solve the reference of natural language descriptions that uniquely identify an entity.",
"For instance, given This person is the brother of Monica Geller.",
", the task is to determine that person refers to Ross Geller , based on the information in the sentence.",
"13 The information in the descriptions was in turn extracted from the Wikia.",
"We do not retrain the models for this task in any waywe simply deploy them.",
"We linked the entities from the Friends dataset used above to the Wikia through a semi-automatic procedure that yielded 93 entities, and parsed the Wikia to extract their attributes ( gender and job ) and relations (e.g., sister , mother-in-law ; see Appendix for details).",
"We automatically generate the natural language descriptions with a simple pattern (Figure",
"9) from combinations of properties that uniquely identify a given entity within the set of Friends characters.",
"14 We 12 http://friends.wikia.com .",
"consider unique descriptions comprising at most 3 properties.",
"Each property is expressed by a noun phrase, whereas the article is adapted (definite or indefinite) depending on whether that property applies to one or several entities in our data.",
"This yields 231 unique natural language descriptions of 66 characters, created on the basis of overall 61 relation types and 56 attribute values.",
"Results.",
"The results of this experiment are negative: The first column of Table 4 shows that models get accuracies near 0.",
"A possibility is that models do encode information in the entity representations, but it doesn't get used in this task because of how the utterance is encoded in the hidden layer, or that results are due to some quirk in the specific setup of the task.",
"However, we replicated the results in a setup that does not encode whole utterances but works with single attributes and relations.",
"While the methodological details are in the Appendix, the gender' and job' columns of Table 4 show that results are a bit better in this case but models still perform quite poorly: Even in the case of an attribute like gender, which is crucial for the resolution of third person pronouns ( he/she ), the models' results are quite close to that of a random baseline.",
"Thus, we take it to be a robust result that entity-centric models trained on the SemEval data do not learn or use entity informationat least as recoverable from language cues.",
"This, together with the remainder of the results in the paper, suggests that models rely crucially on speaker information, but hardly on information from the linguistic context.",
"15 Future work should explore alternatives such as pre-training with a language modeling task, which KNOWN .",
"15 Note that 44% of the mentions in the dataset are first person, for which linguistic context is irrelevant and the models only need to recover the relevant speaker embedding to succeed.",
"However, downsampling first person mentions did not improve results on the other mention types.",
"could improve the use of context.",
"Recall that the motivation for entity-centric models is the hypothesis that incorporating entity representations into the model will help it better model the language we use to talk about them.",
"We still think that this hypothesis is plausible.",
"However, the architectures tested do not yet provide convincing support for it, at least for the data analyzed in this paper.",
"On the positive side, we have shown that framing models from an entity-centric perspective makes it very natural to adopt architectural decisions that are good inductive biases.",
"In particular, by exploiting the fact that both speakers and referents are entities, these models can do more with the same model size, improving results on less frequent entities and emulating rule-based behavior such as a first person expression refers to the speaker.",
"On the negative side, we have also shown that they do not yield operational entity representations, and that they are not making good use of contextual information for the referential task.",
"More generally, our paper underscores the need for model analysis to test whether the motivations for particular architectures are borne out in how the model actually behaves when it is deployed.",
"We gratefully acknowledge Kristina Gulordava and Marco Baroni for the feedback, advice and support.",
"We are also grateful to the anonymous reviewers for their valuable comments.",
"This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 715154), and from the Spanish Ramon y Cajal programme (grant RYC-2015-18907).",
"We are grateful to the NVIDIA Corporation for the donation of GPUs used for this research.",
"We are also very grateful to the Pytorch developers.",
"This paper reflects the authors' view only, and the EU is not responsible for any use that may be made of the information it contains."
] | [
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"result",
"abstain",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"other",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"abstain",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"In data-to-text (D2T) generation, training on in-domain data leads to overfitting to the data representation and repeating training data noise.",
"We examine how to avoid finetuning pretrained language models (PLMs) on D2T generation datasets while still taking advantage of surface realization capabilities of PLMs.",
"Inspired by pipeline approaches, we propose to generate text by transforming single-item descriptions with a sequence of modules trained on general-domain text-based operations: ordering, aggregation, and paragraph compression.",
"We train PLMs for performing these operations on a synthetic corpus WIKIFLUENT which we build from English Wikipedia.",
"Our experiments on two major triple-to-text datasetsWebNLG and E2Eshow that our approach enables D2T generation from RDF triples in zero-shot settings.",
"1 1 Introduction The aim of data-to-text (D2T) generation is to produce natural language descriptions of structured data (Gatt and Krahmer, 2018; Reiter and Dale, 1997).",
"Although pipelines of rule-based D2T generation modules are still used in practice (Dale, 2020), end-to-end approaches based on PLMs recently showed superior benchmark performance (Ke et al., 2021; Chen et al., 2020a; Ferreira et al., 2020; Kale and Rastogi, 2020b; Ribeiro et al., 2020), surpassing pipeline systems (Ferreira et al., 2019) in both automatic and human evaluation metrics.",
"Finetuning PLMs on human-written references is widely accepted as a standard approach for adapting PLMs to the D2T generation objective and achieving good performance on a given benchmark (Agarwal et al., 2021; Ke et al., 2021).",
"However, finetuning a model on the domain-specific data leads to overfitting to the particular benchmark, decreasing performance on out-of-domain 1 Our code and data is available at https://github.",
"data (Laha et al., 2019).",
"Gathering a large set of references for a particular domain is also costly and time-consuming as it usually requires collecting human-written references through crowdsourcing (Duek et al., 2020).",
"These problems can be partially mitigated using few-shot approaches (Chen et al., 2020b; Ke et al., 2021; Su et al., 2021a), which operate with only several dozens or hundreds of annotated examples, but the robustness of these approaches is questionableselecting a representative set of examples which would improve performance is difficult (Chang et al., 2021a), and the limited sample is often noisy, increasing the chance of hallucinations and omissions (Duek et al., 2019; Harkous et al., 2020; Rebuffel et al., 2022).",
"In this paper, we present a zero-shot alternative to the traditional finetuning paradigm by formulating the D2T generation from RDF triples as a sequence of general-domain operations over text in natural language.",
"We start by transforming individual triples to text using trivial templates, which 3914 we subsequently order, aggregate, and compress on the paragraph level to produce the resulting description of the data.",
"In constrast to traditional pipeline systems, all our pipeline modules are built upon PLMs and operate over sentences in natural language.",
"The modules are trained on our new WIKIFLUENT corpus, which contains 934k examples of first paragraphs from the English Wikipedia, each supplied with a synthesized set of simple template-like sentences which together convey the meaning of the original paragraph.",
"Our approach allows generating natural language descriptions from RDF triples with a minimum amount of domain-specific rules or knowledge and without using training data from the D2T datasets.",
"Although our approach is primarily a probe into the territory of zero-shot approaches and cannot yet match the quality of state-of-the-art models, we show that it can yield large improvements upon simple baselines and match older supervised systems on automatic metrics for text fluency.",
"Moreover, the semantic accuracy metrics and our manual error analysis suggest that our approach offers a way to prevent omissions and hallucinations common in few-shot approaches.",
"Our contributions are the following: (1) We propose an alternative D2T generation approach based on general-domain text-to-text operations (ordering, aggregation, and paragraph compression).",
"(2) We introduce a synthetic WIKIFLUENT corpus containing 934k sentences based on English Wikipedia, providing training data for the operations in (1).",
"(3) We apply our system on two D2T datasets and evaluate its performance both automatically and manually, including the contribution of individual pipeline modules.",
"(4) We release our code, data, pretrained models, and system outputs to ease future research.",
"1 2 Related Work D2T Generation with PLMs Large neural language models pretrained on self-supervised tasks (Lewis et al., 2020; Liu et al., 2019; Devlin et al., 2019) have recently gained a lot of traction in D2T generation research (Ferreira et al., 2020; Kasner and Duek, 2020b).",
"Following Chen et al. (2020b), other works adopted PLMs for few-shot D2T generation (Chang et al., 2021b; Su et al., 2021a).",
"Kale and Rastogi (2020b) and Ribeiro et al. (2020) showed that PLMs using linearized representations of data can outperform graph neural networks on graph-to-text datasets, recently surpassed again by graph-based models (Ke et al., 2021; Chen et al., 2020a).",
"Although the models make use of general-domain pretraining tasks, all of them are eventually finetuned on domain-specific data.",
"Pipeline-based D2T Generation Until the recent surge of end-to-end approaches (Duek et al., 2020), using several modules connected in a pipeline was a major approach for D2T generation (Gatt and Krahmer, 2018; Reiter, 2007; Reiter and Dale, 1997).",
"Our approach is inspired by the pipeline approaches, in particular the pipelines utilizing neural modules (Ferreira et al., 2019).",
"In contrast with these approaches, our pipeline works with unstructured data in natural language and it operates in zero-shot setting, i.e. without using any training data from target D2T datasets.",
"Laha et al. (2019) introduce a three-step pipeline for zero-shot D2T generation similar to ours.",
"Unlike the approach we describe here, they use a semiautomatic template generation system, 2 their sentence fusion is rule-based, and they do not address content planning.",
"Content Planning in D2T Generation Content planning, i.e. the task of ordering input facts and aggregating them into individual sentences, is one of the steps of the traditional D2T pipeline (Gatt and Krahmer, 2018).",
"As shown by Moryossef et al. (2019a,b) and confirmed by other works (Pudup-pully et al., 2019; Zhao et al., 2020; Trisedya et al., 2020; Su et al., 2021b), including a content plan improves the quality of outputs in neural D2T pipelines.",
"Unlike the aforementioned planners, which use predicates or keys from D2T datasets for representing the data items, our planner is trained on ordering sentences in natural language.",
"Sentence Ordering Sentence ordering is the task of organizing a set of natural language sentences to increase the coherence of a text (Barzilay et al., 2001; Lapata, 2003).",
"Several neural methods for this task were proposed, using either interactions between pairs of sentences (Chen et al., 2016; Li and Jurafsky, 2017), global interactions (Gong et al., 2016; Wang and Wan, 2019), or combination of both (Cui et al., 2020).",
"We base our ordering module (5.2) on the recent work of Calizzano et al. 2 As we describe in 5.1, we opted for a simpler way for generating the templates to showcase the results of our approach independently of the template generator quality.",
"Aggregating Input into Sentences Typically, multiple pieces of input information need to be merged into a single sentence.",
"Previous works (Wiseman et al., 2018; Shao et al., 2019; Shen et al., 2020; Xu et al., 2021) capture the segments which correspond to individual parts of the input as latent variables.",
"Unlike these works, we adopt a simpler scenario using an already ordered sequence of facts (see 3.1), into which we selectively insert delimiters to mark sentence boundaries.",
"Paragraph Compression We introduce paragraph compression (PC) as a new task and the final step in our D2T generation pipeline.",
"This task combines several standard natural-language tasks including sentence fusion, rephrasing, and coreference resolution.",
"Unlike text summarization or simplification (Zhang et al., 2020; Jiang et al., 2020), we aim to convey the complete semantics of the text without omitting any facts.",
"In contrast to sentence fusion (Geva et al., 2019; Barzilay and McKeown, 2005) or sentence compression (Filip-pova and Altun, 2013), we operate in the context of multiple sentences in a paragraph.",
"The task is the central focus of our WIKIFLUENT corpus (4).",
"In this section, we provide the formal description of our proposed approach.",
"We focus on the task of producing a natural language description Y for a set of n RDF triples X t x 1 , . . . , x n u .",
"Each triple x i t s i , p i , o i u consists of subject s i , predicate p i , and object o i .",
"triples X on the input, we: (1) transform the triples into facts , which are sentences in natural language, (2) sort the facts using an ordering module, (3) insert sentence delimiters between the sorted facts using an aggregation module, (4) input the ordered sequence of facts with delimiters into a paragraph compression module, which generates the final description Y .",
"The individual steps are described in the following sections: transforming individual triples to text (3.1), ordering (3.2), aggregation (3.3), and paragraph compression (3.4).",
"The first step in our pipeline involves transforming each of the input triples x i P X into a fact f i P F using a transformation T : X F .",
"We define a fact f i as a single sentence in natural language describing x i .",
"The transformation serves two purposes:",
"(a) preparing the data for the subsequent text-to-text operations,",
"(b) introducing in-domain knowledge about the semantics of individual predicates.",
"This step can be realized e.g. using a simple template for each predicate (cf. 5.1).",
"We assume that the default order of triples X is random and the same applies for the respective facts F .",
"Note, however, that that F is a indeed set of meaningful sentences.",
"We can use this to our advantage and apply a sentence ordering model to maximize the coherency of the paragraph resulting from their concatenation.",
"An example outcome of such operation may be grouping together facts mentioning birth date and birth place of a person, followed by their occupation (see Figure 1).",
"The ordering module allows downstream modules to only focus on operations over neighboring sentences.",
"Formally, we apply the ordering model O p F q to get an ordered sequence of facts: F o t f o 1 , . . . , f o n u , where o 1: n is a permutation of indices.",
"We describe our ordering model in 5.2.",
"Some facts will be typically mentioned together in a single sentence.",
"Considering the previous example, occupation is likely to be mentioned separately, while birth date and birth place are likely to be mentioned together.",
"Using an ordered sequence of facts as input, we can apply an aggregation model to decide which facts should be merged into a single sentence.",
"Formally, the aggregation model takes a sequence of ordered facts F o as input and produces a sequence of sentence delimiters A p F o q t o 1 , o 2 , . . . , o n 1 u ; i P t 0 , 1 u .",
"The output i 1 means that the neighboring facts should be mentioned separately, i.e. the neighboring sentences should not be fused.",
"Conversely, i 0 means that the facts should be aggregated and their corresponding sentences should be fused.",
"We describe our aggregation model in 5.3.",
"The paragraph compression (PC) model is a generative model which outputs the final text description.",
"It has two main objectives:",
"(a) fusing related sentences, i.e., sentences i and j in between which i 0 , and",
"(b) rephrasing the text to improve its fluency, e.g. fixing disfluencies in the templates, replacing noun phrases with refering expressions, etc.",
"The goal of the task is to preserve the semantics of the text which is an already ordered sequence of sentences, so the edits will typically be minor.",
"Formally, the model takes as input the ordered sequence of facts with delimiters F a t f o 1 , o 1 , f o 2 , . . . , o n 1 , f o n u and produces the final text Y . We describe our PC model in 5.4. 4 WIKIFLUENT Corpus Here we descibe the process of building a large-scale synthetic corpus WIKIFLUENT . The corpus provides training data for the neural models which we use in our implementation of the ordering, aggregation, and paragraph compression modules (cf. 5). Our goal is to cover a broad range of domains while capturing the sentence style in D2T generation with respect to both the input facts and the target descriptions. In other words, we aim to build a corpus in which (1) the input is a set of simple, template-like sentences, (2) the output is a fluent text in natural language preserving the semantics of the input. As we describe below in detail, we achieve that by using human-written paragraphs in English Wikipedia and applying split-and-rephrase and coreference resolution models to obtain synthetic source texts. The process is illustrated in Figure 2; corpus statistics are included in Appendix A. 4.1 Data Source For building the WIKIFLUENT corpus, we extracted 934k first paragraphs of articles from a Wikipedia dump 3 using WikiExtractor (Attardi, 2015). Wikipedia is commonly used for large-scale pretraining of D2T generation models (Jin et al., 2020; Chen et al., 2020a). Although it is not bias-free, it provides more balanced sample of natural language use than typical D2T generation datasets. We used the first paragraphs of Wikipedia entries, which contain mostly concise, fact-based descriptions. We selected paragraphs with length between 3 enwiki-20210401-pages-articles-multistream The Westmeath Examiner is a weekly newspaper in Westmeath, Ireland. It is located in Westmeath, Ireland. The Westmeath Examiner is a weekly newspaper. original paragraph The Westmeath Examiner is a weekly newspaper. It was founded in 1882. It was founded in 1882. split-and-rephrase coreference replacement The Westmeath Examiner is located in Westmeath, Ireland. The Westmeath Examiner was founded in 1882. processed paragraph split successful pronounsresolved Figure 2: The building process of the WIKIFLUENT corpus. We apply a split-and-rephrase model on each sentence in the paragraph and resolve coreferences in the split sentences. The result is a set of simple sentences which together convey the same meaning as the original paragraph. The synthesized sentences are used as input into our models, the original human-written texts are used as ground truth . 30-430 characters; filtering out lists, disambiguations, and repeated and malformed paragraphs. To balance the length of inputs, we selected 250k examples each from 4 equally sized length ranges (30-130 characters, etc.). 4.2 Split-and-Rephrase To generate a set of simple sentences, we divide each paragraph into sentences using NLTK (Bird, 2006) and apply a split-and-rephrase model on each sentence. Split-and-rephrase is a task of splitting a complex sentence into a meaning preserving sequence of shorter sentences (Narayan et al., 2017). The process is illustrated in the upper part of Figure 2. We train our split-and-rephrase model on the large-scale WikiSplit corpus by Botha et al. (2018), containing human-made sentence splits from Wikipedia edit history. Following the same setup as for a paragraph compression model (3.4), we train BART-base (Lewis et al., 2020) on the WikiSplit dataset in a sequence-to-sequence setting. 
Next, we apply the trained split-and-rephrase model on each sentence in our Wikipedia-based corpus, uniformly randomly choosing between 0-2 recursive calls to ensure that the splits are not deterministic. If the sentence cannot be meaningfully split, the model tends to duplicate the sentence on the output; in that case, we use only the original sentence and do not proceed with the splitting. 3917 4.3 Coreference Replacement As the next step, we concatenate the split sentences and apply a coreference resolution model (Gardner et al., 2018; Lee et al., 2018) in order to replace referring expressions with their antencendents (e.g., pronouns with noun phrases). The motivation for this step is to match the style of the facts (see 3.1), which do not use pronouns since each fact describes a single triple only. Note that this procedure replaces the referring expressions only in the synthesized sentences (which are used as input) and keeps them in the original paragraphs (which are used as ground truth). As a consequence, the paragraph compression module is implicitly trained to generate referring expressions in the final description. 4.4 Filtering To ensure that the generated sentences convey the same semantics as the original paragraph, we use a pretrained RoBERTa model 4 (Liu et al., 2019) trained on the MultiNLI dataset (Williams et al., 2018) for checking the semantic accuracy of the generated text. Following Duek and Kasner (2020), we test if the original paragraph entails each of the synthesized sentences (checking for omissions), and if the set of concatenated synthesized sentences entails the original paragraph (checking for hallucinations). In a filtered version of the WIKIFLUENT corpus, we include only the examples without omissions or hallucinations (as computed by the model), reducing it to 714k examples (approximately 75% of the original size). 5 Implementation In this section, we describe how we implement our pipeline modules (3) using simple template transformations (5.1) and neural models trained on the WIKIFLUENT dataset (5.2-5.4). 5 5.1 Templates We transform triples into facts (3.1) using a single-triple template t i for each predicate. For example, if p i instrument , then T p p i q s i plays o i (cf. Table 1).",
"We follow previous work in which simple hand-crafted templates have been used as an efficient way of introducing domain knowledge (Kale and Rastogi, 2020a; Kasner and Duek, 2020a).",
"Compared to more complex rule-based 4 https://huggingface.co/roberta-large-mnli 5 Our training setup details are included in Appendix C. dataset predicate template WebNLG instrument < s > plays < o >.",
"template generation engines (Laha et al., 2019; Hei-dari et al., 2021; Mehta et al., 2021), the approach may produce less fluent outputs, but it minimizes manual workload and makes it easier to control the quality of the input for the subsequent steps.",
"For our ordering model (3.2), we use the Simple Pointer model from Calizzano et al. (2021).",
"The model is based on a pretrained BART-base extended with a pointer network from Wang and Wan (2019).",
"We provide a short description of the model here; for details please refer to Calizzano et al. (2021).",
"In the encoding phase, facts F are concatenated and tokenized.",
"Each fact is surrounded by special tokens denoting the beginning ( <s> ) and the end ( </s> ) of the fact.",
"The sequence is processed by the BART encoder, generating a sequence of encoder states E for each end token </s> representing the preceding fact.",
"The decoding proceeds autoregressively.",
"To bootstrap the decoding process, the pair of tokens <s></s> is fed into the decoder, producing the decoder state d 1 .",
"The pointer network (attend-ing to d 1 and E ), selects the first ordered fact f o 1 , which is fed into the decoder in the next step ( d 2 <s> f o 1 </s> ). The process is repeated until the all the facts are decoded in a particular order. The pointer network computes the probability of a fact to be on the j -th position, using the encoder output E and the decoder output state d j . The network is based on the scaled dot product attention, where d j is the query and encoder outputs E i are the keys: Q d j WQK EWKP j softmax QKT ? b . 3918 A dam is a barrier obstructing flowing water. A dam is a barrier. 3-stage 2-stage 1-stage A dam obstructs flowing water. src tgt a agg ord PC + PC+agg PC+ord+agg b b a b b a a Figure 3: An example illustrating how the individual modules are trained and subsequently applied as the parts of the pipeline. See 5.2 for description of the ordering model ( ORD ), 5.3 for the aggregation model ( AGG ), and 5.4 and 6 for the paragraph compression model (PC, PC+ AGG , PC+ ORD + AGG ). Here WQ and WKP R b b , b is the dimension of BART hidden states, and P j P R n ` 1 is the probability distribution for the j -th position (i.e., P ji is the probability that fact f i is on the j -th position). We train the model using the synthesized simple sentences in the WIKIFLUENT corpus, randomly shuffling the order of the sentences and training the model to restore their original order. 5.3 Aggregation Model We base our aggregation model (3.3) on RoBERTa-large (Liu et al., 2019) with a token classification head. 6 Similarly to the ordering model (5.2), we input the sequence of (now ordered) facts F o into the model, separating each pair of facts f o i with a special token </s> (used by the model as a separator). Subsequently, the token classification layer classifies each separator </s> i position into two classes t 0 , 1 u corresponding to the delimiter i . We ignore the outputs for the non-separator tokens while computing cross-entropy loss. We create the training examples using the synthesized sentences in the WIKIFLUENT corpus, in which we set i 0 for the sentences i, i ` 1 which were originally aggregated (i.e., are the result of splitting a single sentence) and i 1 otherwise. 5.4 Paragraph Compression Model We adopt BART-base for our paragraph compression model. We finetune the model on the WIKIFLUENT corpus, concatenating the synthesized sentences on the input. We add delimiters between the sentences i and i ` 1 where i 1 using a special token <sep> , which we add to the model vocabulary. As shown in Keskar et al. (2019), including control codes for training the model can steer the model towards producing certain outputs. Here we expect that the model will learn to fuse the sentences between which there are no delimiters 6 https://huggingface.co/transformers/model_ doc/roberta.html#robertafortokenclassification on the input. We evaluate how the model learns to respect the order and aggregation markers in 7.3. 6 Experiments We train our pipeline modules on the WIKIFLUENT corpus as described in 5. Next, we use these modules without finetuning for generating descriptions for RDF triples on two English D2T datasets, WebNLG and E2E. Datasets The datasets differ in domain, size, textual style, and number of predicates (see Appendix A for details): WebNLG (Gardent et al., 2017; Ferreira et al., 2020) contains RDF triples from DBPedia (Auer et al., 2007) and their crowdsourced descriptions. We use version 1.4 of the dataset for comparison to prior work. 
"We hand-crafted templates for all 354 predicates, including unseen predicates in the test set.",
"(Footnote 7: See Appendix B for details on template creation.)",
"E2E (Novikova et al., 2017; Dušek et al., 2020) contains restaurant recommendations in the form of attribute-value pairs.",
"We use the cleaned version of the dataset (Dušek et al., 2019).",
"Following previous work, we transform the attribute-value pairs into RDF triples (using the restaurant name as the subject) and then apply the same setup as for WebNLG.",
"We created a template for each of the 8 attributes manually.",
"Pipeline versions.",
"In order to evaluate individual components of our pipeline, we train three versions of the paragraph compression model (see 5.4).",
"The models share the same architecture and targets, but differ in their inputs: PC takes as input ordered facts with delimiters (as described in 3.4); PC+AGG takes as input ordered facts without delimiters (i.e., the aggregation is left implicitly to the model); PC+ORD+AGG takes as input facts in random order and without delimiters (i.e., both ordering and aggregation are left implicitly to the model).",
"Correspondingly, we test three versions of the pipeline in our ablation study (see Figure 3): 3STAGE, the full version of the pipeline consisting of the ordering model (ORD), the aggregation model (AGG) and the PC model (following the full pipeline from 3); 2STAGE, a pipeline consisting of the ORD model and the PC+AGG model; 1STAGE, a single stage consisting of the PC+ORD+AGG model.",
"We evaluate all versions of the pipeline with PC models trained on the full and filtered versions of the WIKIFLUENT dataset (see 4).",
"7 Evaluation and Discussion.",
"Our main aim is the evaluation of our pipeline on the downstream task of D2T generation.",
"We evaluate outputs from the {1,2,3}STAGE variants of our pipeline using automatic metrics (7.1), and we perform a detailed manual error analysis of the model outputs (7.2).",
"We also evaluate the performance of the content planning modules and the ability of the PC module to follow the content plan (7.3).",
"In 7.4, we include an intrinsic evaluation of our modules on the WIKIFLUENT test set.",
"7.1 Automatic Metrics.",
"Following prior work, we use BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005) to evaluate the outputs against the human references.",
"(Footnote 8: We use the implementation from https://github.com/tuetschek/e2e-metrics.)",
"We also evaluate the number of omission and hallucination errors (i.e., facts missing or added, respectively) using a metric from Dušek and Kasner (2020) based on a RoBERTa model (Liu et al., 2019) pretrained on natural language inference (NLI).",
"(Footnote 9: We additionally evaluated the outputs on the E2E dataset using the provided pattern-based slot error script. See Appendix D for details.)",
"We include a diverse set of baselines for comparison.",
"For WebNLG (see Table 3), we compare our systems with the results of: UPF-FORGe and MELBOURNE, grammar-based and supervised systems, respectively, from the first run of the WebNLG Challenge (Gardent et al., 2017); Ke et al. (2021), a state-of-the-art system with a structure-aware encoder and task-specific pretraining; and Laha et al. (2019), the only other (to our knowledge) zero-shot D2T generation system applied to WebNLG.",
"For E2E (see Table 4), we compare our systems with the results of: TGEN (Dušek and Jurčíček, 2015), the baseline system for the E2E Challenge (Dušek et al., 2020); and Harkous et al. (2020), a state-of-the-art supervised system on cleaned E2E data.",
"For both datasets, COPY denotes the baseline of copying the facts without further processing.",
"The automatic evaluation shows that our systems consistently outperform the COPY baseline (e.g., by 12 BLEU points for E2E), which is already strong thanks to our manually curated set of templates.",
"(Footnote 10: On WebNLG, our COPY baseline achieves 37.18 BLEU points, compared to 24.80 BLEU points of the full system of Laha et al. (2019), which uses automatic template generation.)",
"Automatic scores also suggest that our systems are comparable with some older supervised systems.",
"Nevertheless, our systems still underperform the state-of-the-art supervised systems.",
"For this reason, we further focus on manual error analysis in 7.2 to pinpoint the current shortcomings of our approach.",
"The 2STAGE system is generally on par with the 3STAGE system or better, which indicates that explicit aggregation using the AGG model may not be necessary.",
"However, an advantage of having a separate aggregation module is the possibility to control the aggregation step explicitly.",
"The models using the filtered version of the corpus generally produce better results, although they also bring in a larger number of omissions.",
"7.2 Manual Error Analysis.",
"Since automatic performance metrics do not provide insights into specific weaknesses of the system (van Miltenburg et al., 2021), we manually examined 100 outputs of the models.",
"We counted the number of errors: factual (hallucinations, omissions, incorrect fact merging, redundancies) and grammatical.",
"The results are summarized in Table 5.",
"The 1STAGE model (which has to order the facts implicitly) tends to repeat the facts in the text (especially in E2E) and produces frequent hallucinations.",
"These problems are largely eliminated with the 2STAGE and 3STAGE models, which produce almost no hallucinations or omissions.",
"(Table 2: Example outputs of our model (3STAGE, filtered); see Appendix E for more examples. Example 1: input (Allen Forrest; background; solo singer), (Allen Forrest; genre; Pop music), (Allen Forrest; birthPlace; Dothan, Alabama); templates 'Allen Forrest is a solo singer. Allen Forrest performs Pop music. Allen Forrest was born in Dothan, Alabama.'; model 'Allen Forrest is a solo singer who performs Pop music. He was born in Dothan, Alabama.'; human 'Born in Dothan, Alabama, Allen Forrest has a background as a solo singer and was a pop artist.' Example 2: input name[Wildwood], eatType[restaurant], food[French], area[riverside], near[Raja Indian Cuisine]; templates 'Wildwood is a restaurant. Wildwood serves French food. Wildwood is in the riverside. Wildwood is near Raja Indian Cuisine.'; model 'Wildwood is a restaurant serving French food. It is in the riverside near Raja Indian Cuisine.'; human 'A amazing French restaurant is called the Wildwood. The restaurant is near the Raja Indian Cuisine in riverside. They love kids.')",
"(Table 3: Automatic metrics on WebNLG, reported as B / M / O / H, where B = BLEU, M = METEOR, O = omissions / # facts, H = hallucinations / # examples. UPF-FORGe: 38.65 / 39.00 / 0.075 / 0.101; MELBOURNE: 45.13 / 37.00 / 0.237 / 0.202; Ke et al. (2021): 66.14 / 47.25 / - / -; Laha et al. (2019): 24.80 / 34.90 / - / -; COPY: 37.18 / 38.77 / 0.000 / 0.000; full 3STAGE: 42.92 / 39.07 / 0.051 / 0.148; full 2STAGE: 42.90 / 39.28 / 0.043 / 0.125; full 1STAGE: 39.08 / 38.94 / 0.071 / 0.204; filtered 3STAGE: 43.19 / 39.13 / 0.152 / 0.073; filtered 2STAGE: 43.49 / 39.32 / 0.146 / 0.096; filtered 1STAGE: 42.99 / 38.81 / 0.202 / 0.093. The systems marked with an asterisk in the original table are trained on the WebNLG dataset; results for the systems marked with a dagger are taken from the respective works.)",
"However, the outputs on WebNLG for all systems suffer from semantic errors resulting from the merging of unrelated facts.",
"This mostly happens with unrelated predicates connected to the same subject/object (e.g., 'X was born in Y' and 'X worked as Z' expressed as 'X worked as Z in Y'; see Appendix E for examples).",
"This behavior is the main obstacle to ensure factual consistency of the output.",
"As a possible remedy, we propose explicitly controlling the semantics of sentence fusion (Ben-David et al., 2020), e.g. using a variant of constrained decoding (Balakrishnan et al., 2019; Wang et al., 2021).",
"On the E2E data, which has a simpler triple structure (all predicates share the same subject), the outputs are generally consistent and the 2STAGE and 3STAGE models exhibit almost no semantic errors.",
"Grammar errors and disfluencies stem mainly from over-eager paragraph compression or from artifacts in our templates and are relatively minor (e.g., missing is in serves French food and B M O H TGEN 40.73 37.76 0.016 0.083 Harkous et al. (2020) 43.60 39.00 --COPY 24.19 34.89 0.000 0.000 full 3STAGE 36.04 36.95 0.001 0.001 2STAGE 35.84 36.91 0.001 0.001 1STAGE 30.81 36.01 0.009 0.122 filtered 3STAGE 35.88 36.95 0.001 0.001 2STAGE 36.01 36.99 0.001 0.001 1STAGE 34.08 36.32 0.012 0.050 Table 4: Automatic metrics on E2E. B = BLEU, M = METEOR, O = omissions / # facts, H = hallucinations / # examples. The systems marked with asterisk (*) are trained on the E2E dataset. The results for Harkous et al. (2020) are taken from their work. WebNLG E2E H I O R G H I O R G f u ll 3STAGE 3 39 2 2 16 0 1 0 0 17 2STAGE 8 36 1 5 16 1 1 0 1 23 1STAGE 28 27 6 10 20 17 0 1 79 45 fi lt e r e d 3STAGE 2 37 2 1 15 0 0 0 0 17 2STAGE 5 32 1 2 14 0 0 0 0 11 1STAGE 8 40 6 6 16 11 2 1 41 22 Table 5: Number of manually annotated errors on 100 examples: H = hallucinations, I = incorrect fact merging, O = omissions, R = redundancies, G = grammar errors or disfluencies. family-friendly).",
"Following Su et al. (2021b) and Zhao et al. (2020), we report the accuracy and BLEU-2 score of our ordering model on WebNLG against the human-generated plans from Ferreira et al. (2018).",
"The results are listed in Table 6 and compared against a RANDOM baseline (random ordering) and prior work.",
"The results show that although our approach again lags behind state-of-the-art supervised ap-3921 B-2 Acc Transformer (Ferreira et al., 2019) : 52.20 0.35 Step-by-step (Moryossef et al., 2019b) : 70.80 0.47 PLANENC (Zhao et al., 2020) : 80.10 0.62 Plan-then-generate (Su et al., 2021b) : 84.97 0.72 RANDOM 47.00 0.29 Ours (BART+ptr) 59.10 0.48 Table 6: Evaluation of our zero-shot ordering model based on Calizzano et al. (2021).",
"proaches, it can outperform both the random baseline and the Transformer-based approach from Ferreira et al. (2019) while not using any in-domain examples.",
"We also evaluate the accuracy of our aggregation model , using triples ordered according to the plans from Ferreira et al. (2018) as input.",
"The accuracy is 0.33 per example and 0.62 per sentence boundary (random baseline is 0.23 and 0.50, re-spectively).",
"The results show that although our approach is better than the random baseline, there is still room for improvement.",
"Finally, we manually evaluate how the PC model follows the content plan (i.e., keeping the predefined order and aggregating the sentences according to the delimiters) using 100 randomly cho-sen examples with more than 1 triple on WebNLG and E2E.",
"We find that the model follows the content plan in 95% and 100% of cases, respectively.",
"The incorrect cases include a fact not properly mentioned or an extra boundary between sentences without a separator.",
"We can thus conclude that the pretraining task successfully teaches the PC model to follow a given content plan.",
"Aside from the main D2T generation results, we also provide an intrinsic evaluation of our pipeline modules on the WIKIFLUENT test sets.",
"We evaluated the ordering, aggregation, and paragraph compression modules trained on the full WIKIFLUENT corpus.",
"The results for both full and filtered test sets are summarized in Table 7.",
"The PC model achieves high scores, which follows from the fact that we provide it with ground truth content plans (i.e., the ordering and aggregation plan corresponding to the original paragraph).",
"Accuracy of the ordering and aggregation modules is comparable to their performance on D2T datasets.",
"Our experiments outline several possible future research directions.",
"Automatic generation of facts without using hand-crafted templates (cf. 5.1) could allow applying zero-shot generation systems to datasets with a large number of predicates, such as ToTTo (Parikh et al., 2020).",
"The task of paragraph compression could be used as a task-specific pretraining (Gururangan et al., 2020) for more efficient finetuning of D2T models, e.g., with a small amount of clean data.",
"Consistency checks may be introduced in the pipeline to control the output from the modules, and individual modules may be improved by using more efficient model architectures.",
"More research is also needed regarding the main shortcoming of our approach, i.e., the semantic errors stemming from merging of facts in improper ways.",
"As we suggested in 7.2, explicitly controlling the semantics of sentence fusion could help to mitigate this issue, while still keeping the advantages of a zero-shot approach.",
"We presented an approach for zero-shot D2T generation.",
"The approach uses a pipeline of PLMs trained on general-domain lexical operations over natural language.",
"The pipeline builds upon traditional approaches and consists of three interpretable intermediate steps.",
"By avoiding noisy human-written references from the D2T datasets, our models produce more semantically consitent output.",
"We believe that training models for zero-shot D2T generation using large cross-domain corpora will help to build D2T generation systems with good performance across various domains.",
"We study zero-shot D2T generation with the focus on generating descriptions for RDF triples.",
"Although the task of D2T generation has numerous applications, using neural models for D2T generation (especially in the zero-shot context) is still limited to experimental settings (Dale, 2020).",
"Similarly to other recent approaches for D2T generation, our approach relies on PLMs, which are known to reflect the biases in their pretraining corpus (Ben-der et al., 2021; Rogers, 2021).",
"Our system may therefore rely on spurious correlations for verbalizing e.g. gender or occupation of the entities.",
"Since we cannot guarantee the factual correctness of the outputs of our system, the outputs should be used with caution.",
"On the flip side, our approach helps to reduce the number of omissions and hallucinations stemming from noise in human-written references.",
"Our work thus contributes to the general aim of D2T generation in conveying the data semantics accurately and without relying on implicit world knowledge.",
"This research was supported by Charles University projects GAUK 140320, SVV 260575 and PRIMUS/19/SCI/10, an Apple NLU Research Grant for Heriot-Watt University and Charles University, and by the European Research Council (Grant agreement No. 101039303 NG-NLG).",
"It used resources provided by the LINDAT/CLARIAH-CZ Research Infrastructure (Czech Ministry of Education, Youth and Sports project No. LM2018101)."
] | [
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"other",
"method",
"result",
"method",
"objective",
"method",
"result",
"abstain",
"objective",
"method",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"other"
] |
[
"The uncertainty measurement of classifiers' predictions is especially important in applications such as medical diagnoses that need to ensure limited human resources can focus on the most uncertain predictions returned by machine learning models.",
"However, few existing uncertainty models attempt to improve overall prediction accuracy where human resources are involved in the text classification task.",
"In this paper, we propose a novel neural-network-based model that applies a new dropout-entropy method for uncertainty measurement.",
"We also design a metric learning method on feature representations, which can boost the performance of dropout-based uncertainty methods with smaller prediction variance in accurate prediction trials.",
"Extensive experiments on real-world data sets demonstrate that our method can achieve a considerable improvement in overall prediction accuracy compared to existing approaches.",
"In particular, our model improved the accuracy from 0.78 to 0.92 when 30% of the most uncertain predictions were handed over to human experts in 20NewsGroup data.",
"Machine learning algorithms are gradually taking over from the human operators in tasks such as machine translation (Bahdanau et al., 2014), optical character recognition (Mithe et al., 2013), and face recognition (Parkhi et al., 2015).",
"However, some real-world applications require higher accuracy than the results achieved by state-of-the-art algorithms, which makes it difficult to directly apply these algorithms in certain scenarios.",
"For example, a medical diagnosis system (van der Westhuizen and Lasenby, 2017) is expected to have a very high accuracy to support correct decision-making for medical practitioners.",
"Although domain experts can achieve a high performance in these challenging tasks, it is not always feasible to rely on limited and expensive human input for large-scale data sets.",
"Therefore, if we have a model with 70% prediction accuracy, it is intuitive to ask what percentage of the data should be handed to domain experts to achieve an overall accuracy rate above 90%?",
"To maximize the value of limited human resources while achieving desirable results, modeling uncertainty accurately is extremely important to ensure that domain experts can focus on the most uncertain results returned by machine learning models.",
"Most existing uncertainty models are based on Bayesian models, which are not only time-consuming but also unable to handle large-scale data sets.",
"Deep Neural networks (DNNs) have attracted increasing attention in recent years and have been reported to achieve state-of-the-art performance in various machine learning tasks (Yang et al., 2016; Iyyer et al., 2014).",
"However, unlike probabilistic models, DNNs are still at the early development stage in regards to providing the model uncertainty in their predictions.",
"For those seeking to address the prediction uncertainty in DNNs, it is common to suffer from the following issues on the text classification task.",
"Firstly, few researchers have sought to improve overall prediction performance when only limited human resources are available.",
"Different from existing methods which focus on the value of uncertainty, this problem needs to get domain experts involved in emphasis on the order of the uncertain predictions.",
"For example, the importance of distance between feature representations is neglected by the majority of existing models, but actually this is crucial for improving the order of uncertain predictions, especially during the pre-training of embedding vectors.",
"Moreover, the methods proposed for continuous feature space cannot be applied to discrete text data.",
"For example, adversarial training is used in some uncertainty models (Goodfel-low et al., 2014; Lakshminarayanan et al., 2017; Mandelbaum and Weinshall, 2017).",
"However, due to its dependence on gradient-based methods to generate adversarial examples, the method is not applicable to discrete text data.",
"In order to simultaneously address all these problems in existing methods, the work presented in this paper adopts a DNN-based approach that incorporates a novel dropout-entropy uncertainty measurement method along with metric learning in the feature representation to handle the uncertainty problem in the document classification task.",
"The study's main contributions can be summarized as follows: A novel DNN-based text classification model is proposed to achieve higher model accuracy with limited human input.",
"In this new approach, a reliable uncertainty model learns to identify the accurate predictions with smaller estimated uncertainty.",
"Metric learning in feature representation is designed to boost the performance of the dropout-based uncertainty methods in the text classification task.",
"Specifically, the shortened intra-class distance and enlarged inter-class distance can reduce the prediction variance and increase the confidence for the accurate predictions.",
"A new dropout-entropy method based on the Bayesian approximation property of Dropout in DNNs is presented.",
"Specifically, we measure the model uncertainty in terms of the information entropy of multiple dropout-based evaluations combined with the de-noising mask operations.",
"Extensive experiments on real-world data sets demonstrate that the effectiveness of our proposed approach consistently outperforms existing methods.",
"In particular, the macro-F1 score can be increased from 0.78 to 0.92 by assigning 25% of the labeling work to human experts in a 20-class text classification task.",
"The rest of this paper is organized as follows.",
"Section 2 reviews related work, and Section 3 provides a detailed description of our proposed model.",
"The experiments on multiple real-world data sets are presented in Section",
"4. The paper concludes with a summary of the research in Section",
"5. 2 Related Work The work related to this paper falls into two sub topics, described as follows.",
"Existing uncertainty models are usually based on Bayesian models, which is Traditional Bayesian models such as Gaussian Process (GP), can measure uncertainty of model.",
"However, as a nonparametric model, the time complexity of GP is increased by the size of data, which makes it intractable in many real world applications.",
"Conformal Prediction (CP) was proposed as a new approach to obtain confidence values (Vovk et al., 1999).",
"Unlike the traditional underlying algorithm, conformal predictors provide each of the predictions with a measure of confidence.",
"Also, a measure of credibility serves as an indicator of how suitable the training data are used for the classification task (Shafer and Vovk, 2008).",
"Different from Bayesian-based methods, CP approaches obtain probabilistically valid results, which are merely based on the independent and identically distributed assumption.",
"The drawback of CP methods is their computational inefficiency, which renders the application CP not applicable for any model that requires long training time such as Deep Neural Networks.",
"With the recently heated research on DNNs, the associated uncertainty models have received a great deal of attention.",
"Bayesian Neural Networks are a class of neural networks which are capable of modeling uncertainty (Denker and LeCun, 1990) (Hernandez-Lobato and Adams, 2015).",
"These models not only generate predictions but also provide the corresponding variance (uncertainty) of predictions.",
"However, as the number of model parameters increases, these models become computationally more expensive (Wang and Yeung, 2016).",
"Lee et al. proposed a computationally efficient uncertainty method that treats Deep Neural Networks as Gaussian Processes (Lee et al., 2017).",
"Due to its kernel-based design, however, it is not straightforward to apply this to the deep network structures for text classification.",
"Gal and Ghahramani used dropout in DNNs as an approximate Bayesian inference in deep Gaussian processes (Gal and Ghahramani, 2016) to mitigate the problem of representing uncertainty in deep learning without sacrificing the computational complexity.",
"Dropout-based methods have also been extended to various tasks such as computer vision (Kendall and Gal, 2017), autonomous vehicle safety (McAllister et al., 2017) and medical decision making (van der Westhuizen and Lasenby, 2017).",
"However, few of these methods are specifically designed for text classification and lack of considerations on improving the overall accuracy in the scenario that domain experts can be involved in the process.",
"Metric learning (Xing et al., 2003; Weinberger et al., 2006) algorithms design distance metrics that capture the relationships among data representations.",
"This approach has been widely used in various machine learning applications, including image segmentation (Gong et al., 2013), face recognition (Guillaumin et al., 2009), document retrieval (Xu et al., 2012), and collaborative filter-ing (Hsieh et al., 2017).",
"Weinberger et al. proposed a large margin nearest neighbor (LMNN) method (Weinberger et al., 2006) in learning a metric to minimize the number of class impostors based on pull and push losses.",
"However, as yet there have been no report of work focusing specifically on mitigating prediction uncertainties.",
"Mandelbaum and Weinshall (Mandelbaum and Weinshall, 2017) measured model uncertainty by the distance when comparing to the feature representations in training data, but this makes the uncertainty measurement inefficient because it requires an iteration over the entire training data set.",
"To the best of our knowledge, we are the first to apply metric learning to mitigate model uncertainty in the text classification task.",
"We also demonstrate that metric learning can be applied to dropout-based approaches to improve their prediction uncertainty.",
"In this section, we propose a DNN-based approach to predict document categories with high confidence for the accurate predictions and high uncertainty for the inaccurate predictions.",
"The overall architecture of the proposed model is presented in Section 3.1.",
"The technical details for the metric loss and model uncertainty predictions are deFigure 1: Overall Architecture of Proposed Model scribed in Sections 3.2 and 3.3, respectively.",
"In order to measure the uncertainty of the predictions for document classification task, we propose a neural-network-based model augmented with dropout-entropy uncertainty measurement and incorporating metric learning in its feature representation.",
"The overall structure of the proposed model is shown in Figure 1.",
"Our proposed model has four layers: 1) Input Layer .",
"The input layer is represented by the word embeddings of each words in the document.",
"By default, all word vectors are initialized by Glove (Pennington et al., 2014) pre-trained word vectors in Wikipedia with an embedding dimension of 200.",
"2) Sequence Modeling Layer .",
"The sequence modeling layer extracts the feature representations from word vectors.",
"This is usually implemented by Convolutional Neural Networks (CNN) or Recurrent Neural Networks (RNN).",
"In this paper, we focus on a CNN implementation with max pooling that utilizes 3 kernels with filter sizes of 3, 4 and 5, respectively.",
"After that, a max pooling operation is applied on the output of sequence model.",
"3) Dropout layer .",
"The convolutional layers usually contain a relatively small number of parameters compared to the fully connected layers.",
"It is therefore reasonable to assume that CNN layers suffer less from over-fitting, so Dropout is not usually used after CNN layers as it achieves only a trivial performance Figure 2: Feature representations with no metric learning (left) and metric learning (right).",
"improvement (Srivastava et al., 2014).",
"However, since there is only one fully-connected layer in our model, we opted to add one Dropout layer after the CNN layer, not only to prevent overfitting, but also to measure prediction uncertainty (Gal and Ghahramani, 2016).",
"The Dropout operation will be randomly applied to the activations during the training and uncertainty measurement phrases, but will not be applied to the evaluation phrase.",
"4) Output layers .",
"The output is connected by a fully connected layer and the softmax.",
"The loss function of our model is the combination of the cross entropy loss of the prediction and the metric loss of the feature representation.",
"We regard the output of the Dropout layer as the representation of the document and deposit it into a metric loss function.",
"The purpose here is to penalize large distance feature representations in the same class and small distance feature representations among different classes.",
"The details of the metric loss function will be described in Section 3.2.",
"For uncertainty learning in text feature space, our purpose is to ensure the Euclidean distance between intra-class instances is much smaller than the inter-class instances.",
"To achieve this, we use metric learning to train the desirable embeddings.",
"Specifically, let r i and r j be the feature representations of instances i and j , respectively, then the Euclidean distance between them is defined as D ( r i , r j ) = 1 d (cid:107) r i r j (cid:107) 22 , where d is the dimension of the feature representation.",
"n subsets { S k } nk =1 , where S k denotes the set of data instances belong to class k .",
"Then the intra-class loss penalizes the large Euclidean distance between the feature representations in the same class, which can be formalized as Equation (1).",
"where | S k | represents the number of elements in set S k .",
"The loss is the sum of all the feature distances between each possible pair in the same class set.",
"Then, the loss is normalized by the number of unique pairs belonging to each class set.",
"The inter-class loss ensures large feature distances between different classes, which is formally defined as Equation (2).",
"where m is a metric margin constant to distinguish between the intraand inter-classes and [ z ] + = max(0 , z ) denotes the standard hinge loss.",
"If the feature distance between instances from different classes is larger than m , the loss is zero.",
"Otherwise, we use the value of m minus the distance as its penalty loss, with a larger m representing a larger inter-class distance.",
"This parameter usually varies when we use different word embedding methods.",
"In our experiment, we found that a small m is normally needed when the word embedding is initialized by a pre-trained word vector method such as Glove (Pennington et al., 2014); a larger m is required if word vectors are initialized randomly.",
"The overall metric loss function is defined in Equation (3).",
"This combines the intra-class loss and inter-class loss for all the classes.",
"where is a pre-defined parameter to weight the importance of the intraand inter-class losses.",
"We set to 0.1 by default.",
"Figure 2 illustrates an example of a three-class feature representation in two dimensions.",
"The left-hand figure shows the feature distribution trained with no metric learning.",
"Obviously, the feature distance of the intra-class is large, sometimes even exceeding those of the inter-class distance near the decision boundary.",
"However, the features trained by metric learning, shown in the right-hand figure, exhibit clear gaps between the inter-class predictions.",
"This means the predictions with dropout are less likely to result in an inaccurate prediction and even reduce the variance of dropout prediction trials.",
"The example shown in Figure 2 has eight dropout predictions, three of which are classified to an inaccurate class when no metric learning is applied compared to only one inaccurate prediction with metric learning.",
"Bayesian models such as the Gaussian process (Rasmussen, 2004) provide a powerful tool to identify low-confidence regions of input space.",
"Recently, Dropout (Srivastava et al., 2014), which is used in deep neural networks, has been shown to serve as a Bayesian approximation to represent the model uncertainty in deep learning (Gal and Ghahramani, 2016).",
"Based on this work, we propose a novel information-entropy-based dropout method to measure the model uncertainty in combination with metric learning for text classification.",
"Given an input data instance x , we assume the corresponding output of our model is y .",
"The output computed by our model incorporates a dropout mechanism in its evaluation mode, which means the activations of intermediate layers with Dropout are not reduced by a factor.",
"When we repeat the process k times, we obtain the output vector y = { y 1 , . . . , y k } .",
"Note that the outputs are not the same since the output here is generated by applying dropout after the feature representation layer in Figure 1.",
"Given the output y of k trials with Dropout, Figure 3: Example of the dropout-entropy method.",
"our proposed uncertainty method has the following four steps, as shown in Figure 3: (1) Bin count .",
"We use bin count to calculate the frequency of each class.",
"For example, if the class 2 appears 24 times in the dropout output vector y , the bin count for class 2 is 24 .",
"(2) Mask .",
"We use the mask step to avoid random noises in the frequency vector.",
"In this step, we set the largest m elements to have their original values and the remaining ones to zero.",
"The value of m is usually chosen to be 2 / 3 of the total class number when the total classes are over 10; otherwise, we just skip the step.",
"(3) Normalization .",
"We use the normalization step to calculate the probabilities of each class.",
"(4) Information entropy .",
"The information entropy is calculated by u = (cid:80) ci =1 p k ( i ) log p k ( i ) , where p k ( i ) represents the frequency probability of the i -th class in a total k trials and c is the number of classes.",
"We use the entropy value as the uncertainty score here, in which the smaller the entropy value is, the more confident the model is in the output.",
"Take the case in Figure 3 as an example.",
"When the frequency of class 2 is 24, the entropy is 1 .",
"204 .",
"If the output of the 50 trials all belong to class 2 , the entropy becomes 0 .",
"401 , which means that the model is less uncertain about the predictive results.",
"In this section, the performance of the proposed model uncertainty approach is evaluated on multiple real-world document classification data sets.",
"After an introduction of the experiment settings in Section 4.1, we compare the performance achieved by the proposed method against those of existing state-of-the-art methods, along with an analysis of the parameter settings and metric learning in Section 4.2.",
"Due to space limitation, the detailed experiment results on different sequence models can be accessed in the full version here 1 .",
"The source code can be downloaded here 2 .",
"In our experiments, all word vectors are initialized by pre-trained Glove (Pennington et al., 2014) word vectors, by default.",
"The word embedding vectors are pre-trained in Wikipedia 2014 with a word vector dimension of 200.",
"We trained all the DNN-based models with a batch size of 32 samples with a momentum of 0.9 and an initial learning rate of 0.001 using the Adam (Kingma and Ba, 2014) optimization algorithm.",
"We conducted experiments on three publicly available datasets: 1) 20 Newsgroups 3 (Lang, 1995):",
"The data set is a collection of 20,000 documents, partitioned evenly across 20 different news groups; 2) IMDb Reviews (Maas et al., 2011): The data set contains 50,000 popular movie reviews with binary positive or negative labels from the IMDb website; and 3) Amazon Reviews (McAuley and Leskovec, 2013): The dataset is a collection of reviews from Amazon spanning the time period from May 1996 to July 2013.",
"We used review data from the Sports and outdoors category, with 272,630 data samples and rating labels from 1 to",
"5. For all three data sets, we randomly selected 70% of the data samples as the training set, 10% as the validation set and 20% as the test set.",
"In order to answer the question What percentage of data should be transferred to domain experts to achieve an overall accuracy rate above 90%?",
", we measure the classification performance in terms of various uncertainty ratios.",
"Specifically, assuming the entire testing set S has size n and an uncertainty ratio r , we can remove the most uncertain samples S r from S based on the uncertainty ratio r , where the size of the uncertainty set S r is r n .",
"We assume the uncertain samples S r handed to domain experts achieve 100% accuracy.",
"If the uncertainty ratio r equals to 0 , the model performs Uncertainty Ratio ( Accuracy, Improved Ratio ) 0% 10% 20% 30% 40% PL-Variance 0.878 0.911(3.69%) 0.937(6.70%) 0.955(8.71%) 0.970( 10.42% ) Distance 0.884 0.893(0.95%) 0.892(0.91%) 0.893(1.04%) 0.895(1.24%) Dropout 0.880 0.912(3.72%) 0.936(6.43%) 0.957(8.75%) 0.969(10.20%) Dropout + Metric 0.884 0.917(3.73%) 0.944 (6.78%) 0.961 (8.70%) 0.973 (10.11%) DE 0.878 0.911(3.70%) 0.937(6.71%) 0.956(8.83%) 0.969(10.33%) DE + Metric 0.883 0.918 ( 3.91% ) 0.944 ( 6.87% ) 0.961 ( 8.78% ) 0.973 (10.20%) Uncertainty Ratio ( F1 Score, Improved Ratio ) 0% 10% 20% 30% 40% PL-Variance 0.880 0.913(3.68%) 0.939(6.67%) 0.956(8.65%) 0.971( 10.34% ) Distance 0.885 0.894(1.07%) 0.898(1.42%) 0.901(1.84%) 0.904(2.13%) Dropout 0.881 0.914(3.70%) 0.938(6.41%) 0.958(8.67%) 0.971(10.13%) Dropout + Metric 0.885 0.917(3.70%) 0.944 (6.74%) 0.961 (8.67%) 0.974 (10.06%) DE 0.880 0.913(3.67%) 0.939(6.67%) 0.957(8.77%) 0.970(10.25%) DE + Metric 0.884 0.918 ( 3.88% ) 0.944 ( 6.83% ) 0.961 ( 8.73% ) 0.974 (10.14%) Table 2: Uncertainty Scores for the IMDb Dataset (2 Categories) Uncertainty Ratio ( Accuracy, Improved Ratio ) 0% 10% 20% 30% 40% PL-Variance 0.700 0.738(5.43%) 0.764(9.14%) 0.784(1.20%) 0.801(14.4%) Distance 0.697 0.699(0.29%) 0.702(0.72%) 0.704(1.00%) 0.705(1.15%) Dropout 0.700 0.735(5.00%) 0.764(9.14%) 0.800(14.29%) 0.831(18.71%) Dropout + Metric 0.710 0.746(5.07%) 0.779(9.72%) 0.815(14.79%) 0.847(19.30%) DE 0.700 0.739( 5.57% ) 0.773(10.43%) 0.806(15.14%) 0.836(19.43%) DE + Metric 0.724 0.764 (5.52%) 0.800 ( 10.50% ) 0.834 ( 15.19% ) 0.866 ( 19.61% ) Table 3: Uncertainty Scores for the Amazon Dataset (5 Categories) without uncertainty measurement concerns.",
"For the binary classification task, we use the accuracy and F1-score to measure the classification performance based on the testing set S \\ S r for different uncertainty ratios r .",
"Similarly, for multi-class tasks, we use the micro-F1 and macro-F1 scores utilizing the same settings as for the binary classification.",
"The following methods are included in the performance comparison: 1) Penultimate Layer Variance (PL-Variance).",
"Activations before the soft-max layer in a deep neural network always reveal the uncertainty of the prediction (Zaragoza and d'Alche Buc, 1998).",
"As a baseline method, we use the variance of the output of a fully connected layer in Figure 1 as the uncertainty weight.",
"2) Deep Neural Networks as Gaussian Processes (NNGP) (Lee et al., 2017).",
"This approach applies a Gaussian process to perform a Bayesian inference for deep neural networks, with a computationally efficient pipeline being used to compute the covariance function of the Gaussian process.",
"The default parameter settings in the source code 4 were applied in our experiments.",
"3) Distance-based Confidence (Dis-tance)(Mandelbaum and Weinshall, 2017).",
"This method assigns confidence scores based on the data embedding compared to the training data.",
"We set its nearest neighbor parameter k = 10 .",
"4) Dropout (Gal and Ghahramani, 2016).",
"Here, dropout training in DNNs is treated as an approximation of Bayesian inference in deep Gaussian processes.",
"We set the sample number T as 100 in our experiments.",
"5) Dropout + Metric.",
"In 4 https://github.com/brain-research/nngp Uncertainty Ratio ( Micro F1, Improved Ratio ) 0% 10% 20% 30% 40% Random DE 0.659 0.702(6.47%) 0.748(13.46%) 0.792(20.14%) 0.831(26.03%) DE + Metric 0.660 0.705 ( 6.85% ) 0.752 ( 13.92% ) 0.802 ( 21.57% ) 0.845 ( 28.04% ) Glove DE 0.760 0.807(6.25%) 0.849(11.73%) 0.888(16.79%) 0.917(20.70%) DE + Metric 0.781 0.835 ( 6.93 %) 0.878 ( 12.47% ) 0.918 ( 17.62% ) 0.944 ( 20.92% ) Table 4: Embedding vs. No Pre-trained Embedding Figure 4: Prediction performance for different metric margin settings.",
"order to validate the effectiveness of our metric learning, we applied our proposed metric learning method to the Dropout method.",
"The metric margin m and coefficient were set as 0 .",
"5 and 0 .",
"1 , respectively.",
"6) Our proposed method.",
"We evaluate our proposed method in two different settings, Dropout-Entropy alone (DE) and Dropout-Entropy with metric learning (DE + Metric).",
"Here, we set the sample number T = 100 , coefficient = 0 .",
"1 and the metric margin may vary from different data sets.",
"Table 1 shows the Micro-F1 and Macro-F1 scores for ratios of uncertain predictions eliminated ranging from 10 to 40% for the 20NewsGroup data set.",
"To demonstrate its effect, metric learning was also applied to the baseline method Dropout, and our proposed method DE.",
"The improvement ratio compared to the results with no uncertainty elimination, shown in the 0% column, are presented after the F1 scores.",
"Based on these result, we can conclude that: 1) Our proposed method, DE+Metric, significantly improves both the Microand Macro-F1 scores when a portion of uncertain predictions are eliminated.",
"For example, the Micro-F1 improves from 0.78 to 0.92 when 30% of the uncertain predictions are eliminated.",
"2) Comparing the results obtained by DE and DE+Metric, metric learning significantly improves the results obtained for different uncertainty ratio settings.",
"Similar results can be observed when comparing the Dropout and Dropout+Metric.",
"For example, the Micro-F1 scores for Dropout+Metric are around 5% better than the Dropout method alone, boosting them from 0.851 to 0.892, with a 30% uncertainty ratio.",
"3) The DE method outperforms all the other methods when metric learning is not applied.",
"Specifically, DE is around 4% better than the Dropout method in terms of the Micro-F1 score.",
"The results for IMDb and Amazon data sets are presented in Table 2 and Table 3.",
"When comparing our proposed model's performance across three data sets, we found that the greater improvements are achieved on multiinstead of binary-class classification data sets.",
"One possible explanation is that a comparatively large portion of multi-class features are close to the decision boundary in the feature space.",
"Through the metric learning strategy of minimizing intra-class distance while maxmizing the inter-class instances, the feature distance between the inter-class predictions is enlarged and the quality of embeddings is greatly enhanced.",
"The impact of metric learning on feature representation is analyzed in this section.",
"Figure 5 shows the 300-dimension feature representations",
"for the 20 NewsGroup testing data set, with Figure",
"5(a) presenting the features trained without metric learning and Figure",
"5(b) those trained by metric learning with a margin parameter m =10.",
"We used the t-SNE algorithm (Maaten and Hinton, 2008) to visualize the high dimensional features in the form of two dimensional images.",
"From the results, we can clearly see that the distances between the inter-classes are significantly enlarged compared to the features trained without metric learning shown in Figure",
"5(a).",
"This enlarged inter-class spacing means that dropout-based uncertainty methods have smaller prediction variances in case their dropout prediction trials are accurate.",
"Metric Margin.",
"Figure 4 shows the impact of metric margin parameters, ranging from 0 to 800 on the 20 NewsGroup data set with a 20% uncertainty ratio.",
"From the results, we can conclude that: (1) The prediction performance is not sensitive to the point at which the metric margin parameter is set as long as its value is not extremely large.",
"(2) Compared to the model trained with no metric learning, our methods consistently achieve better performance when the metric margin is set no larger than 10.",
"When the metric margin is too large, however, the prediction cross-entropy loss is hard to minimize and thus dampens the overall prediction performance.",
"(3) The results of Macro-F1 are similar to Micro-F1 with relatively small scores.",
"Impact of Word Embedding.",
"We also analyzed the impact of our proposed methods on different word embedding initialization methods, including random and pre-trained Glove word vectors in 200 dimensions.",
"Table 4 shows the results of Micro-F1 for the different uncertainty ratios.",
"We can observe that: 1) The performance of Glove-based methods are around 15% better than that of the randomly initialized methods for different uncertainty ratios.",
"2) Metric learning based on a Glove initialization generally outperforms a random initialization.",
"For instance, the F1 score of Glove rises by 0.29 when the uncertainty ratio is 20%, while for a random method it only increases by 0.04.",
"In this paper, a DNN-based model is proposed to address the uncertainty mitigation problem in the presence of human involvement in a text classification task.",
"To achieve this, we proposed a dropout-entropy uncertainty measurement method with the metric learning for the feature representation.",
"Extensive experiments on real-world data sets confirmed that our proposed approach dramatically outperforms competing methods, exhibiting a significant improvement in accuracy when a relatively small portion of the uncertainty predictions are handed over to domain experts."
] | [
"abstain",
"abstain",
"objective",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"method",
"objective",
"objective",
"method",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"objective"
] |
[
"We propose a new model for speaker naming in movies that leverages visual, textual, and acoustic modalities in an unified optimization framework.",
"To evaluate the performance of our model, we introduce a new dataset consisting of six episodes of the Big Bang Theory TV show and eighteen full movies covering different genres.",
"Our experiments show that our multimodal model significantly outperforms several competitive baselines on the average weighted F-score metric.",
"To demonstrate the effectiveness of our framework, we design an end-to-end memory network model that leverages our speaker naming model and achieves state-of-the-art results on the subtitles task of the MovieQA 2017 Challenge.",
"Identifying speakers and their names in movies, and videos in general, is a primary task for many video analysis problems, including automatic subtitle labeling (Hu et al., 2015), content-based video indexing and retrieval (Zhang et al., 2009), video summarization (Tapaswi et al., 2014), and video storyline understanding (Tapaswi et al., 2014).",
"It is a very challenging task, as the visual appearance of the characters changes over the course of the movie due to several factors such as scale, clothing, illumination, and so forth (Arandjelovic and Zisserman, 2005; Everingham et al., 2006).",
"The annotation of movie data with speakers' names can be helpful in a number of applications, such as movie question answering (Tapaswi et al., 2016), automatic identification of character relationships (Zhang et al., 2009), or automatic movie captioning (Hu et al., 2015).",
"Most previous studies relied primarily on visual information (Arandjelovic and Zisserman, 2005; Everingham et al., 2006), and aimed for the slightly different task of face track labeling; 01:02:00 --> 01:02:01 Jack , must you go? 01:02:01 --> 01:02:04 Time for me to go row with other slaves.",
"Figure 1 : Overview of our approach for speaker naming.",
"speakers who did not appear in the video frame were not assigned any names, which is common in movies and TV shows.",
"Other available sources of information such as scripts were only used to extract cues about the speakers' names to associate the faces in the videos with their corresponding character name (Everingham et al., 2006; Tapaswi et al., 2015; Bauml et al., 2013; Sivic et al., 2009); however since scripts are not always available, the applicability of these methods is somehow limited.",
"Other studies focused on the problem of speaker recognition without naming, using the speech modality as a single source of information.",
"While some of these studies attempted to incorporate the visual modality, their goal was to cluster the speech segments rather than name the speakers 2206 (Erzin et al., 2005; Bost and Linares, 2014; Kap-souras et al., 2015; Bredin and Gelly, 2016; Hu et al., 2015; Ren et al., 2016).",
"None of these studies used textual information (e.g., dialogue), which prevented them from identifying speaker names.",
"In our work, we address the task of speaker naming, and propose a new multimodal model that leverages in an unified framework of the visual, speech, and textual modalities that are naturally available while watching a movie.",
"We do not assume the availability of a movie script or a cast list, which makes our model fully unsupervised and easily applicable to unseen movies.",
"The paper makes two main contributions.",
"First, we introduce a new unsupervised system for speaker naming for movies and TV shows that exclusively depends on videos and subtitles, and relies on a novel unified optimization framework that fuses visual, textual, and acoustic modalities for speaker naming.",
"Second, we construct and make available a dataset consisting of 24 movies with 31,019 turns manually annotated with character names.",
"Additionally, we also evaluate the role of speaker naming when embedded in an end-to-end memory network model, achieving state-of-the-art performance results on the subtitles task of the MovieQA 2017 Challenge.",
"The problem of speaker naming in movies has been explored by the computer vision and the speech communities.",
"In the computer vision community, the speaker naming problem is usually considered as a face/person naming problem, in which names are assigned to their corresponding faces on the screen (Everingham et al., 2006; Cour et al., 2010; Bauml et al., 2013; Haurilet et al., 2016; Tapaswi et al., 2015).",
"On the other hand, the speech community considered the problem as a speaker identification problem, which focuses on recognizing and clustering speakers rather than naming them (Reynolds, 2002; Campbell, 1997).",
"In this work, we aim to solve the problem of speaker naming in movies, in which we label each segment of the subtitles with its corresponding speaker name whether the speaker's face appeared on in the video or not.",
"Previous work can be furthered categorized according to the type of supervision used to build the character recognition and speaker recognition models: supervised vs. weakly supervised models.",
"In the movie and television domains, utilizing scripts in addition to subtitles to obtain times-tamped speaker information was also studied in (Everingham et al., 2006; Tapaswi et al., 2015; Bauml et al., 2013; Sivic et al., 2009).",
"Moreover, they utilized this information to resolve the ambiguity introduced by co-occurring faces in the same frame.",
"Features were extracted through the period of speaking (detected via lip motion on each face).",
"Then they assigned the face based on candidate names from the time-stamped script.",
"Thus, these studies used speaker recognition as an essential step to construct cast-specific face classifiers.",
"(Tapaswi et al., 2012) extended the face identification problem to include person tracking.",
"They utilized available face recognition results to learn clothing models for characters to identify person tracks without faces.",
"In (Cour et al., 2010; Haurilet et al., 2016), the authors proposed a weakly supervised model depending on subtitles and a character list.",
"They extracted textual cues from the dialog: first, second, and third person references, such as I'm Jack, Hey, Jack!, and Jack left.",
"Using a character list from IMDB, they mapped these references onto true names using minimum edit distance, and then they ascribed the references to face tracks.",
"Other work removed the dependency on a true character list by determining all names through coreference resolution.",
"However, this work also depended on the availability of scripts (Ramanathan et al., 2014).",
"In our model, we removed the dependency on both the true cast list and the script, which makes it easier to apply our model to other movies and TV shows.",
"Recent work proposed a convolutional neural network (CNN) and Long Short-Term Memory (LSTM) based learning framework to automatically learn a function that combines both facial and acoustic features (Hu et al., 2015; Ren et al., 2016).",
"Using these cues, they tried to learn matching face-audio pairs and non-matching face-audio pairs.",
"They then trained a SVM classifier on the audio-video pairings to discriminate between the non-overlapping speakers.",
"In order to train their models, they manually identified the leading characters in two TV shows, Friends and The Big Bang Theory (BBT), and collected their face tracks and corresponding audio segments using pre-annotated subtitles.",
"Despite the very high performance reported in these studies, it is very hard to generalize their approach since it requires a lot 2207 of training data.",
"On the other hand, talking faces have been used to improve speaker recognition and diarization in TV shows (Bredin and Gelly, 2016; Bost and Linares, 2014; Li et al., 2004).",
"In the case of (Liu et al., 2008), they modeled the problem of speaker naming as facial recognition to identify speakers in news broadcasts.",
"This work leveraged optical character recognition to read the broadcasters' names that were displayed on screen, requiring the faces to already be annotated.",
"Our dataset consists of a mix of TV show episodes and full movies.",
"For the TV show, we use six full episodes of season one of the BBT.",
"The number of named characters in the BBT episodes varies between 5 to 8 characters per episode, and the background noise level is low.",
"Additionally, we also acquired a set of eighteen full movies from different genres, to evaluate how our model works under different conditions.",
"In this latter dataset, the number of named characters ranges between 6 and 37, and it has varied levels of background noise.",
"We manually annotated this dataset with the character name of each subtitle segment.",
"To facilitate the annotation process, we built an interface that parses the movies subtitles files, collects the cast list from IMDB for each movie, and then shows one subtitle segment at a time along with the cast list so that the annotator can choose the correct character.",
"Using this tool, human annotators watched the movies and assigned a speaker name to each subtitle segment.",
"If a character name was not mentioned in the dialogue, the annotators labeled it as unknown.",
"To evaluate the quality of the annotations, five movies in our dataset were double annotated.",
"The Cohen's Kappa inter-annotator agreement score for these five movies is 0.91, which shows a strong level of agreement.",
"To clean the data, we removed empty segments, as well as subtitle description parts written between brackets such as [groaning] and [sniff-ing].",
"We also removed segments with two speakers at the same time.",
"We intentionally avoided using any automatic means to split these segments, to preserve the high-quality of our gold standard.",
"Table 1 shows the statistics of the collected data.",
"Overall, the dataset consists of 24 videos with a total duration of 40.28 hours, a net dialogue duration of 21.99 hours, and a total of 31,019 turns spoken by 463 different speakers.",
"Four of the movies in this dataset are used as a development set to develop supplementary systems and to fine tune our model's parameters; the remaining movies are used for evaluation.",
"We process the movies by extracting several textual, acoustic, and visual features.",
"SkipThoughts uses a Recurrent Neural Network to capture the underlying semantic and syntactic properties, and map them to a vector representation (Kiros et al., 2015).",
"We use their pretrained model to compute a 4,800 dimensional sentence representation for each line in the subtitles.",
"1 TF-IDF is a traditional weighting scheme in information retrieval.",
"We represent each subtitle as a vector of tf-idf weights, where the length of the vector (i.e., vocabulary size) and the idf scores are obtained from the movie including the subtitle.",
"For each movie in the dataset, we extract the audio from the center channel.",
"The center chan-nel is usually dedicated to the dialogue in movies, while the other audio channels carry the surrounding sounds from the environment and the musical background.",
"Although doing this does not fully eliminate the noise in the audio signal, it still improves the speech-to-noise ratio of the signal.",
"When a movie has stereo sound (left and right channels only), we down-mix both channels of the stereo stream into a mono channel.",
"In this work, we use the subtitles timestamps as an estimate of the boundaries that correspond to the uttered speech segments.",
"Usually, each subtitle corresponds to a segment being said by a single speaker.",
"We use the subtitle timestamps for segmentation so that we can avoid automatic speaker diarization errors and focus on the speaker naming problem.",
"1 https://github.com/ryankiros/skip-thoughts 2208 To represent the relevant acoustic information from each spoken segment, we use iVectors, which is the state-of-the-art unsupervised approach in speaker verification (Dehak et al., 2011).",
"While other deep learning-based speaker embeddings models also exist, we do not have access to enough supervised data to build such models.",
"We train unsupervised iVectors for each movie in the dataset, using the iVector extractor used in (Khorram et al., 2016).",
"We extract iVectors of size 40 using a Gaussian Mixture Model-Universal Background Model (GMM-UBM) with 512 components.",
"Each iVector corresponds to a speech segment uttered by a single speaker.",
"We fine tune the size of the iVectors and the number of GMM-UBM components using the development dataset.",
"We detect faces in the movies every five frames using the recently proposed MTCNN (Zhang et al., 2016) model, which is pretrained for face detection and facial landmark alignment.",
"Based on the results of face detection, we apply the forward and backward tracker with an implementation of the Dlib library (King, 2009; Danelljan et al., 2014) to extract face tracks from each video clip.",
"We represent a face track using its best face in terms of detection score, and use the activations of the fc7 layer of pretrained VGG-Face (Parkhi et al., 2015) network as visual features.",
"We calculate the distance between the upper lip center and the lower lip center based on the 68-point facial landmark detection implemented in the Dlib library (King, 2009; Kazemi and Sullivan, 2014).",
"This distance is normalized by the height of face bounding boxes and concatenated across frames to represent the amount of mouth opening.",
"A human usually speaks with lips moving with a certain frequency (3.75 Hz to 7.5 Hz used in this work) (Tapaswi et al., 2015).",
"We apply a bandpass filter to amplify the signal of true lip motion in these segments.",
"The overall sum of lip motion is used as the score for the talking face.",
"We tackle the problem of speaker naming as a transductive learning problem with constraints.",
"In this approach, we want to use the sparse positive labels extracted from the dialogue and the underlying topological structure of the rest of the unlabeled data.",
"We also incorporate multiple cues extracted from both textual and multimedia information.",
"A unified learning framework is proposed to enable the joint optimization over the automatically labeled and unlabeled data, along with multiple semantic cues.",
"In this work, we do not consider the set of character names as given because we want to build a model that can be generalized to unseen movies.",
"This strict setting adds to the problem's complexity.",
"To extract the list of characters from the subtitles, we use the Named Entity Recognizer (NER) in the Stanford CoreNLP toolkit (Manning et al., 2014).",
"The output is a long list of person names that are mentioned in the dialogue.",
"This list is prone to errors including, but not limited to, nouns that are misclassified by the NER as person's name such as Dad and Aye, names that are irrelevant to the movie such as Superman or named animals, or uncaptured character names.",
"To clean the extracted names list of each movie, we cluster these names based on string minimum edit distance and their gender.",
"From each cluster, we then pick a name to represent it based on its frequency in the dialogue.",
"The result of this step consists of name clusters along with their distribution in the dialogue.",
"The distribution of each cluster is the sum of all the counts of its members.",
"To filter out irrelevant characters, we run a name reference classifier, which classifies each name into first, second or third person references.",
"If a name was only mentioned as a third person throughout the whole movie, we discard it from the list of characters.",
"We remove any name cluster that has a total count less than three, which takes care of the misclassified names' reference types.",
"We use the subtitles to extract the name mentions in the dialogue.",
"These mentions allow us to obtain cues about the speaker name and the absence or the presence of the mentioned character in the surrounding subtitles.",
"Thus, they affect the probability that the mentioned character is the speaker or not.",
"We follow the same name reference categories used in (Cour et al., 2010; Haurilet et al., 2016).",
"We classify a name mention into: first (e.g., I'm Sheldon), second (e.g., Oh, hi, Penny) or third person reference (e.g., So how did it go with Leslie?).",
"The first person reference represents a positive constraint that allows us to label the corresponding iVector of the speaker and his face if 2209 it exists during the segment duration.",
"The second person reference represents a multi-instance constraint that suggests that the mentioned name is one of the characters that are present in the scene, which increases the probability of this character to be one of the speakers of the surrounding segments.",
"On the other hand, the third person reference represents a negative constraint, as it suggests that the speaker does not exist in the scene, which lowers the character probability of the character being one of the speakers of the next or the previous subtitle segments.",
"To identify first, second and third person references, we train a linear support vector classifier.",
"The first person, the second and third person clas-sifier's training data are extracted and labeled from our development dataset, and fine tuned using 10-fold cross-validation.",
"Table 2 shows the results of the classifier on the test data.",
"The average number of first, second and third-person references in each movie are 14.63, 117.21, and 95.71, respectively.",
"Given a set of data points that consist of l labeled 2 and u unlabeled instances, we apply an optimization framework to infer the best prediction of speaker names.",
"Suppose we have l + u instances X = { x 1 , x 2 , ..., x l , x l +1 , ..., x l + u } and K possible character names.",
"We also get the dialogue-based positive labels y i for instances x i , where y i is a k -dimension one-hot vector and y ji = 1 if x i belongs to the class j , for every 1 i l and 1 j K .",
"To name each instance x i , we want to predict another one-hot vector of naming scores f ( x i ) for each x i , such that argmax j f j ( x i ) = z i where z i is the ground truth number of class for instance x i .",
"To combine the positive labels and unlabeled data, we define the objective function for predic-2 Note that in our setup, all the labeled instances are obtained automatically, as described above.",
"tions f as follows: L initial ( f ) = 1 l l X i =1 || f ( x i ) y i || 2 + 1 l + u l + u X i =1 l + u X j =1 w ij || f ( x i ) f ( x j ) || 2 (1) Here w ij is the similarity between x i and x j , which is calculated as the weighted sum of textual, acoustic and visual similarities.",
"The inverse Euclidean distance is used as similarity function for each modality.",
"The weights for different modalities are selected as hyperparameters and tuned on the development set.",
"This objective leads to a convex loss function which is easier to optimize over feasible predictions.",
"Besides the positive labels obtained from first person name references, we also introduce other semantic constraints and cues to enhance the power of our proposed approach.",
"We implement the following four types of constraints: Multiple Instance Constraint.",
"Although the second person references cannot directly provide positive constraints, they imply that the mentioned characters have high probabilities to be in this conversation.",
"Following previous work (Cour et al., 2010), we incorporate the second person references as multiple instances constraints into our optimization: if x i has a second person reference j , we encourage j to be assigned to its neighbors, i.e., its adjacent subtitles with similar timestamps.",
"For the implementation, we simply include multiple instances constraints as a variant of positive labels with decreasing weights s , where s = 1 / ( l i ) for each neighbor x l .",
"Negative Constraint.",
"For the third person references, the mentioned characters may not occur in the conversation and movies.",
"So we treat them as negative constraints, which means they imply that the mentioned characters should not be assigned to corresponding instances.",
"This constraint is formulated as follows: L neg ( f ) = X ( i,j ) N [ f j ( x i )] 2 (2) where N is the set of negative constraints x i doesn't belong class j .",
"Gender Constraint.",
"We train a voice-based gender classifier by using the subtitles segments from the four movies in our development dataset (5,543 2210 segments of subtitles).",
"We use the segments in which we know the speaker's name and manually obtain the ground truth gender label from IMDB.",
"We extract the signal energy, 20 Mel-frequency cepstral coefficients (MFCCs) along with their first and second derivatives, in addition to time-and frequency-based absolute fundamental frequency (f0) statistics as features to represent each segment in the subtitles.",
"The f0 statistics has been found to improve the automatic gender detection performance for short speech segments (Levitan et al., 2016), which fits our case since the median duration of the dialogue turns in our dataset is 2.6 seconds.",
"The MFCC features are extracted using a step size of 16 msec over a 64 msec window using the method from (Mathieu et al., 2010), while the f0 statistics are extracted using a step size of 25 msec over a 50 msec window as the default configura-tion in (Eyben et al., 2013).",
"We then use these features to train a logistic regression classifier using the Scikit-learn library (Pedregosa et al., 2011).",
"The average accuracy of the gender classifier on a 10-fold cross-validation is 0.8867.",
"Given the results for the gender classification of audio segments and character names, we define the gender loss to penalize inconsistency between the predicted gender and character names: L gender ( f ) = X ( i,j ) Q 1 P ga ( x i )(1 P gn ( j )) f j ( x i ) + X ( i,j ) Q 2 (1 P ga ( x i )) P gn ( j ) f j ( x i ) (3) where P ga ( x i ) is the probability for instance x i to be a male, and P gn ( j ) is the probability for name j to be a male, and Q 1 = { ( i, j ) | P ga ( x i ) < 0 .",
"5 , P gn ( j ) > 0 .",
"5 } , Q 2 = { ( i, j ) | P ga ( x i ) > 0 .",
"5 , P gn ( j ) < 0 .",
"5 } .",
"Distribution Constraint.",
"We automatically analyze the dialogue and extract the number of mentions of each character in the subtitles using Stanford CoreNLP and string matching to capture names that are missed by the named entity recognizer.",
"We then filter the resulting counts by removing third person mention references of each name as we assume that this character does not appear in the surrounding frames.",
"We use the results to estimate the distribution of the speaking characters and their importance in the movies.",
"The main goal of this step is to construct a prior probability distribution for the speakers in each movie.",
"To encourage our predictions to be consistent with the dialogue-based priors, we penalize the square error between the distributions of predictions and name mentions priors in the following equation: L dis ( f ) = KX j =1 ( X ( f j ( x i )) d j ) 2 (4) where d j is the ratio of name j mentions in all subtitles.",
"Final Framework.",
"Combining the loss in Eqn.",
"1 and multiple losses with different constraints, we obtain our unified optimization problem: f = arg min f 1 L initial ( f ) + 2 LMI ( f ) + 3 L neg ( f ) + 4 L gender ( f ) + 5 L dis ( f ) (5) All of the s are hyper-parameters to be tuned on development set.",
"We also include the constraint that predictions for different character names must sum to 1.",
"We solve this constrained optimization problem with projected gradient descent (PGD).",
"Our optimization problem in Eqn.",
"5 is guaranteed to be a convex optimization problem and therefore projected gradient descent is guaranteed to stop with global optima.",
"PGD usually converges after 800 iterations.",
"We model our task as a classification problem, and use the unified optimization framework described earlier to assign a character name to each subtitle.",
"Since our dataset is highly unbalanced, with a few main characters usually dominating the entire dataset, we adopt the weighted F-score as our evaluation metric, instead of using an accuracy metric or a micro-average F-score.",
"This allows us to take into account that most of the characters have only a few spoken subtitle segments, while at the same time placing emphasis on the main characters.",
"This leads sometimes to an average weighted F-score that is not between the average precision and recall.",
"One aspect that is important to note is that characters are often referred to using different names.",
"For example, in the movie The Devil's Advo-cate, the character Kevin Lomax is also referred to as Kevin or Kev.",
"In more complicated situations, characters may even have multiple identities, such as the character Saul Bloom in the movie Ocean's Eleven, who pretends to be another character named Lyman Zerga.",
"Since our 2211 Precision Recall F-score B1: MFMC 0.0910 0.2749 0.1351 B2: DRA 0.2256 0.1819 0.1861 B3: Gender-based DRA 0.2876 0.2349 0.2317 Our Model (Skip-thoughts)* 0.3468 0.2869 0.2680 Our Model (TF-IDF)* 0.3579 0.2933 0.2805 Our Model (iVectors) 0.2151 0.2347 0.1786 Our Model (Visual)* 0.3348 0.2659 0.2555 Our Model (Visual+iVectors)* 0.3371 0.2720 0.2617 Our Model (TF-IDF+iVectors)* 0.3549 0.2835 0.2643 Our Model (TF-IDF+Visual)* 0.3385 0.2975 0.2821 Our Model (all)* 0.3720 0.3108 0.2920 Table 3 : Comparison between the average of macro-weighted average of precision, recall and f-score of the baselines and our model.",
"* means statistically significant (t-test p-value < 0.05) when compared to baseline B3.",
"goal is to assign names to speakers, and not necessarily solve this coreference problem, we consider the assignment of the subtitle segments to any of the speaker's aliases to be correct.",
"Thus, during the evaluation, we map all the characters' aliases from our model's output to the names in the ground truth annotations.",
"Our mapping does not include other referent nouns such as Dad, Buddy,",
"etc.; if a segment gets assigned to any such terms, it is considered a misprediction.",
"We compare our model against three baselines: B1: Most-frequently mentioned character consists of selecting the most frequently mentioned character in the dialogue as the speaker for all the subtitles.",
"Even though it is a simple baseline, it achieves an accuracy of 27.1%, since the leading characters tend to speak the most in the movies.",
"B2: Distribution-driven random assignment consists of randomly assigning character names according to a distribution that reflects their frac-tion of mentions in all the subtitles.",
"B3: Gender-based distribution-driven random assignment consists of selecting the speaker names based on the voice-based gender detection classifier.",
"This baseline randomly selects the character name that matches the speaker's gender according to the distribution of mentions of the names in the matching gender category.",
"The results obtained with our proposed unified optimization framework and the three baselines are shown in Table 3.",
"We also report the performance of the optimization framework using different combinations of the three modalities.",
"The model that uses all three modalities achieves the best results, and outperforms the strongest baseline (B3) by more than 6% absolute in average",
"Figure 2 : For each speech segment, we applied t-SNE (Van Der Maaten, 2014) on their corresponding iVectors.",
"The points with the same color represent instances with the same character name.",
"weighted F-score.",
"It also significantly outperforms the usage of the visual and acoustic features combined, which have been frequently used together in previous work, suggesting the importance of textual features in this setting.",
"The ineffectiveness of the iVectors might be a result of the background noise and music, which are difficult to remove from the speech signal.",
"Figure 2 shows the t-Distributed Stochastic Neighbor Embedding (t-SNE) (Van Der Maaten, 2014), which is a nonlinear dimensionality reduction technique that models points in such a way that similar vectors are modeled by nearby points and dissimilar objects are modeled by distant points, visualization of the iVectors over the whole BBT show and the movie Titanic.",
"In the BBT there is almost no musical background or background noise, while, Titanic has musical background in addition to the background noise such as the screams of the drowning people.",
"From the graph, 2212 the difference between the quality of the iVectors clusters on different noise-levels is clear.",
"Table 4 shows the effect of adding components of our loss function to the initial loss L init function.",
"The performance of the model using only L init without the other parts is very low due to the sparsity of first person references and errors that the person reference classifier introduces.",
"Table 4 : Analysis of the effect of adding each component of the loss function to the initial loss.",
"In order to analyze the effect of the errors that several of the modules (e.g., gender and name reference classifiers) propagate into the system, we also test our framework by replacing each one of the components with its ground truth information.",
"As seen in Table 5, the results obtained in this setting show significant improvement with the replacement of each component in our framework, which suggests that additional work on these components will have positive implications on the overall system.",
"Table 5 : Comparison between our model while replacing different components with their ground truth information.",
"Identifying speakers is a critical task for understanding the dialogue and storyline in movies.",
"MovieQA is a challenging dataset for movie understanding.",
"The dataset consists of 14,944 multiple choice questions about 408 movies.",
"Each question has five answers and only one of them is correct.",
"The dataset is divided into three splits: train, validation, and test according to the movie titles.",
"Importantly, there are no overlapping movies between the splits.",
"Table 6 shows examples of the question and answers in the MovieQA dataset.",
"Figure 3 : The diagram describing our Speaker-based Convolutional Memory Network (SC-MemN2N) model.",
"The MovieQA 2017 Challenge 3 consists of six different tasks according to the source of information used to answer the questions.",
"Given that for many of the movies in the dataset the videos are not completely available, we develop our initial system so that it only relies on the subtitles; we thus participate in the challenge subtitles task, which includes the dialogue (without the speaker information) as the only source of information to answer questions.",
"To demonstrate the effectiveness of our speaker naming approach, we design a model based on an end-to-end memory network (Sukhbaatar et al., 2015), namely Speaker-based Convolutional Memory Network (SC-MemN2N), which relies on the MovieQA dataset, and integrates the speaker naming approach as a component in the network.",
"Specifically, we use our speaker naming framework to infer the name of the speaker for each segment of the subtitles, and prepend the predicted speaker name to each turn in the subtitles.",
"4 To represent the movie subtitles, we represent each turn in the subtitles as the mean-pooling of a 300-dimension pretrained word2vec (Mikolov et al., 2013) representation of each word in the sentence.",
"We similarly represent the input questions and their corresponding answers.",
"Given a question, we use the SC-MemN2N memory to find an answer.",
"For questions asking about specific characters, we keep the memory slots that have the characters in question as speakers or mentioned in, and mask out the rest of the memory slots.",
"Figure 3 http://movieqa.cs.toronto.edu/workshops/iccv2017/ 4 We strictly follow the challenge rules, and only use text to infer the speaker names.",
"Table 6 : Example of questions and answers from the MQA benchmark.",
"The answers in bold are the correct answers to their corresponding question.",
"3 shows the architecture of our model.",
"Table 7 includes the results of our system on the validation and test sets, along with the best systems introduced in previous work, showing that our SC-MemN2N achieves the best performance.",
"Furthermore, to measure the effectiveness of adding the speaker names and masking, we test our model after removing the names from the network (C-MemN2N).",
"As seen from the results, the gain of SC-MemN2N is statistically significant 5 compared to a version of the system that does not include the speaker names (C-MemN2N).",
"Figure 4 shows the performance of both C-MemN2N and SC-MemN2N models by question type.",
"The results suggest that our speaker naming helps the model better distinguish between characters, and that prepending the speaker names to the subtitle segments improves the ability of the memory network to correctly identify the supporting facts from the story that answers a given question.",
"Table 7 : Performance comparison for the subtitles task on the MovieQA 2017 Challenge on both validation and test sets.",
"We compare our models with the best existing models (from the challenge leaderboard).",
"In this paper, we proposed a unified optimization framework for the task of speaker naming",
"Figure 4 : Accuracy comparison according to question type.",
"in movies.",
"We addressed this task under a difficult setup, without a cast-list, without supervision from a script, and dealing with the complicated conditions of real movies.",
"Our model includes textual, visual, and acoustic modalities, and incorporates several grammatical and acoustic constraints.",
"Empirical experiments on a movie dataset demonstrated the effectiveness of our proposed method with respect to several competitive baselines.",
"We also showed that an SC-MemN2N model that leverages our speaker naming model can achieve state-of-the-art results on the subtitles task of the MovieQA 2017 Challenge.",
"The dataset annotated with character names introduced in this paper is publicly available from http://lit.eecs.umich.edu/ downloads.html .",
"We would like to thank the anonymous reviewers for their valuable comments and suggestions.",
"This work is supported by a Samsung research grant and by a DARPA grant HR001117S0026-AIDA-FP-045."
] | [
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"method",
"result",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"other",
"other",
"other"
] |
[
"Attention mechanisms have seen wide adoption in neural NLP models.",
"In addition to improving predictive performance, these are often touted as affording transparency: models equipped with attention provide a distribution over attended-to input units, and this is often presented (at least implicitly) as communicating the relative importance of inputs.",
"However, it is unclear what relationship exists between attention weights and model outputs.",
"In this work we perform extensive experiments across a variety of NLP tasks that aim to assess the degree to which attention weights provide meaningful explanations\" for predictions. We find that they largely do not. For example, learned attention weights are frequently uncorrelated with gradient-based measures of feature importance, and one can identify very different attention distributions that nonetheless yield equivalent predictions. Our findings show that standard attention modules do not provide meaningful explanations and should not be treated as though they do. Code to reproduce all experiments is available at https://github.com/successar/ AttentionExplanation . 1 Introduction and Motivation Attention mechanisms (Bahdanau et al., 2014) induce conditional distributions over input units to compose a weighted context vector for downstream modules. These are now a near-ubiquitous component of neural NLP architectures. Attention weights are often claimed (implicitly or explicitly) to afford insights into the inner-workings of models: for a given output one can inspect the inputs to which the model assigned large attention weights. Li et al. (2016) summarized this commonly held view in NLP: Attention provides an important way to explain the workings of neural models\".",
"Indeed, claims that attention provides after 15 minutes watching the movie i was asking myself what to do leave the theater sleep or try to keep watching the movie to see if there was anything worth i finally watched the movie what a waste of time maybe i am not a 5 years old kid anymore original adversarial after 15 minutes watching the movie i was asking myself what to do leave the theater sleep or try to keep watching the movie to see if there was anything worth i finally watched the movie what a waste of time maybe i am not a 5 years old kid anymore f ( x | , ) = 0 .",
"interpretability are common in the literature, e.g., (Xu et al., 2015; Choi et al., 2016; Lei et al., 2017; Martins and Astudillo, 2016; Xie et al., 2017).",
"1 Implicit in this is the assumption that the input units (e.g., words) accorded high attention weights are responsible for model outputs.",
"But as far as we are aware, this assumption has not been formally evaluated, and our findings here suggest that it is problematic.",
"More specifically, we empirically investigate the relationship between attention weights, inputs, and outputs.",
"Assuming attention provides an explanation for model predictions, we might expect the following properties to hold.",
"(i) Attention weights should correlate with feature importance measures (e.g., gradient-based measures);",
"(ii) Alternative (or counterfactual ) attention weight configurations ought to yield corresponding changes in prediction (and if they do not then are equally plausible as explanations).",
"We report that neither property is consistently observed by standard attention mechanisms in the context of text classification, question answering (QA), and Natural Language Inference (NLI) tasks when RNN encoders are used.",
"1 We do not intend to single out any particular work; indeed one of the authors has himself presented (supervised) attention as providing interpretability (Zhang et al., 2016).",
"Consider Figure 1. The left panel shows the original attention distribution over the words of a particular movie review using a standard attentive BiLSTM architecture for sentiment analysis.",
"It is tempting to conclude from this that the token waste is largely responsible for the model coming to its disposition of negative' ( y = 0 . 01 ).",
"But one can construct an alternative attention distribution (right panel) that attends to entirely different tokens yet yields an essentially identical prediction (holding all other parameters of f , , constant).",
"Such counterfactual distributions imply that explaining the original prediction by highlighting attended-to tokens is misleading insofar as alternative attention distributions would have yielded an equivalent prediction (e.g., one might conclude from the right panel that model output was due primarily to was rather than waste ).",
"Further, the attention weights in this case correlate only weakly with gradient-based measures of feature importance ( g = 0 . 29 ).",
"And arbitrarily permuting the entries in yields a median output difference of 0.006 with the original prediction.",
"These and similar findings call into question the view that attention provides meaningful insight into model predictions.",
"We thus caution against using attention weights to highlight input tokens responsible for model outputs and constructing just-so stories on this basis, particularly with complex encoders.",
"Research questions and contributions .",
"We examine the extent to which the (often implicit) narrative that attention provides model transparency 2 holds across tasks by exploring the following empirical questions.",
"1. To what extent do induced attention weights correlate with measures of feature importance specifically, those resulting from gradients and leave-one-out methods?",
"2. Would alternative attention weights (and hence distinct heatmaps/explanations) necessarily yield different predictions?",
"Our findings with respect to these questions (as-suming a BiRNN encoder) are summarized as follows: (1) Only weakly and inconsistently, and, (2) No; it is very often possible to construct adversarial attention distributions that yield effectively 2 Defined as per (Lipton, 2016); we are interested in whether attended-to features are responsible for outputs.",
"equivalent predictions as when using the originally induced attention weights, despite attending to entirely different input features.",
"Further, randomly permuting attention weights often induces only minimal changes in output.",
"We consider exemplar NLP tasks for which attention mechanisms are commonly used: classification, natural language inference (NLI), and question answering.",
"3 We adopt the following general modeling assumptions and notation.",
"We assume model inputs x RT | V | , composed of one-hot encoded words at each position.",
"These are passed through an embedding matrix E which provides dense ( d dimensional) token representations x e RT d .",
"Next, an encoder Enc consumes the embedded tokens in order, producing T m -dimensional hidden states: h = Enc ( x e ) RT m .",
"We predominantly consider a Bi-RNN as the encoder module, but for completeness we also analyze convolutional and (unordered) aver-age embedding' variants.",
"4 A similarity function maps h and a query Q R m (e.g., hidden representation of a question in QA, or the hypothesis in NLI) to scalar scores, and attention is then induced over these: = softmax ( ( h , Q )) RT .",
"In this work we consider two common similarity functions: Additive ( h , Q ) = v T tanh ( W 1 h + W 2 Q ) (Bahdanau et al., 2014) and Scaled Dot-Product ( h , Q ) = hQ m (Vaswani et al., 2017), where v , W 1 , W 2 are model parameters.",
"Finally, a dense layer Dec with parameters consumes a weighted instance representation and yields a prediction y = ( h ) R |Y| , where h = (cid:80) Tt =1 t h t ; is an output activation function; and |Y| denotes the label set size.",
"For binary text classification , we use: Stanford Sentiment Treebank (SST) (Socher et al., 2013).",
"10,662 sentences tagged with sentiment on a scale from 1 (most negative) to 5 (most positive).",
"We filter out neutral instances and dichotomize the remaining sentences into positive (4, 5) and negative (1, 2).",
"IMDB Large Movie Reviews Corpus (Maas et al., 2011).",
"Binary sentiment classification dataset containing 50,000 polarized (positive or negative) movie reviews, split into half for training and testing.",
"Twitter Adverse Drug Reaction dataset (Nikfar-jam et al., 2015).",
"A corpus of 8000 tweets retrieved from Twitter, annotated by domain experts as mentioning adverse drug reactions.",
"20 Newsgroups (Hockey vs Baseball) .",
"Collection of 20,000 newsgroup correspondences, partitioned (nearly) evenly across 20 categories.",
"We extract instances belonging to baseball and hockey , which we designate as 0 and 1, respectively, to derive a binary classification task.",
"5 496,835 news articles from 2000+ sources.",
"AG News Corpus (Business vs World) .",
"We follow (Zhang et al., 2015) in filtering out all but the top 4 categories.",
"We consider the binary classification task of discriminating between world (0) and business (1) articles.",
"MIMIC ICD9 (Diabetes) (Johnson et al., 2016).",
"A subset of discharge summaries from the MIMIC III dataset of electronic health records.",
"The task is to recognize if a given summary has been labeled with the ICD9 code for diabetes (or not).",
"MIMIC ICD9 (Chronic vs Acute Anemia) (John-son et al., 2016).",
"A subset of discharge summaries from MIMIC III dataset (Johnson et al., 2016) known to correspond to patients with anemia.",
"Here the task to distinguish the type of anemia for each report acute (0) or chronic (1).",
"matic parsing of news articles from CNN.",
"Each instance comprises a paragraph-question-answer triplet, where the answer is one of the anonymized entities in the paragraph.",
"bAbI (Weston et al., 2015).",
"We consider the three tasks presented in the original bAbI dataset paper, training separate models for each.",
"These entail finding",
"(i) a single supporting fact for a question and",
"(ii) two or",
"(iii) three supporting statements, chained together to compose a coherent line of reasoning.",
"The SNLI dataset (Bowman et al., 2015).",
"570k human-written English sentence pairs manually labeled for balanced classification with the labels neutral , contradiction , and entailment , supporting the task of natural language inference (NLI).",
"In this work, we generate an attention distribution over premise words conditioned on the hidden representation induced for the hypothesis.",
"We restrict ourselves to comparatively simple instantiations of attention mechanisms, as described in the preceding section.",
"This means we do not consider recently proposed BiAttentive' architectures that attend to tokens in the respective inputs, conditioned on the other inputs (Parikh et al., 2016; Seo et al., 2016; Xiong et al., 2016).",
"Table 1 provides summary statistics for all datasets, as well as the observed test performances for additional context.",
"We run a battery of experiments that aim to examine empirical properties of learned attention weights and to interrogate their interpretability and transparency.",
"The key questions are: Do Gradient (BiLSTM) g Gradient (Average) g Leave-One-Out (BiLSTM) loo Dataset Class Mean Std.",
"learned attention weights agree with alternative, natural measures of feature importance ?",
"And, Had we attended to different features, would the prediction have been different ?",
"More specifically, in Section 4.1, we empirically analyze the correlation between gradient-based feature importance and learned attention weights, and between leave-one-out' (LOO) measures and the same.",
"In Section 4.2 we then consider counterfactual (to those observed) attention distributions.",
"Under the assumption that attention weights are explanatory, such counterfactual distributions may be viewed as alternative potential explanations; if these do not correspondingly change model output, then the original attention weights do not provide unique explanation for predictions, i.e., attending to other features could have resulted in the same output.",
"To generate counterfactual attention distributions, we first consider randomly permuting observed attention weights and recording associated changes in model outputs (4.2.1).",
"We then propose explicitly searching for adversarial attention weights that maximally differ from the observed attention weights (which one might show in a heatmap and use to explain a model prediction), and yet yield an effectively equivalent prediction (4.2.2).",
"The latter strategy also provides a useful potential metric for the reliability of attention weights as explanations: we can report a measure quantifying how different attention weights can be for a given instance without changing the model output by more than some threshold (cid:15) .",
"All results presented below are generated on test sets.",
"We present results for Additive attention below.",
"The results for Scaled Dot Product in its place are comparable.",
"We provide a web interface to interactively browse the (very large set of) plots for all datasets, model variants, and experiment types: https://successar.github.",
"io/AttentionExplanation/docs/ .",
"In the following sections, we use Total Variation Distance (TVD) as the measure of change between output distributions, defined as follows.",
"TVD ( y 1 , y 2 ) = 12 (cid:80) |Y| i =1 | y 1 i y 2 i | .",
"We use the Jensen-Shannon Divergence (JSD) to quantify the difference between two attention distributions: JSD ( 1 , 2 ) = 12 KL [ 1 || 1 + 2 2 ] + 12 KL [ 2 || 1 + 2 2 ] .",
"We empirically characterize the relationship between attention weights and corresponding feature importance scores.",
"Specifically we measure correlations between attention and: (1) gradient based measures of feature importance ( g ), and, (2) differences in model output induced by leaving features out ( loo ).",
"While these measures are themselves insufficient for interpretation of neu-1.0 0.5 0.0 0.5 1.0 0.000 0.025 0.050 0.075 0.100 0.125",
"ral model behavior (Feng et al., 2018), they do provide measures of individual feature importance with known semantics (Ross et al., 2017).",
"It is thus instructive to ask whether these measures correlate with attention weights.",
"The process we follow to quantify this is described in Algorithm 1. We denote the input resulting from removing the word at position t in x by x t .",
"Note that we disconnect the computation graph at the attention module so that the gradient does not flow through this layer.",
"Table 2 reports summary statistics of Kendall correlations for each dataset.",
"Full distributions are shown in Figure 2, which plots histograms of g for every data point in the respective corpora.",
"(Corresponding plots for loo are similar and the full set can be browsed via the online supplement.)",
"We plot these separately for each class: orange ( (cid:4) ) represents instances predicted as positive, and purple ( (cid:4) ) those predicted to be negative.",
"For SNLI, colors (cid:4) , (cid:4) and (cid:4) code for contradiction, entailment, and neutral respectively.",
"In general, observed correlations are modest (recall: 0 indicates no correspondence, 1 implies perfect concordance) for the BiRNN encoder.",
"The centrality of observed densities hovers around or below 0.5 in most of the corpora considered.",
"Moreover, as per Table 2, correlation is sufficiently weak that a statistically significant correlation between attention weights and feature importance scores (both gradient and feature erasure based) cannot consistently be established across corpora.",
"In contrast, gradients in average embedding based models show very high degree of correspondence with attention weights on average across corpora, correlation between LOO scores and attention weights is 0.375 points higher for this encoder, compared to the BiLSTM.",
"These results suggest that, in general, attention weights do not strongly or consistently agree with such feature importance scores in models with contextualized embeddings.",
"This is problematic for the view of attention weights as explanatory, given the face validity of input gradient/erasure based explanations (Ross et al., 2017; Li et al., 2016).",
"On some datasets notably the MIMIC tasks, and to a lesser extent the QA corpora this correlation is consistently significant but remains relatively weak.",
"This could be attributed to increased length of documents for these datasets providing stronger signal to standard hypothesis testing methods.",
"For reference we report correlations between gradients and LOO scores in the Appendix and online materials; these are consistently stronger than the correlation between attention weights and either feature importance score for the recurrent (BiLSTM) encoder.",
"These exhibit, on average, a",
"(i) 0 .",
"2 and",
"(ii) 0 .",
"25 greater correlation with each other than BiLSTM attention and",
"(i) LOO and",
"(ii) gradient scores.",
"We next consider what-if scenarios corresponding to alternative (counterfactual) attention weights.",
"The idea is to investigate whether the prediction would have been different, had the model emphasized (attended to) different input features.",
"More precisely, suppose = { t } Tt =1 are the attention weights induced for an instance, giving rise to model output y .",
"We then consider counterfactual distributions over y , under alternative .",
"We experiment with two means of constructing such distributions.",
"First, we simply scramble the original attention weights , re-assigning each value to an arbitrary, randomly sampled index (input feature).",
"Second, we generate an adversarial attention distribution : this is a set of attention weights that is maximally distinct from but that nonetheless yields an equivalent prediction (i.e., prediction within some (cid:15) of y ).",
"To characterize model behavior when attention weights are shuffled, we follow Algorithm 2.",
"Figure 3 depicts the relationship between the maximum attention value in the original and the median induced change in model output ( y med ) across instances in the respective datasets.",
"Colors again indicate class predictions, as above.",
"We observe that there exist many points with small y med despite large magnitude attention weights.",
"These are cases in which the attention weights might suggest explaining an output by a small set of features (this is how one might reasonably read a heatmap depicting the attention weights), but where scrambling the attention makes little difference to the prediction.",
"In some cases, such as predicting ICD codes from notes using the MIMIC dataset, one can see different behavior for the respective classes.",
"For the Diabetes task, e.g., attention behaves intuitively for at least the positive class; perturbing attention in this case causes large changes to the prediction.",
"We again conjecture that this is due to a few tokens serving as high precision indicators for the positive class; in their absence (or when they are not attended to sufficiently), the prediction drops considerably.",
"However, this is the exception rather than the rule.",
"We next propose a more focused approach to counterfactual attention weights, which we will refer to as adversarial attention .",
"The intuition is to explicitly seek out attention weights that differ as much as possible from the observed attention distribution and yet leave the prediction effectively unchanged.",
"Such adversarial weights violate an intuitive property of explanations: shifting model attention to very different input features should yield corresponding changes in the output.",
"Alternative attention distributions identified adversarially may then be viewed as equally plausible explanations for the same output.",
"Operationally, realizing this objective requires specifying a value (cid:15) that defines what qualifies as a small difference in model output.",
"Once this is specified, we aim to find k adversarial distributions { (1) , ..., ( k ) } , such that each ( i ) maximizes the distance from original but does not change the output by more than (cid:15) .",
"In practice we simply set this to 0 .",
"01 for text classification and 0 .",
"05 for QA datasets.",
"6 We propose the following optimization problem to identify adversarial attention weights.",
"maximize (1) ,..., ( k ) f ( { ( i ) } ki =1 ) subject to i TVD [ y ( x , ( i ) ) , y ( x , )] (cid:15) (1) Where f ( { ( i ) } ki =1 ) is: k (cid:88) i =1 JSD [ ( i ) , ] + 1 k ( k 1) (cid:88) i<j JSD [ ( i ) , ( j ) ] (2) 6 We make the threshold slightly higher for QA because the output space is larger and thus small dimension-wise perturbations can produce comparatively large TVD.",
"In practice we maximize a relaxed version of this objective via the Adam SGD optimizer (Kingma and Ba, 2014): f ( { ( i ) } ki =1 ) + k (cid:80) ki =1 max(0 , TVD [ y ( x , ( i ) ) , y ( x , )] (cid:15) ) .",
"7 Equation 1 attempts to identify a set of new attention distributions over the input that is as far as possible from the observed (as measured by JSD) and from each other (and thus diverse), while keeping the output of the model within (cid:15) of the original prediction.",
"We denote the output obtained under the i th adversarial attention by y ( i ) .",
"Note that the JS Divergence between any two categorical distributions (irrespective of length) is bounded from above by 0.69.",
"One can view an attentive decoder as a function that maps from the space of latent input representations and attention weights over input words T 1 to a distribution over the output space Y .",
"Thus, for any output y , we can define how likely each attention distribution will generate the output as inversely proportional to TVD ( y ( ) , y ) .",
"Figure 4 depicts the distributions of max JSDs realized over instances with adversarial attention weights for a subset of the datasets considered.",
"Colors again indicate predicted class.",
"Mass toward the upper-bound of 0.69 indicates that we are frequently able to identify maximally different attention weights that hardly budge model output.",
"We observe that one can identify adversarial attention weights associated with high JSD for a significant number of examples.",
"This means that is often the 7 We set = 500 .",
"case that quite different attention distributions over inputs would yield essentially the same (within (cid:15) output.",
"In the case of the diabetes task, we again observe a pattern of low JSD for positive examples (where evidence is present) and high JSD for negative examples.",
"In other words, for this task, if one perturbs the attention weights when it is inferred that the patient is diabetic, this does change the output, which is intuitively agreeable.",
"However, this behavior again is an exception to the rule.",
"We also consider the relationship between max attention weights (indicating strong emphasis on a particular feature) and the dissimilarity of identified adversarial attention weights, as measured via JSD, for adversaries that yield a prediction within (cid:15) of the original model output.",
"Intuitively, one might hope that if attention weights are peaky, then counterfactual attention weights that are very different but which yield equivalent predictions 0.0 0.2 0.4 0.6 Max JS Divergence within 0.00 0.02 0.04 0.06 0.0 0.2 0.4 0.6 Max JS Divergence within [0.00,0.25) [0.25,0.50) [0.50,0.75) [0.75,1.00) M a x A tt e n t i o n",
"Figure 5 illustrates that while there is a negative trend to this effect, it is realized only weakly.",
"Put another way: there exist many cases (in all datasets) in which despite a high attention weight, an alternative and quite different attention configuration over inputs yields effectively the same output.",
"In light of this, presenting a heatmap implying that a particular set of features is primarily responsible for an output would seem to be misleading.",
"We have focused on attention mechanisms and the question of whether they afford transparency, but a number of interesting strategies unrelated to attention mechanisms have been recently proposed to provide insights into neural NLP models.",
"These include approaches that measure feature importance based on gradient information (Ross et al., 2017; Sundararajan et al., 2017) (aligned with the gradient-based measures that we have used here), and methods based on representation erasure (Li et al., 2016), in which dimensions are removed and then the resultant change in output is recorded (similar to our experiments with removing tokens from inputs, albeit we do this at the input layer).",
"Comparing such importance measures to attention scores may provide additional insights into the working of attention based models (Ghaeini et al., 2018).",
"Another novel line of work in this direction involves explicitly identifying explanations of black-box predictions via a causal framework (Alvarez-Melis and Jaakkola, 2017).",
"We also note that there has been complementary work demonstrating correlation between human attention and induced attention weights, which was relatively strong when humans agreed on an explanation (Pappas and Popescu-Belis, 2016).",
"It would be interesting to explore if such cases present explicit high precision' signals in the text (for example, the positive label in diabetes dataset).",
"More specific to attention mechanisms, re-cent promising work has proposed more principled attention variants designed explicitly for interpretability; these may provide greater transparency by imposing hard , sparse attention.",
"Such instantiations explicitly select (modest) subsets of inputs to be considered when making a prediction, which are then by construction responsible for model output (Lei et al., 2016; Peters et al., 2018).",
"Structured attention models (Kim et al., 2017) provide a generalized framework for describing and fitting attention variants with explicit probabilistic semantics.",
"Tying attention weights to human-provided rationales is another potentially promising avenue (Bao et al., 2018).",
"We hope our work motivates further development of these methods, resulting in attention variants that both improve predictive performance and provide insights into model predictions.",
"We have provided evidence that correlation between intuitive feature importance measures (in-0.0",
"cluding gradient and feature erasure approaches) and learned attention weights is weak when using a BiRNN encoder (Section 4.1).",
"We also established that counterfactual attention distributions which would tell a different story about why a model made the prediction that it did often have no effect on model output (Section 4.2).",
"These results suggest that while attention modules consistently yield improved performance on NLP tasks, their ability to provide transparency for model predictions is (in the sense of pointing to inputs responsible for outputs) questionable.",
"More generally, how one is meant to interpret the heatmaps' of attention weights placed over inputs that are commonly presented is unclear.",
"These seem to suggest a story about how a model arrived at a particular disposition, but the results here indicate that the relationship between this and attention is not obvious, at least for RNN encoders.",
"There are important limitations to this work and the conclusions we can draw from it.",
"We have reported the (generally weak) correlation between learned attention weights and various alternative measures of feature importance, e.g., gradients.",
"We do not imply that such alternative measures are necessarily ideal or should be considered ground truth'.",
"While such measures do enjoy a clear intrinsic (to the model) semantics, their interpretation for non-linear neural networks can nonetheless be difficult for humans (Feng et al., 2018).",
"Still, that attention consistently correlates poorly with multiple such measures ought to give pause to practitioners.",
"That said, exactly how strong such correlations should' be to establish reliability as explanation is an admittedly subjective question.",
"We note that the counterfactual attention experiments demonstrate the existence of alternative heatmaps that yield equivalent predictions; thus one cannot conclude that the model made a particular prediction because it attended over inputs in a specific way.",
"But these adversarial weights may themselves be unlikely under the attention module parameters.",
"Further, it may be that multiple plausible explanations exist, complicating interpretation.",
"We would maintain that in such cases the model should highlight all plausible explanations, but one may instead view a model that provides sufficient' explanation as reasonable.",
"An additional limitation is that we have only considered a handful of attention variants, selected to reflect common module architectures for the respective tasks included in our analysis.",
"Alternative attention specifications may yield different conclusions; and indeed we hope this work motivates further development of principled attention mechanisms (or encoders).",
"Finally, we have limited our evaluation to tasks with unstructured output spaces, i.e., we have not considered seq2seq tasks, which we leave for future work.",
"However we believe interpretability is more often a consideration in, e.g., classification than in translation.",
"We thank Zachary Lipton for insightful feedback on a preliminary version of this manuscript.",
"This work was supported by the Army Research Office (ARO), award W911NF1810328."
] | [
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"method",
"objective",
"method",
"method",
"other",
"other"
] |
[
"The choice of negative examples is important in noise contrastive estimation.",
"Recent works find that hard negativeshighest-scoring incorrect examples under the modelare effective in practice, but they are used without a formal justification.",
"We develop analytical tools to understand the role of hard negatives.",
"Specifically, we view the contrastive loss as a biased estimator of the gradient of the cross-entropy loss, and show both theoretically and empirically that setting the negative distribution to be the model distribution results in bias reduction.",
"We also derive a general form of the score function that unifies various architectures used in text retrieval.",
"By combining hard negatives with appropriate score functions, we obtain strong results on the challenging task of zero-shot entity linking.",
"Noise contrastive estimation (NCE) is a widely used approach to large-scale classification and retrieval.",
"It estimates a score function of input-label pairs by a sampled softmax objective: given a correct pair ( x, y 1 ) , choose negative examples y 2 . . . y K and maximize the probability of ( x, y 1 ) in a softmax over the scores of ( x, y 1 ) . . . ( x, y K ) .",
"NCE has been successful in many applications, including information retrieval (Huang et al., 2013), entity linking (Gillick et al., 2019), and open-domain question answering (Karpukhin et al., 2020).",
"It is well known that making negatives hard can be empirically beneficial.",
"For example, Gillick et al. (2019) propose a hard negative mining strategy in which highest-scoring incorrect labels under the current model are chosen as negatives.",
"Some works even manually include difficult examples based on external information such as a ranking function (Karpukhin et al., 2020) or a knowledge base (Fvry et al., 2020).",
"While it is intuitive that such hard negatives help improve the final model by making the learning task more challenging, they are often used without a formal justification.",
"Existing theoretical results in contrastive learning are not suitable for understanding hard negatives since they focus on unconditional negative distributions (Gutmann and Hyvri-nen, 2012; Mnih and Teh, 2012; Ma and Collins, 2018; Tian et al., 2020) or consider a modified loss divergent from practice (Bengio and Sencal, 2008).",
"In this work, we develop analytical tools to understand the role of hard negatives.",
"We formalize hard-negative NCE with a realistic loss (5) using a general conditional negative distribution, and view it as a biased estimator of the gradient of the cross-entropy loss.",
"We give a simple analysis of the bias (Theorem 3.1).",
"We then consider setting the negative distribution to be the model distribution, which recovers the hard negative mining strategy of Gillick et al. (2019), and show that it yields an unbiased gradient estimator when the model is optimal (Theorem 3.2).",
"We complement the gradient-based perspective with an adversarial formulation (Theo-rem 3.3).",
"The choice of architecture to parametrize the score function is another key element in NCE.",
"There is a surge of interest in developing efficient cross-attentional architectures (Humeau et al., 2020; Khattab and Zaharia, 2020; Luan et al., 2020), but they often address different tasks and lack direct comparisons.",
"We give a single algebraic form of the score function (9) that subsumes and generalizes these works, and directly compare a spectrum of architectures it induces.",
"We present experiments on the challenging task of zero-shot entity linking (Logeswaran et al., 2019).",
"We calculate empirical estimates of the bias of the gradient estimator to verify our analysis, and systematically explore the joint space of negative examples and architectures.",
"We have clear practical recommendations:",
"(i) hard negative mining always improves performance for all architectures, and",
"(ii) the sum-of-max encoder (Khattab and Zaharia, 2020) yields the best recall in entity retrieval.",
"Our final model combines the sum-of-max retriever with a BERT -based joint reranker to achieve 67.1% unnormalized accuracy: a 4.1% absolute improvement over Wu et al. (2020).",
"We also present complementary experiments on AIDA CoNLL-YAGO (Hoffart et al., 2011) in which we finetune a Wikipedia-pretrained dual encoder with hard-negative NCE and show a 6% absolute improvement in accuracy.",
"Let X and Y denote input and label spaces.",
"We assume |Y| < for simplicity.",
"Let pop denote a joint population distribution over X Y .",
"We define a score function s : X Y R differentiable in R d .",
"Given sampling access to pop , we wish to estimate such that the classifier x (cid:55) arg max y Y s ( x, y ) (breaking ties arbitrarily) has the optimal expected zero-one loss.",
"We can reduce the problem to conditional density estimation.",
"Given x X , define p ( y | x ) = exp ( s ( x, y )) (cid:80) y (cid:48) Y exp ( s ( x, y (cid:48) )) (1) for all y Y .",
"If the score function is sufficiently expressive, satisfies p ( y | x ) = pop ( y | x ) by the usual property of cross entropy.",
"This implies that s can be used as an optimal classifier.",
"The cross-entropy loss is difficult to optimize when Y is large since the normalization term in (1) is expensive to calculate.",
"In NCE, we dodge this difficulty by subsampling.",
"Given x X and any K labels y 1: K = ( y 1 . . . y K ) YK , define ( k | x, y 1: K ) = exp ( s ( x, y k )) (cid:80) Kk (cid:48) =1 exp ( s ( x, y k (cid:48) )) (3) for all 1 k K .",
"where y 2: K YK 1 are negative examples drawn iid from some noise distribution q over Y .",
"Popular choices of q include the uniform distribution q ( y ) = 1 / |Y| and the population marginal q ( y ) = pop ( y ) .",
"The NCE loss (4) has been studied extensively.",
"An optimal classifier can be extracted from a minimizer of JNCE (Ma and Collins, 2018); minimizing JNCE can be seen as maximizing a lower bound on the mutual information between ( x, y ) pop if q is the population marginal (Oord et al., 2018).",
"We refer to Stratos (2019) for an overview.",
"However, most of these results focus on unconditional negative examples and do not address hard negatives, which are clearly conditional.",
"We now focus on conditional negative distributions, which are more suitable for describing hard negatives.",
"Given K 2 , we define",
"where y 2: K YK 1 are negative examples drawn from a conditional distribution h ( | x, y 1 ) given ( x, y 1 ) pop .",
"Note that we do not assume y 2: K are iid.",
"While simple, this objective captures the essence of using hard negatives in NCE, since the negative examples can arbitrarily condition on the input and the gold (e.g., to be wrong but difficult to distinguish from the gold) and be correlated (e.g., to avoid duplicates).",
"We give two interpretations of optimizing JHARD .",
"First, we show that the gradient of JHARD is a biased estimator of the gradient of the cross-entropy loss JCE .",
"Thus optimizing JHARD approximates optimizing JCE when we use a gradient-based method, where the error depends on the choice of h ( | x, y 1 ) .",
"Second, we show that the hard negative mining strategy can be recovered by considering an adversarial setting in which h ( | x, y 1 ) is learned to maximize the loss.",
"We assume an arbitrary choice of h ( | x, y 1 ) and K 2 .",
"Denote the bias at R d by b ( ) = JCE ( ) JHARD ( ) To analyze the bias, the following quantity will be important.",
"2: K 1 k ( | x,y 1: K )",
"For all i = 1 . . . d , b i ( ) = E x pop (cid:88) y Y (cid:15) ( y | x ) s ( x, y ) i where (cid:15) ( y | x ) = p ( y | x ) ( y | x ) .",
"Proof.",
"Fix any x X and let J x CE ( ) and J x HARD ( ) denote JCE ( ) and JHARD ( ) conditioned on x .",
"The difference J x CE ( ) J x HARD ( ) is log Z ( x ) E y 1 pop ( | x ) y 2: K h ( | x,y 1 ) [log Z ( x, y 1: K )] (7) where we define Z ( x ) = (cid:80) y (cid:48) Y exp ( s ( x, y (cid:48) )) and Z ( x, y 1: K ) = (cid:80) Kk =1 exp( s ( x, y k )) .",
"The statement follows from the chain rule: b i ( ) = (cid:88) x X ,y Y b ( ) s ( x, y ) s ( x, y ) i Theorem 3.1 states that the bias vanishes if ( y | x ) matches p ( y | x ) .",
"for all y Y .",
"That is, ( y | x ) is the probability that y is included as a candidate (either as the gold or a negative) and then selected by the NCE discriminator (3).",
"Theorem 3.1.",
"For any ( x, y ) , the partial derivative of (7) with respect to s ( x, y ) is given by [[ x = x ]] p ( y | x ) [[ x = x ]] ( y | x ) where [[ A ]] is the indicator function that takes the value 1 if A is true and 0 otherwise.",
"Taking an expectation of their difference over x pop gives the partial derivative of b ( ) = JCE ( ) JHARD ( ) with respect to s ( x, y ) : pop ( x )( p ( y | x ) ( y | x )) .",
"Hard negative mining can be seen as an attempt to minimize the bias by defining h ( | x, y 1 ) in terms of p .",
"Specifically, we define h ( y 2: K | x, y 1 ) [[ |{ y 1 . . . y K }| = K ]] K (cid:89) k =2 p ( y k | x ) (8) Thus h ( | x, y 1 ) has support only on y 2: K YK 1 that are distinct and do not contain the gold.",
"sampling from h ( | x, y 1 ) corresponds to taking K 1 incorrect label types with highest scores.",
"This coincides with the hard negative mining strategy of Gillick et al. (2019).",
"The absence of duplicates in y 1: K ensures JCE ( ) = JHARD ( ) if K = |Y| .",
"This is consistent with (but does not imply) Theorem 3.1 since in this case ( y | x ) = p ( y | x ) .",
"For general K < |Y| , Theorem 3.1 still gives a precise bias term.",
"To gain a better insight into its behavior, it is helpful to consider a heuristic approximation given by 1 ( y | x ) p ( y | x ) exp ( s ( x, y )) N ( x ) where N ( x ) = (cid:80) y (cid:48) Y p ( y (cid:48) | x ) exp ( s ( x, y (cid:48) )) .",
"where ( x, y ) = exp ( s ( x, y )) /N ( x ) .",
"The expression suggests that the bias becomes smaller as the model improves since p ( | x ) pop ( | x ) implies ( x, y ) 1 where ( x, y ) pop .",
"We can formalize the heuristic argument to prove a desirable property of (5): the gradient is unbiased if satisfies p ( y | x ) = pop ( y | x ) , assuming iid hard negatives.",
"Theorem 3.2.",
"Assume K 2 and the distribution h ( y 2: K | x, y 1 ) = (cid:81) Kk =2 p ( y k | x ) in (5).",
"If p ( y | x ) = pop ( y | x ) , then JHARD ( ) = JCE ( ) .",
"Proof.",
"Since pop ( y | x ) = exp( s ( x, y )) /Z ( x ) , the probability ( y | x ) in (6) is (cid:88) y 1: K Y KK (cid:89) k =1 exp ( s ( x, y k )) Z ( x ) exp ( s ( x, y )) Z ( x, y 1: K ) = exp ( s ( x, y )) Z ( x ) (cid:88) y 1: K Y K (cid:81) Kk =1 exp ( s ( x, y k )) Z ( x, y 1: K ) The sum marginalizes a product distribution over y 1: K , thus equals one.",
"Hence ( y | x ) = p ( y | x ) .",
"The statement follows from Theorem 3.1.",
"1 We can rewrite ( y | x ) as E y 1 pop ( | x ) y 2: K h ( | x,y 1 ) (cid:34) count y 1: K ( y ) exp ( s ( x, y )) (cid:80) y (cid:48) Y count y 1: K ( y (cid:48) ) exp ( s ( x, y (cid:48) )) (cid:35) where count y 1: K ( y ) is the number of times y appears in y 1: K .",
"The proof exploits the fact that negative examples are drawn from the model and does not generally hold for other negative distributions (e.g., uniformly random).",
"We empirically verify that hard negatives indeed yield a drastically smaller bias compared to random negatives (Section 6.4).",
"We complement the bias-based view of hard negatives with an adversarial view.",
"We generalize (5) and define JADV ( , h ) = E ( x,y 1 ) pop y 2: K h ( | x,y 1 ) [ log (1 | x, y 1: K )] where we additionally consider the choice of a hard-negative distribution.",
"The premise of adversarial learning is that it is beneficial for to consider the worst-case scenario when minimizing this loss.",
"This motivates a nested optimization problem: min R d max h H JADV ( , h ) where H denotes the class of conditional distributions over S Y satisfying | S { y 1 }| = K .",
"Theorem 3.3.",
"Fix R d .",
"For any ( x, y 1 ) , pick y 2: K arg max y 2: K Y K 1 : |{ y 1 ...y K }| = K K (cid:88) k =2 s ( x, y k ) breaking ties arbitrarily, and define the point-mass distribution over YK 1 : h ( y 2: K | x, y 1 ) = [[ y k = y k k = 2 . . . K ]] Then h arg max h H JADV ( , h ) .",
"Proof.",
"max h H JADV ( , h ) is equivalent to max h H E ( x,y 1 ) pop y 2: K h ( | x,y 1 ) (cid:34) log K (cid:88) k =1 exp ( s ( x, y k )) (cid:35) The expression inside the expectation is maximized by y 2: K by the monotonicity of log and exp , subject to the constraint that |{ y 1 . . . y K }| = K .",
"h H achieves this maximum.",
"Along with the choice of negatives, the choice of the score function s : X Y R is a critical component of NCE in practice.",
"There is a clear tradeoff between performance and efficiency in modeling the cross interaction between the input-label pair ( x, y ) .",
"This trade-off spurred many recent works to propose various architectures in search of a sweet spot (Humeau et al., 2020; Luan et al., 2020), but they are developed in isolation of one another and difficult to compare.",
"In this section, we give a general algebraic form of the score function that subsumes many of the existing works as special cases.",
"We focus on the standard setting in NLP in which x VT and y VT (cid:48) are sequences of tokens in a vocabulary V .",
"Let E ( x ) RH T and F ( y ) RH T (cid:48) denote their encodings, typically obtained from the final layers of separate pretrained transformers like BERT (Devlin et al., 2019).",
"We follow the convention popularized by BERT and assume the first token is a special symbol (i.e., [CLS]), so that E 1 ( x ) and F 1 ( y ) represent single-vector summaries of x and y .",
"We have the following design choices: Direction : If x y , define the query Q = E ( x ) and key K = F ( y ) .",
"If y x , define the query Q = F ( y ) and key K = E ( x ) .",
"Reduction : Given integers m, m (cid:48) , reduce the number of columns in Q and K to obtain Q m RH m and K m (cid:48) RH m (cid:48) .",
"We can simply select leftmost columns, or introduce an additional layer to perform the reduction.",
"Attention : Choose a column-wise attention Attn : A (cid:55) s A either Soft or Hard .",
"If Soft , s A t = softmax( A t ) where the subscript denotes the column index.",
"If Hard , s A t is a vector of zeros with exactly one 1 at index arg max i [ A t ] i .",
"Given the design choices, we define the score of ( x, y ) as s ( x, y ) = 1 (cid:62) m Q (cid:62) m K m (cid:48) Attn (cid:16) K (cid:62) m (cid:48) Q m (cid:17) (9) where 1 m is a vector of m 1s that aggregates query scores.",
"Note that the query embeddings Q m double as the value embeddings.",
"The parameter vector R d denotes the parameters of the encoders E, F and the optional reduction layer.",
"Dual encoder.",
"Choose either direction x y or y x .",
"Select the leftmost m = m (cid:48) = 1 vectors in Q and K as the query and key.",
"The choice of attention has no effect.",
"This recovers the standard dual encoder used in many retrieval problems (Gupta et al., 2017; Lee et al., 2019; Logeswaran et al., 2019; Wu et al., 2020; Karpukhin et al., 2020; Guu et al., 2020): s ( x, y ) = E 1 ( x ) (cid:62) F 1 ( y ) .",
"Poly-encoder.",
"Choose the direction y x .",
"Select the leftmost m = 1 vector in F ( y ) as the query.",
"Choose an integer m (cid:48) and compute K m (cid:48) = E ( x )Soft( E ( x ) (cid:62) O ) where O RH m (cid:48) is a learnable parameter (code embeddings).",
"Choose soft attention.",
"This recovers the poly-encoder (Humeau et al., 2020): s ( x, y ) = F 1 ( y ) (cid:62) C m (cid:48) ( x, y ) where C m (cid:48) ( x, y ) = K m (cid:48) Soft (cid:0) K (cid:62) m (cid:48) F 1 ( y ) (cid:1) .",
"Similar architectures without length reduction have been used in previous works, for instance the neural attention model of Ganea and Hofmann (2017).",
"Sum-of-max.",
"Choose the direction x y .",
"Select all m = T and m (cid:48) = T (cid:48) vectors in E ( x ) and F ( y ) as the query and key.",
"Choose Attn = Hard .",
"This recovers the sum-of-max encoder (aka., Col-BERT) (Khattab and Zaharia, 2020): s ( x, y ) = (cid:80) Tt =1 max T (cid:48) t (cid:48) =1 E t ( x ) (cid:62) F t (cid:48) ( y ) .",
"Multi-vector.",
"Choose the direction x y .",
"Select the leftmost m = 1 and m (cid:48) = 8 vectors in E ( x ) and F ( y ) as the query and key.",
"Choose Attn = Hard .",
"This recovers the multi-vector encoder (Luan et al., 2020): s ( x, y ) = max m (cid:48) t (cid:48) =1 E 1 ( x ) (cid:62) F t (cid:48) ( y ) .",
"It reduces computation to fast dot products over cached embeddings, but is less expressive than the sum-of-max.",
"The abstraction (9) is useful because it generates a spectrum of architectures as well as unifying existing ones.",
"For instance, it is natural to ask if we can further improve the poly-encoder by using m > 1 query vectors.",
"We explore these questions in experiments.",
"We discuss related work to better contextualize our contributions.",
"There is a body of work on developing unbiased estimators of the population distribution by modifying NCE.",
"The modifications include learning the normalization term as a model parameter (Gutmann and Hyvrinen, 2012; Mnih and Teh, 2012) and using a bias-corrected score function (Ma and Collins, 2018).",
"However, they assume unconditional negative distributions and do not explain the benefit of hard negatives in NCE (Gillick et al., 2019; Wu et al., 2020; Karpukhin et al., 2020; Fvry et al., 2020).",
"In contrast, we directly consider the hard-negative NCE loss used in practice (5), and justify it as a biased estimator of the gradient of the cross-entropy loss.",
"Our work is closely related to prior works on estimating the gradient of the cross-entropy loss, again by modifying NCE.",
"They assume the following loss (Bengio and Sencal, 2008), which we will denote by JPRIOR ( ) : E ( x,y 1 ) pop y 2: K ( | x,y 1 ) K (cid:34) log exp ( s ( x, y 1 , y 1 )) (cid:80) Kk =1 exp ( s ( x, y 1 , y k )) (cid:35) (10) Here, ( | x, y 1 ) is a conditional distribution over Y\\ { y 1 } , and s ( x, y (cid:48) , y ) is equal to s ( x, y ) if y = y (cid:48) and s ( x, y ) log(( K 1) ( y | x, y 1 )) otherwise.",
"It can be shown that JPRIOR ( ) = JCE ( ) iff ( y | x, y 1 ) exp( s ( x, y )) for all y Y\\ { y 1 } (Blanc and Rendle, 2018).",
"However, (10) requires adjusting the score function and iid negative examples, thus less aligned with practice than (5).",
"The bias analysis of JPRIOR ( ) for general ( | x, y 1 ) is also significantly more complicated than Theorem 3.1 (Rawat et al., 2019).",
"There is a great deal of recent work on unsupervised contrastive learning of image embeddings in computer vision (Oord et al., 2018; Hjelm et al., 2019; Chen et al., 2020, inter alia ).",
"Here, s ( x, y ) = E ( x ) (cid:62) F ( y ) is a similarity score between images, and E or F is used to produce useful image representations for downstream tasks.",
"The model is again learned by (4) where ( x, y 1 ) are two random corruptions of the same image and y 2: K are different images.",
"Robinson et al. (2021) propose a hard negative distribution in this setting and analyze the behavior of learned embeddings under that distribution.",
"In contrast, our setting is large-scale supervised classification, such as entity linking, and our analysis is concerned with NCE with general hard negative distributions.",
"In a recent work, Xiong et al. (2021) consider contrastive learning for text retrieval with hard negatives obtained globally from the whole data with asynchronous updates, as we do in our experiments.",
"They use the framework of importance sampling to argue that hard negatives yield gradients with larger norm, thus smaller variance and faster convergence.",
"However, their argument does not imply our theorems.",
"They also assume a pairwise loss, excluding non-pairwise losses such as (4).",
"We now study empirical aspects of the hard-negative NCE (Section 3) and the spectrum of score functions (Section 4).",
"Our main testbed is Zeshel (Logeswaran et al., 2019), a challenging dataset for zero-shot entity linking.",
"We also present complementary experiments on AIDA CoNLL-YAGO (Hoffart et al., 2011).",
"2 6.1 Task Zeshel contains 16 domains (fictional worlds like Star Wars ) partitioned to 8 training and 4 validation and test domains.",
"Each domain has tens of thousands of entities along with their textual descriptions, which contain references to other entities in the domain and double as labeled mentions.",
"The input x is a contextual mention and the label y is the description of the referenced entity.",
"A score function s ( x, y ) is learned in the training domains and applied to a new domain for classification and retrieval.",
"Thus the model must read descriptions of unseen entities and still make correct predictions.",
"We follow prior works and report micro-averaged top-64 recall and macro-averaged accuracy for evaluation.",
"The original Zeshel paper (Lo-geswaran et al., 2019) distinguishes normalized vs unnormalized accuracy.",
"Normalized accuracy assumes the presence of an external retriever and considers a mention only if its gold entity is included in top-64 candidates from the retriever.",
"In this case, the problem is reduced to reranking and a computationally expensive joint encoder can be used.",
"Unnormalized accuracy considers all mentions.",
"Our goal is to improve unnormalized accuracy.",
"Logeswaran et al. (2019) use BM25 for retrieval, which upper bounds unnormalized accuracy by its poor recall (first row of Table 1).",
"Wu et al. (2020) propose a two-stage approach in which a dual encoder is trained by hard-negative NCE and held fixed, then a BERT -based joint encoder is trained to rerank the candidates retrieved by the dual encoder.",
"This approach gives considerable improvement in unnormalized accuracy, primarily due to the better recall of a trained dual encoder over BM25 (sec-ond row of Table 1).",
"We show that we can further push the recall by optimizing the choice of hard negatives and architectures.",
"We represent x and y as length128 wordpiece sequences where the leftmost token is the special symbol [CLS]; we mark the boundaries of a mention span in x with special symbols.",
"We use two independent BERT -bases to calculate mention embeddings E ( x ) R 768 128 and entity embeddings F ( y ) R 768 128 , where the columns E t ( x ) , F t ( y ) are contextual embeddings of the t -th tokens.",
"Retriever.",
"The retriever defines s ( x, y ) , the score between a mention x and an entity y , by one of the architectures described in Section 4.2: E 1 ( x ) (cid:62) F 1 ( y ) ( DUAL ) F 1 ( y ) (cid:62) C m ( x, y ) ( POLYm ) max mt =1 E 1 ( x ) (cid:62) F t ( y ) ( MULTIm ) (cid:80) 128 t =1 max 128 t (cid:48) =1 E t ( x ) (cid:62) F t (cid:48) ( y ) ( SOM ) denoting the dual encoder, the poly-encoder (Humeau et al., 2020), the multi-vector encoder (Luan et al., 2020), and the sum-of-max encoder (Khattab and Zaharia, 2020).",
"These architectures are sufficiently efficient to calculate s ( x, y ) for all entities y in training domains for each mention x .",
"This efficiency is necessary for sampling hard negatives during training and retrieving candidates at test time.",
"Reranker.",
"The reranker defines s ( x, y ) = w (cid:62) E 1 ( x, y )+ b where E ( x, y ) RH 256 is BERT (either base H = 768 or large H = 1024 ) embeddings of the concatenation of x and y separated by the special symbol [SEP], and w, b are parameters of a linear layer.",
"We denote this encoder by JOINT .",
"Training a retriever.",
"A retriever is trained by minimizing an empirical estimate of the hard-negative NCE loss (5), (cid:98) JHARD ( ) = 1 NN (cid:88) i =1 log exp ( s ( x i , y i, 1 )) (cid:80) Kk (cid:48) =1 exp (cid:0) s ( x i , y i,k (cid:48) ) (cid:1) (11) where ( x 1 , y 1 , 1 ) . . . ( x N , y N, 1 ) denote N mention-entity pairs in training data, and y i, 2 . . . y i,K h ( | x i , y i, 1 ) are K 1 negative entities for the i th mention.",
"We vary the choice of negatives as follows.",
"Hard: The negatives are sampled from (8) each epoch.",
"That is, in the beginning of each training pass, for each i we sample entities y i, 2 . . . y i,K from Y\\ { y i, 1 } without replacement with probabilities proportional to exp ( s ( x i , y i,k )) .",
"This is slightly different from, and simpler than, the original hard negative mining strategy of Gillick et al. (2019) which pretrains the model using random negatives then greedily adds negative entities that score higher than the gold.",
"Mixedp : p percent of the negatives are hard, the rest are random.",
"Previous works have shown that such a combination of random and hard negatives can be effective.",
"We find the performance is not sensitive to the value of p (Appendix A).",
"We experimented with in-batch sampling as done in previous works (e.g., Gillick et al. (2019)), but found sampling from all training data to be as effective and more straightforward (e.g., the number of random negatives is explicitly unrelated to the batch size).",
"We use K = 64 in all experiments.",
"Training a reranker.",
"We use JOINT only for reranking by minimizing (11) with top-63 negatives given by a fixed retriever, where we vary the choice of retriever.",
"We also investigate other architectures for reranking such as the poly-encoder and the sum-of-max encoder, but we find the full cross attention of JOINT to be indispensable.",
"Details of reranking experiments can be found in Appendix B. Other details.",
"All models are trained up to 4 epochs using Adam.",
"We tune the learning rate over { 5e 5 , 2e 5 , 1e 5 } on validation data.",
"We use the training batch size of 4 mentions for all models Model Negatives Val Test BM 25 76.22 69.13 Wu et al. (2020) Mixed (10 hard) 91.44 82.06 DUAL Random 91.08 81.80 Hard 91.99 84.87 Mixed-50 91.75 84.16 DUAL(10) Hard 91.57 83.08 POLY16 Random 91.05 81.73 Hard 92.08 84.07 Mixed-50 92.18 84.34 MULTI8 Random 91.13 82.44 Hard 92.35 84.94 Mixed-50 92.76 84.11 SOM Random 92.51 87.62 Hard 94.49 88.68 Mixed-50 94.66 89.62 Table 1: Top-64 recalls over different choices of architecture and negative examples for a retriever trained by NCE.",
"except for JOINT , for which we use 2 .",
"Training time is roughly half a day on a single NVIDIA A100 GPU for all models, except the SOM retriever which takes 1-2 days.",
"We conduct experiments on synthetic data to empirically validate our bias analysis in Section 3.1.",
"Analogous experiments on Zeshel with similar find-ings can be found in Appendix C. We construct a population distribution over 1000 labels with small entropy to represent the peaky conditional label distribution pop ( y | x ) .",
"We use a feedforward network with one ReLU layer to estimate this distribution by minimizing the empirical cross-entropy loss based on 128 iid samples per update.",
"At each update, we compute cross-entropy (2) exactly, and estimate NCE (5) with 4 negative samples by Monte Carlo (10 simulations).",
"Figure 1 plots the value of the loss function (left) and the norm of the gradient bias (right) across updates.",
"We first observe that hard NCE yields an accurate estimate of cross entropy even with 4 samples.",
"In contrast, random NCE quickly converges to zero, reflecting the fact that the model can trivially discriminate between the gold and random labels.",
"We next observe that the bias of the gradient of hard NCE vanishes as the model distribution converges to the population distribution, which supports our analysis that the bias becomes smaller as the model improves.",
"The bias remains nonzero for random NCE.",
"Table 1 shows the top-64 recall (i.e., the percentage of mentions whose gold entity is included in the 64 entities with highest scores under a retriever trained by (5)) as we vary architectures and negative examples.",
"We observe that hard and mixed negative examples always yield sizable improvements over random negatives, for all architectures.",
"Our dual encoder substantially outperforms the previous dual encoder recall by Wu et al. (2020), likely due to better optimization such as global vs in-batch random negatives and the proportion of hard negatives.",
"We also train a dual encoder with the bias-corrected loss (10) and find that this does not improve recall.",
"The poly-encoder and the multi-vector models are comparable to but do not improve over the dual encoder.",
"However, the sum-of-max encoder delivers a decisive improvement, especially with hard negatives, pushing the test recall to above 89%.",
"Based on this finding, we use DUAL and SOM for retrieval in later experiments.",
"We show our main results in Table",
"2. Following Wu et al. (2020), we do two-stage training in which we train a DUAL or SOM retriever with hard-negative NCE and train a JOINT reranker to rerank its top-64 candidates.",
"All our models outperform the previous best accuracy of 63.03% by Wu et al. (2020).",
"In fact, our dual encoder retriever using a BERT -base reranker outperforms the dual encoder retriever using a BERT -large reranker (65.42% vs 63.03%).",
"We obtain a clear improvement by switching the retriever from dual encoder to sum-of-max due to its high recall (Table 1).",
"Using a sum-of-max retriever trained with mixed negatives and a BERT large reranker gives the best result 67.14%.",
"To better understand practical implications of hard negative mining, we compare a SOM retriever trained on Zeshel with random vs hard negatives (92.51 vs 94.66 in top-64 validation recall).",
"The Model Accuracy BLINK without finetuning 80.27 BLINK with finetuning 81.54 DUAL with p = 0 82.40 DUAL with p = 50 88.01 MULTI2 with p = 50 88.39 MULTI3 with p = 50 87.94 Table 4: Test accuracies on AIDA CoNLL-YAGO.",
"mention categories most frequently improved are Low Overlap (174 mentions) and Multiple Categories (76 mentions) (see Logeswaran et al. (2019) for the definition of these categories), indicating that hard negative mining makes the model less reliant on string matching.",
"A typical example of improvement is shown in Table",
"3. The random-negative model retrieves person, device, or institution entities because they have more string overlap (e.g. Mehrunes Dagon, Battlespire, and Tharn).",
"In contrast, the hard-negative model appears to better understand that the mention is referring to a chaotic event like the Fall of Ald'ruhn, Sack of Mournhold, and Oblivion Crisis and rely less on string matching.",
"We hypothesize that this happens because string matching is sufficient to make a correct prediction during training if negative examples are random, but insufficient when they are hard.",
"To examine the effect of encoder architecture, we also compare a DUAL vs SOM retriever both trained with mixed negatives (91.75 vs 94.66 in top-64 validation recall).",
"The mention categories most frequently improved are again Low Overlap (335 mentions) and Multiple Categories (41 mentions).",
"This indicates that cross attention likewise helps the model less dependent on simple string matching, presumably by allowing for a more expressive class of score functions.",
"We complement our results on Zeshel with additional experiments on AIDA.",
"We use BLINK , a Wikipedia-pretrained two-stage model (a dual encoder retriever pipelined with a joint reranker, both based on BERT ) made available by Wu et al. (2020).",
"3 We extract the dual encoder module from BLINK and finetune it on AIDA using the training portion.",
"During finetuning, we use all 5.9 million Wikipedia entities as candidates to be consistent with prior work.",
"Because of the large scale of the knowledge base we do not consider SOM and focus on the MULTIm retriever ( DUAL is a special case with m = 1 ).",
"At test time, all models consider all Wikipedia entities as candidates.",
"For both AIDA and the Wikipedia dump, we use the version prepared by the KILT benchmark (Petroni et al., 2020).",
"Table 4 shows the results.",
"Since Wu et al. (2020) do not report AIDA results, we take the performance of BLINK without and with finetuning from their GitHub repository and the KILT leaderboard.",
"4 We obtain substantially higher accuracy by mixed-negative training even without reranking.",
"5 There is no significant improvement from using m > 1 in the multi-vector encoder on this task.",
"Hard negatives can often improve NCE in practice, substantially so for entity linking (Gillick et al., 2019), but are used without justification.",
"We have formalized the role of hard negatives in quantifying the bias of the gradient of the contrastive loss with respect to the gradient of the full cross-entropy loss.",
"By jointly optimizing the choice of hard negatives and architectures, we have obtained new state-of-the-art results on the challenging Zeshel dataset (Logeswaran et al., 2019).",
"This work was supported by the Google Faculty Research Awards Program.",
"We thank Ledell Wu for many clarifications on the BLINK paper."
] | [
"abstain",
"abstain",
"objective",
"result",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"objective",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"other"
] |
[
"Pre-trained language models like BERT are performant in a wide range of natural language tasks.",
"However, they are resource exhaustive and computationally expensive for industrial scenarios.",
"Thus, early exits are adopted at each layer of BERT to perform adaptive computation by predicting easier samples with the first few layers to speed up the inference.",
"In this work, to improve efficiency without performance drop, we propose a novel training scheme called Learned Early Exiting for BERT (LeeBERT).",
"First, we ask each exit to learn from each other, rather than learning only from the last layer.",
"Second, the weights of different loss terms are learned, thus balancing off different objectives.",
"We formulate the optimization of LeeBERT as a bi-level optimization problem, and we propose a novel cross-level optimization (CLO) algorithm to improve the optimization results.",
"Experiments on the GLUE benchmark show that our proposed methods improve the performance of the state-of-the-art (SOTA) early exiting methods for pre-trained models.",
"The last couple of years have witnessed the rise of pre-trained language models (PLMs), such as BERT (Devlin et al., 2018), GPT (Radford et al., 2019), XLNet (Yang et al., 2019), and ALBERT (Lan et al., 2020), etc.",
"By pre-training on the unlabeled corpus and fine-tuning on labeled ones, BERT-like models achieved considerable improvements in many Natural Language Processing (NLP) tasks, such as text classification and natural language inference (NLI), sequence labeling, etc.",
"However, these PLMs suffer from two problems.",
"The first problem is efficiency.",
"The state-of-the-art (SOTAs) achievements of these models usually rely Contact: [email protected].",
"on very deep model architectures accompanied by high computational demands, impairs their practicalities.",
"Like general search engines or online medical consultation services, industrial settings process generally millions of requests per minute.",
"What makes efficiency more critical is that the traf-fic of online services varies drastically with time.",
"For example, during the flu season, the search requests of Dingxiangyuan 1 are ten times more than usual.",
"And the number of claims during the holidays is five to ten times more than that of the workdays for online shopping.",
"Many servers need to be deployed to enable BERT in industrial settings, which is unbearable for many companies.",
"Second, previous literature (Fan et al., 2020; Michel et al., 2019; Zhou et al., 2020) pointed out that large PLMs with dozens of stacked Transformer layers are over-parameterized and could suffer from the overthinking problem (Kaya et al., 2019).",
"That is, for many input samples, their shallow representations at a shallow layer are enough to make a correct classification.",
"In contrast, the final layer's representations may be overfitting or distracted by irrelevant features that do not generalize.",
"The overthinking problem leads to not only poor generalization but also wasted computation.",
"To address these issues, both the industry and academia have devoted themselves to accelerating PLMs at inference time.",
"Standard methods include direct network pruning (Zhu and Gupta, 2018; Xu et al., 2020; Fan et al., 2020; Michel et al., 2019), knowledge distillation (Sun et al., 2019; Sanh et al., 2019; Jiao et al., 2020), weight quantization (Zhang et al., 2020; Bai et al., 2020; Kim et al., 2021) and adaptive inference (Zhou et al., 2020; Xin et al., 2020; Geng et al., 2021; Liu et al., 2020).",
"Among them, adaptive inference has attracted much attention.",
"Given that real-world data is usually com-1 https://search.dxy.cn/ posed of easy samples and difficult samples, adaptive inference aims to deal with simple examples with only a small part of a PLM, thus speeding up inference time on average.",
"The speed-up ratio can be controlled with certain hyper-parameters to cope with drastic changes in request traffic.",
"What's more, it can address the over-thinking problem and improve the model's generalization ability.",
"Early exiting is one of the most crucial adaptive inference methods (Bolukbasi et al., 2017).",
"It implements adaptive inference by installing exits, or intermediate prediction layer, at each layer of BERT and exiting easy samples at exits of the shallow layers to speed up inference (Figure 1).",
"Strategies for early exiting are designed (Teerapit-tayanon et al., 2016; Kaya et al., 2019; Xin et al., 2020; Zhou et al., 2020), which decides when to exit given the current obtained predictions (from previous and current layers).",
"Early exiting architectures' training procedure is essentially a multi-objective problem since each exit is trying to improve its performance.",
"Different objectives from different classifiers may conflict and interfere with one-another (Phuong and Lampert, 2019; Yu et al., 2020).",
"Thus they incorporate distillation loss to improve the training procedure by encouraging early exits to mimic the output distributions of the last exit.",
"The motivation is that the last exit has the maximum network capacity and should be more accurate than the earlier exits.",
"In their work, only the last exit can act as a teacher exit.",
"Besides, the multiple objectives are uniformly weighted.",
"In this work, we propose a novel training mechanism called Learned Early Exiting for BERT (Lee-BERT).",
"Our contributions are three folded.",
"First, instead of learning from the last exit, LeeBERT asks each exit to learn from each other.",
"The motivation is that different layers extract features of varying granularity.",
"Thus they have different perspectives of the sentence.",
"Distilling knowledge from each other improves the expressiveness of lower exits and alleviates the overfittng of the later exits.",
"Second, to achieve the optimal trade-offs between different loss terms, their weights are treated as parameters and are learned along with model parameters.",
"The optimization of the learnable weights and model parameters is formulated as a bi-level optimization problem, optimized with gradient descent.",
"Built upon previous literature (Liu et al., 2019), we propose a novel cross-level optimization (CLO) algorithm to solve the bilevel optimization better.",
"Extensive experiments are conducted on the GLUE benchmark (Wang et al., 2018), and show that LeeBERT outperforms existing SOTA BERT early exiting methods, sometimes by a large margin.",
"Ablation study shows that: (1) knowledge distillation among all the exits can improve their performances, especially for the shallow ones; (2) our novel CLO algorithm is useful in learning more suitable weights and brings performance gains.",
"Our contributions are integrated into our LeeBERT framework, which can be summarized as follows: We propose a novel training method for early exiting PLMs to ask each exit to learn from each other.",
"We propose to find the optimal trade-off of different loss terms by assigning learnable weights.",
"We propose a novel cross-level optimization (CLO) algorithm to learn the loss term weights better.",
"In this section, we introduce the necessary background for BERT early exiting.",
"Throughout this work, we consider the case of multi-class classification with samples { ( x n , y n ) , x n X , y n Y , i = 1 , 2 , ..., N } , e.g., sentences, and the number of classes is K .",
"In this work, we adopt BERT and ALBERT as backbone models.",
"BERT is a multi-layer Transformer (Vaswani et al., 2017) network, which is pre-trained in a self-supervised manner on a large corpus.",
"ALBERT is more lightweight than BERT since it shares parameters across different layers, and the embedding matrix is factorized.",
"As depicted in Figure 1, early exiting architectures are networks with exits at different transformer layers.",
"With M exits, M classifiers p m : X K ( m = 1 , 2 , ..., M ) are designated at M layers of BERT, each of which maps its input to the probability simplex K , i.e., the set of probability distributions over the K classes.",
"Previous literature (Phuong and Lampert, 2019; Liu et al., 2020) think Figure 1: The training procedure of LeeBERT, which differs from the previous literature in two aspects.",
"of p 1 , ..., p M as being ordered from least to most expressive.",
"However, in terms of generalization ability, due to the over-thinking problem, later layers may not be superior to shallow layers.",
"In principle, the classifiers may or may not share weights and computation, but in the most interesting and practically useful case, they share both.",
"There are mainly three early exiting strategies for BERT early exiting.",
"BranchyNet (Teerapittayanon et al., 2016), FastBERT (Liu et al., 2020) and DeeBERT (Xin et al., 2020) calculated the entropy of the prediction probability distribution as a proxy for the confidence of exiting classifiers to enable early exiting.",
"Shallow-Deep Nets (Kaya et al., 2019) and RightTool (Schwartz et al., 2020) leveraged the softmax scores of predictions of exiting classifiers, that is, if the score of a particular class is dominant and large enough, the model will exit.",
"Recently, PABEE (Zhou et al., 2020) propose a patience based exiting strategy analogous to early stopping model training, that is, if the exits' predictions remain unchanged for a pre-defined number of times (patience), the model will stop inference and exit.",
"PABEE achieves SOTAs results for BERT early exiting.",
"In this work, we mainly adopt the PABEE's patience based early exiting strategy.",
"However, in ablation studies, we will show that our LeeBERT framework can improve the inference performance of other exiting strategies.",
"In this section, we introduce the proposed LeeBERT framework.",
"First, we present our distillation based loss design, and then we elaborate on how to optimize with learnable weights.",
"Our main contribution is a novel training mechanism for BERT early exiting, which extends Liu et al. (2020) and Phuong and Lampert (2019) via mutual distillation and learned weights.",
"When receiving an input sample ( x n , y n ) , each exit will calculate the cross-entropy loss based on its predicted, and all the exits are simultaneously optimized with a summed loss, i.e.,",
"Note that the above objective directly assumes uniform weights for all M loss terms.",
"To introduce our contribution, we first remind the reader of the classical distillation framework as introduced in Hinton et al. (2015): assume we want a probabilistic classifier s (student) to learn from an-other classifier t (teacher).",
"This can be achieved by minimizing the (temperature-scaled) cross-entropy between their prediction distributions, LKD ( t, s ) = 2 K (cid:88) k =1 [ t 1 / ( x n )] k log[[ s 1 / ( x n )] k ] , (2) where R + is the distillation temperature, and [ t 1 / ( x )] k = t k ( x ) 1 / (cid:80) Kk (cid:48) =1 t k (cid:48) ( x ) 1 / , (3) is the distribution obtained from the distribution t ( x ) by temperature-scaling, and [ t 1 / ( x )] k is de-fined analogously.",
"The temperature parameter allows controlling the softness of the teachers' predictions: the higher the temperature, the more suppressed is the difference between the largest and the smallest value of the probability vector.",
"The temperature scaling allows compensating for the over-confidence of the network's outputs, i.e., they put too much probability mass on the top predicted class and too little on the others.",
"The factor 2 in Eq 2 ensures that the temperature scaling does not negatively affect the gradient magnitude.",
"Returning to the early exiting architecture, we follow the same strategy as classical distillation but use exits of different layers both as students and teachers.",
"For any exit m , let T ( m ) 1 , ..., M (which could be empty) be the set of teacher exits it is meant to learn from.",
"Then we define the overall distillation loss as LKD ( x n ) = M (cid:88) m =1 (cid:88) t T ( m ) LKD ( p t ( x n ) , p m ( x n )) M |T ( m ) | .",
"Previous work (Phuong and Lampert, 2019; Liu et al., 2020) considers using only the last exit as as the teacher and all exits learn from it.",
"The usual belief is that deeper exits have more network capacity and more accurate than the early exits.",
"However, the over-thinking phenomenon reveals that later exits may not be superior to earlier ones.",
"The more shallow exit may provide different perspectives in semantic understanding of the input sentences.",
"Thus, to fully learn from available information, later exits can benefit from learning from early exits.",
"With this motivation, we consider two settings: Learn from Later Exits (LLE) .",
"In this setting, early exits learn from all its later exits.",
"Learn from All Exits (LAE) .",
"In this setting, an exit learns from all other exits.",
"Previous work considers uniform weights for the distillation loss terms or classification loss term, which does not effectively take the trade-off among multiple objectives.",
"First, from the perspective of knowledge distillation, intuitively, later exits should place little weights on the very early exits since they have less to offer.",
"And all exits should place higher importance on exits that are performant and not overfitting.",
"Second, different loss objectives are usually competing, which may hurt the final results.",
"To address these issues, we propose to assign a set of learnable weights to our loss objective, which are updated via gradient descent along with the model parameters.",
"We give weight w i for each classification loss term and w m,t for the distillation loss term coming from exit m learning from exit t , and the overall loss objective becomes L ( x n , y n ) = M (cid:88) m =1 w i LCE ( p m ( x n ) , y n ) + M (cid:88) m =1 (cid:88) t T ( m ) w m,t LKD ( p t ( x n ) , p m ( x n )) M |T ( m ) | .",
"Assume we have two datasets D 1 and D 2 , which usually are both subsets of the training set D tr .",
"D 1 can be equal to D 2 .",
"For a given set of = { w i , w m,t } , the optimal solution () of network parameters are derived from D 1 , and the optimal are determined on D 2 .",
"We denote the loss on dataset D as LD ( , ) , a function of two sets of parameters for convenience.",
"Then the optimization problem becomes min LD 2 ( () , ) , s.t., () = arg min LD 1 ( , ) (6) Though the above bi-level optimization can accurately describe our problem, it is generally difficult to solve.",
"One heuristic simplification of the above equation is to let D 1 = D 2 = D tr , and the optimization problem in Eq 16 reduces to the single-level optimization (SLO), min , LD tr ( , ) , (7) which can be solved directly by stochastic gradient descent.",
"This reduced formulation treats the learnable weights just as a part of the model parameters.",
"Despite its efficiency, compared with , the number of parameters in is almost neglectable.",
"Thus optimization will need to fit well for gradient descent, resulting in inadequate solutions of .",
"The most widely adopted optimization algorithm for Eq 16 is the bi-level optimization (BLO) algorithm Liu et al. (2019), which asks D 1 and D 2 to be a random split of D tr .",
"2 And the gradient descent is done following: = 1 LD 1 , = 2 LD 2 .",
"that is, updating the parameters in an interleaving fashion: one-step gradient descent of on D 1 followed by one step gradient descent of on D 2 .",
"Note that ( ) in Eq 16 is not satisfied in BLO due to first-order approximation, leading gradient updates of into wrong directions, collapsing the bi-level optimization.",
"We now propose our cross-level optimization algorithm.",
"The gradient descent updating of and follows = 1 LD 1 , = 1 LD 1 2 LD 2 .",
"The above equation is the core of our CLO algorithm, which we will refer to as CLO-v1, which are derived and demonstrated in detail in the Appendix.",
"We can see that our cross-level optimization's core idea is to draw gradient information from both splits of the training set, thus making the updating of more reliable.",
"Note that updating requires its gradients on both the D 1 set and D 2 set.",
"Thus its computation complexity is higher than the BLO algorithm.",
"We propose a more efficient version of cross-level optimization (CLO-v2), which can also be found in the Appendix.",
"We divide the training procedure into 2 Note that on each epoch start, the split of D tr can be re-generated.",
"groups, each group containing C steps, is updated solely on the training set for C 1 steps, and updated following Eq 9 for the remaining one step.",
"We will call the hyper-parameter C as the cross-level cycle length.",
"CLO-v2 is more efficient than CLO-v1, and our experiments show that CLO-v2 works well and is comparable with CLO-v1.",
"We evaluate our proposed approach to the classification tasks on GLUE benchmark.",
"We only exclude the STS-B task since it is a regression task, and we exclude the WNLI task following previous work (Devlin et al., 2018; Jiao et al., 2020; Xu et al., 2020).",
"Backbone models .",
"All of the experiments are built upon the Google BERT, ALBERT.",
"We ensure fair comparison by setting the hyper-parameters related to the PLM backbones the same with HuggingFace Transformers (Wolf et al., 2020).",
"We compare with the previous BERT early exiting methods and compare other methods that speed up BERT inference.",
"Directly reducing layers .",
"We experiment with directly utilizing the first 6 and 9 layers of the original (AL)BERT with a single output layer on the top, denoted by (AL)BERT-6L and (AL)BERT-9L, respectively.",
"These two baselines serve as a lower bound for performance metrics since it does not employ any technique.",
"Static model compression approaches .",
"For model parameter pruning, we include the results of LayerDrop (Fan et al., 2020) and attention head pruning (Michel et al., 2019) on ALBERT.",
"For knowledge distillation, we include DistillBERT (Sanh et al., 2019), BERT-PKD (Sun et al., 2019).",
"3 For module replacing, we include BERT-of-Theseus (Xu et al., 2020).",
"Input-adaptive inference .",
"This category includes entropy-based method DeeBERT, score-based method Shallow-deep, and patience-based exiting method PABEE as our baselines.",
"We also 3 Note that the two methods consider knowledge distillation on the fine-tuning stage, whereas TinyBERT (Jiao et al., 2020) and Turc et al. (2019) investigate knowledge distillation during both the pre-training stage and fine-tuning stage.",
"We implement LeeBERT on the base of Hugging-Face's Transformers.",
"We conduct our experiments on a single Nvidia V100 16GB GPU.",
"Training .",
"We add a linear output layer after each intermediate layer of the pre-trained BERT/ALBERT model as the internal classifier.",
"The hyperparameter tuning is done in a cross-validation fashion on the training set so that the dev set information of GLUE tasks are not revealed.",
"We perform grid search over batch sizes of 16, 32, 128, and learning rates of { 1e-5, 2e-5, 3e-5, 5e-5 } for model parameters , and learning rates of { 1e-5, 1e-4, 1e-3, 5e-3 } for learnable weights .",
"The cross-level cycle length C will be selected from 2, 4, 8.",
"We will adopt the Adam optimizer.",
"At each epoch, the training set is randomly split into D 1 and D 2 with a ratio 5 : 5 .",
"We apply an early stopping mechanism with patience 5 and evaluate the model on dev set at each epoch end.",
"And we define the dev performance of our early exiting architecture as the average performance of all the exits.",
"We will select the model with the best average performance in cross validation.",
"We set CLO-v2 as the main optimization algorithm of LeeBERT, and LAE as the main distillation strategy.",
"4 To demonstrate LeeBERT's ditilla-tion objectives are beneficial, we train LeeBERT with the LLE strategy (LeeBERT-LLE).",
"We also let the loss term weights in FastBERT to be learnable and train with our CLO-v2 algorithm, i.e., FastBERT-CLO-v2.",
"To compare our LeeBERT's CLO optimization procedure with baselines, we also train LeeBERT with (1) single level algorithm (LeeBERT-SLO); (2) bi-level algorithm (LeeBERT-BLO).",
"To compare CLO-v1 and CLO-v2, we also train the LeeBERT with CLO-v1, i.e., LeeBERT-CLO-v1.",
"Besides, we also include LeeBERT with randomly assigned discrete weights (LeeBERT-rand) and uniform weights (LeeBERT-uniform) as baselines, which will serve to demonstrate that our optimization procedure is beneficial.",
"The discrete weights 4 Henceforth, unless otherwise specified, our LeeBERT method will be the one with LAE and CLO-v2.",
"are randomly selected from { 1, 2, ..., 50 } , and are normalized so that the loss terms at each exit have weights summed to 1. Inference .",
"Following prior work, inference with early exiting is on a per-instance basis, i.e., the batch size for inference is set to 1. We believe this setting mimics the common latency-sensitive production scenario when processing individual requests from different users.",
"We report the mean performance over 5 runs with different random seeds.",
"For DeeBERT and Shallow-deep, we set the threshold for entropy or score, such that the speedup ratio is between 1.80x to 2.1x.",
"For FastBERT and our LeeBERT, we mainly adopt the PABEE's patience based exiting strategy, and we compare the results when the patience is set at 4.",
"How the patience parameter affects the inference efficiency is also investigated for PABEE, FastBERT, and LeeBERT.",
"Table 1 reports the main results on GLUE with ALBERT as the backbone model.",
"ALBERT is parameter and memory-efficient due to its cross-layer parameter sharing strategy, however, it still has high inference latency.",
"From Table 1 we can see that our approach outperforms all compared methods to improve inference efficiency while maintaining good performances, demonstrating the proposed LeeBERT framework's effectiveness.",
"Note that our system can effectively enhance the original ALBERT and PABEE by a relatively large margin when speeding-up inference by 1.97x.",
"We also conduct experiments on the BERT backbone with the MNLI, MRPC, and SST-2 tasks, which can be found in the Appendix.",
"To give more insight into how early exits perform under different efficiency settings, we illustrate how the patience parameter affect the average number of inference layers (which is directly related to speed-up ratios) (Fig-ure 2), and prediction performances (Figure 3).",
"We also show that one can easily apply our LeeBERT framework to image classification tasks in the Appendix.",
"We now analyze more deeply the main take-aways from Table 1 and our experiments.",
"Our LeeBERT can speed up inference.",
"Figure 2 shows that on the MRPC task, with the same patience parameter, LeeBERT usually goes through fewer layers (on average) than PABEE and Fast-Figure 2: The curve of patience vs. avg inference layers for PABEE, FastBERT and LeeBERT.",
"The task is MRPC.",
"BERT, showing the LeeBERT can improve the efficiency of PLMs' early exiting.",
"Our knowledge distillation strategies are ben-eficial.",
"Table 1 reveals that our LAE setting provides the best overall performances on GLUE in terms of distillation strategies.",
"LeeBERT outperforms FastBERT-CLO-v2 on all tasks and exceeds LeeBERT-LLE on 6 of the seven tasks, and the scores on QNLI the results are comparable.",
"This result proves that exits learning from each other are generally beneficial.",
"Our CLO algorithm brings performance gains.",
"As a sanity check, LeeBERT-rand performs worse than all optimized LeeBERT models.",
"Table 1 also shows that the SLO and BLO algorithms perform worse than our CLO.",
"And we can see that CLO-v1 and CLO-v2 have comparable results.",
"CLO-v1 seems to have slight advantages on tasks with few samples, but the performance gaps seem to be marginal.",
"Since CLO-v2 is more efficient, we will use CLO-v2 as our main optimization algorithm.",
"The patience-score curves are different for different PLMs.",
"Figures",
"3(a) and",
"3(b) show that differnt PLMs have quite different patience-score curves.",
"For ALBERT, early exiting with PABEE's strategy can improve upon the ALBERT-base fine-tuning, and the best performance is obtained with patience 6.",
"With patience 6, the average number of inference layers is 8.11.",
"This phenomenon shows that ALBERT base may suffer from the overthinking problem.",
"With the help of our distillation strategy and CLO optimization, the performance gain is considerable.",
"Note that:",
"(a) Without",
"distilla-(a) ALBERT backbone",
"tion, shallow exits' performances are significantly worse, and our distillation can help these exits to improve;",
"(b) with LeeBERT, the performances of the later exits are comparable to the earlier ones, since the over-thinking problem is alleviated by distillation.",
"However, the patience-score curve for BERT is quite monotonic, suggesting that overthinking problem is less severe.",
"Note that BERT's shallow exits are significantly worse than that of ALBERT, and with LeeBERT, the shallow exits' performances are improved.",
"Training time costs .",
"Table 2 presents the parameter numbers and time costs of training for LeeBERT compared with the original (AL)BERT, and PABEE, FastBERT.",
"We can see that although exits need extra time for training, early exiting architectures actually can reduce the training time.",
"Intuitively, additional loss objectives can be regarded as additional parameter updating steps for lower layers, thus speeding up the model convergence.",
"LeeBERT-CLO-v1 requires a longer time for training.",
"Notably, our LeeBERT's time costs are comparable with PABEE and FastBERT, even though it has more complicated gradient updating steps.",
"Working with different exiting strategies .",
"Recall that our results are mainly obtained by adopting the PABEE's patience based exiting strategies.",
"However, our LeeBERT framework is quite off-the-shelf, and can be integrated with many other exiting strategies.",
"Our framework can work under different exiting strategies.",
"5 When using entropy-based strategy, LeeBERT outperforms DeeBERT 5 Due to length limitation, we will leave the detailed results of this ablation study in the Appendix.",
"In this work, we propose a new framework for improving PLMs' early exiting.",
"Our main contributions lie in two aspects.",
"First, we argue that exits should learn and distill knowledge from each other during training.",
"Second, we propose that early exiting networks' training objectives be weighted differently, where the weights are learnable.",
"The learnable weights are optimized with the cross-level optimization we propose.",
"Experiments on the GLUE benchmark datasets show that our framework can improve PLMs' early exiting performances, especially under high latency requirements.",
"Our framework is easy to implement and can be adapted to various early exiting strategies.",
"We want to explore novel exiting strategies that better guarantee exiting performances in the future."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective",
"objective",
"objective",
"result",
"method",
"objective"
] |
[
"Language technologies contribute to promoting multilingualism and linguistic diversity around the world.",
"However, only a very small number of the over 7000 languages of the world are represented in the rapidly evolving language technologies and applications.",
"In this paper we look at the relation between the types of languages, resources, and their representation in NLP conferences to understand the trajectory that different languages have followed over time.",
"Our quantitative investigation underlines the disparity between languages, especially in terms of their resources, and calls into question the language agnostic status of current models and systems.",
"Through this paper, we attempt to convince the ACL community to prioritise the resolution of the predicaments highlighted here, so that no language is left behind.",
"Languages X and Y are the official languages of two different countries; they have around 29M and 18M native speakers, and 2M and 5.5K Wikipedia articles, respectively.",
"X is syntactically quite similar to English, though uses dimunitives and has grammatical gender.",
"Y , on the other hand, has a different word order from English, and has a rare typological feature generally it is a head-final language, but noun phrases are head-initial.",
"It also features full and partial reduplication.",
"69 items on LDC and ELRA contain data in X , whereas for Y there are only 2 items.",
"X boasts of some of the best online machine translation systems, whereas Y is supported by very few online MT systems and that too with far inferior translation quality.",
"Figure 1 shows the number of papers in conferences ( ACL , NAACL , EACL , EMNLP , LREC , WS ) that Authors contributed equally to the work.",
"mention X and Y in the paper, across the years.",
"As you can see, while X has a steady and growing trend of research, our community has been mostly oblivious to Y , until recently when some of the zero-shot learning papers have started mentioning it.",
"Can you guess what X and Y are?",
"Regardless of whether you can guess the exact answer, most NLP researchers surely know of (and might even speak) several languages which are in the same boat as X ; languages which have a large amount of resources and therefore access to the benefits of the current NLP breakthroughs, and languages like Y ; those which lack resources and consequently the attention of the NLP community, despite having similar speaker base sizes and typologically diverse features.",
"You probably have come across the issue of extremely skewed distribution of resources across the world's languages before.",
"You might also be aware of the fact that most of our NLP systems, which are typically declared language agnostic, are not truly so (Bender, 2011).",
"The handful of languages on which NLP systems are trained and tested are often related and from the same geography, drawn from a few dominant language families, leading to a typological echo-chamber.",
"As a result, a vast majority of typologically diverse linguistic phenomena are never seen by our NLP systems (Ponti et al., 2019).",
"Nevertheless, it would be prudent to re-examine these issues in the light of recent advances in deep learning.",
"Neural systems, on one hand, require a lot more data for training than rule-based or traditional ML systems, creating a bigger technological divide between the X s and Y s; yet, some of the most recent techniques on zero-shot learning of massively multilingual systems (Devlin et al., 2019; Conneau and Lample, 2019; Aharoni et al., 2019; Artetxe and Schwenk, 2019) bridge this gap by obliterating the need for large labeled datasets in all languages.",
"Instead, they need only large unlabeled corpora across languages and labeled data in only some languages.",
"Assuming that this approach can be taken to its promising end, how does the fate of different languages change?",
"We break down this complex prescient question into the following more tractable and quantifiable questions on Linguistic Diversity and Inclusion : 1. How many resources, labeled and unlabeled, are available across the World's languages?",
"4. Does the amount of resource available in a language influence the research questions and the venue of publication?",
"If so, how?",
"How does this distribution correlate to their number of native speakers?",
"What can we expect to achieve today and in the near future for these languages?",
"2. Which typological features have current NLP systems been exposed to, and which typological features mostly remain unexplored by systems because we have hardly created any resources and conducted data-driven research in those languages?",
"3. As a community, how inclusive has ACL been in conducting and publishing research on various languages?",
"In 1980s and early 90s, when large scale datasets were not the prime drivers of research, was the linguistic diversity of ACL higher than what it has been in 2000s and 2010s?",
"Or has ACL become more inclusive and diverse over the years?",
"5. What role does an individual researcher, or a research community have to play in bridging the linguistic-resource divide?",
"In this paper, we take a multi-pronged quantitative approach to study and answer the aforementioned questions, presented in order, in the following five sections.",
"One of the key findings of our study, to spill the beans a bit, is that the languages of the World can be broadly classified into 6 classes based on how much and what kind of resources they have; the languages in each class have followed a distinct and different trajectory in the history of ACL , and some of the hitherto neglected classes of languages have more hope of coming to the forefront of NLP technology with the promised potential of zero-shot learning.",
"In order to summarize the digital status and rich-ness' of languages in the context of data availability, we propose a taxonomy based on the number of language resources which exist for different languages.",
"We frame the rest of our analyses based on this taxonomy and use it to emphasize the existence of such resource disparities.",
"We design this taxonomy using two feature axes: number of unlabeled resources vs. number of labeled resources.",
"Previous methods have mostly relied on supervised learning techniques which require labeled corpora.",
"However, the advent of transfer learning methods have boosted the importance of unlabeled data: massively multilingual models such as mBERT use Wikipedia for pre-training, and then fine-tune on downstream NLP tasks.",
"These features are suitable because the current NLP research is predominantly data-driven, and language inclusion depends on how much labeled or unlabeled data is available.",
"We believe these features are sufficient for the taxonomical design as the required metadata is consistently available across all languages, whereas features such as number of hours required to collect data aren't available.",
"We treat each data resource as a fundamental unit, based on the assumption that the collection of one unit is proportional to a certain extent of effort being invested towards the resource improvement of that language.",
"Moreover, this feature dis-cretization is unambiguous and concrete.",
"Other units such as the total number of datapoints across datasets can be misleading because different NLP tasks have different data requirements.",
"For example, while Machine Translation (MT) models require datapoints to the order of millions (Koehn and Knowles, 2017) to perform competitively, competent models in Question Answering require around 100 thousand datapoints (Rajpurkar et al., 2016).",
"Moreover, the unit of datapoints vary across different technologies (e.g. Speech data measured in hours, MT data measured in number of parallel sentences).",
"We focus our attention on the LDC catalog 1 and the ELRA Map 2 for labeled datasets.",
"Although there are other repositories of data available online, we found it practical to treat these organized collections as a representation of labeled dataset availability.",
"This way, we look at standardized datasets that have established data quality and consistency, and which have been used in prior work.",
"There are strong efforts such as PanLex (Kamholz et al., 2014), which is a large lexical database of a wide range of languages being used for a lexical translator, and OLAC (Simons and Bird, 2003), which contains a range of information for different languages (e.g. text collections, audio recordings, and dictionaries).",
"However, keeping within the purview of NLP datasets used in *CL conferences, we decided to focus on popular repositories such as the above-mentioned.",
"We look at Wikipedia pages as a measure for unlabeled data resources.",
"With regards to language technologies, Wikipedia pages represent a strong source of unsupervised training data which are freely and easily accessible.",
"In the perspective of digital resource availability, they are a comprehensive source of factual information and are accessed by a large, diverse set of online users.",
"Figure 2 is a visualization of the taxonomy.",
"We find a set of distinct partitions which can be used 1 https://catalog.ldc.upenn.edu/ 2 http://catalog.elra.info/en-us/ to categorize languages into 6 unique positions in the language resource race': 0 The Left-Behinds These languages have been and are still ignored in the aspect of language technologies.",
"With exceptionally limited resources, it will be a monumentous, probably impossible effort to lift them up in the digital space.",
"Unsupervised pre-training methods only make the poor poorer', since there is virtually no unlabeled data to use.",
"1 The Scraping-Bys With some amount of unlabeled data, there is a possibility that they could be in a better position in the race' in a matter of years.",
"However, this task will take a solid, organized movement that increases awareness about these languages, and also sparks a strong effort to collect labelled datasets for them, seeing as they have almost none.",
"2 The Hopefuls With light at the end of the tunnel, these languages still fight on with their gasping breath.",
"A small set of labeled datasets has been collected for these languages, meaning that there are researchers and language support communities which strive to keep them alive in the digital world.",
"Promising NLP tools can be created for these languages a few years down the line.",
"3 The Rising Stars Unsupervised pre-training has been an energy boost for these languages.",
"With a strong web presence, there is a thriving cultural community online for them.",
"However, they have been let down by insufficient efforts in labeled data collection.",
"With the right steps, these languages can be very well off if they continue to ride the pre-training' wave.",
"4 The Underdogs Powerful and capable, these languages pack serious amounts of resource fire-power'.",
"They have a large amount of unlabeled data, comparable to those possessed by the winners, and are only challenged by lesser amount of labeled data.",
"With dedicated NLP communities conducting research on these languages, they have the potential to become winners and enjoy the fruits of digital superiority'.",
"5 The Winners Running strong and fast, these languages have been in the lead for quite a while now, some longer than others.",
"With a dominant online presence, there have been massive industrial and government investments in the development of resources and technologies for these languages.",
"They are the quintessential rich-resource Class 5 Example Languages #Langs #Speakers % of Total Langs 0 Dahalo , Warlpiri , Popoloca , Wallisian , Bora 2191 1.2B 88.38% 1 Cherokee , Fijian , Greenlandic , Bhojpuri , Navajo 222 30M 5.49% 2 Zulu , Konkani , Lao , Maltese , Irish 19 5.7M 0.36% 3 Indonesian , Ukranian , Cebuano , Afrikaans , Hebrew 28 1.8B 4.42% 4 Russian , Hungarian , Vietnamese , Dutch , Korean 18 2.2B 1.07% 5 English , Spanish , German , Japanese , French 7 2.5B 0.28% Table 1: Number of languages, number of speakers, and percentage of total languages for each language class.",
"languages, reaping benefit from each state-of-the-art NLP breakthrough.",
"Some more information about the taxonomy is shown in Table 1. We also take 10 languages, and annotate their positions in Figure 3. 2.4 Findings On your marks As can be seen in Figure 3, the Winners take pole position in all rankings, and Class 0 languages remain out of the race' with no representation in any resource.",
"The Wikipedia distribution seems to be more fair for classes 1, 2, and 3 when compared to classes 4 and 5, whereas the Web distribution has a clear disparity.",
"Talk ain't cheap Looking at Table 1, we see that Class 0 contains the largest section of languages and represents 15% of all speakers across classes.",
"Although there is a large chunk of speakers which converse with Class 5 languages, the lack of technological inclusion for different languages could draw native speakers away from Class 0 languages and towards Class 5, exacerbating the disparity.",
"Linguistic typology is a field which involves the classification of languages based on their structural and semantic properties.",
"Large-scale efforts have led to the creation of a database of typological features (Dryer and Haspelmath, 2013).",
"Such documentation becomes important as there are barely any other classifications of similar scale.",
"In the context of NLP research, there has been work indicating the effectiveness of injecting typological information to guide the design of models (Ponti et al., 2019).",
"Also, transfer learning of resource-rich to resource-poor languages have been shown to work better if the respective languages contain similar typological features (Pires et al., 2019).",
"We look at how skewed language resource availability leads to an under-representation of certain typological features, which may in turn cause zero-shot inference models to fail on NLP tasks for certain languages.",
"We look at the WALS data (Dryer and Haspel-math, 2013), which contains typological features for 2679 languages.",
"There are a total of 192 typological features, with an average of 5.93 categories per feature.",
"We take the languages in classes 0, 1, 2, all of which have limited or no data resources as compared to 3, 4, 5 and look at how many categories, across all features, exist in classes 0, 1, 2 but not 3, 4, 5. This comes to a total of 549 out of 1139 unique categories, with an average of 2.86 categories per feature being ignored.",
"Typological features with the most and least ignored' categories are shown in Table 2. To get an idea of what these typological exclu-Feature #Cat #Lang 144E 23 38 144M 23 45 144F 22 48 144O 21 30 Feature #Cat #Lang 83A 0 1321 82A 0 1302 97A 0 1146 86A 0 1083 Table 2: Most and least ignored' typological features, the number of categories in each feature which have been ignored, and the number of languages which contain this feature.",
"sions' mean in the context of modern multilingual methods, we look at the specific languages which contain these excluded' categories in the respective features, and compare their performances in similarity search, from the results of Artetxe and Schwenk (2019).",
"Table 3 shows some examples of how ignored' features have been difficult to deal with even when jointly training of all languages.",
"Far-reaching repercussions The most ignored' feature in Table 2, 144E (Multiple Negative Constructions in SVO Languages), is a rare feature, existing in only 38 languages over the world.",
"These languages, however, are from various regions (e.g. Wolof , Icelandic , and Kilivila ).",
"Language tools in all these areas can be adversely affected without sufficient typological representation.",
"On the other hand, common features such as 83A (Or-der of Object and Verb) are well represented with definite feature values for 1321 languages, ranging from English to Mundari .",
"Does it run in the family?",
"Amharic , in Table 3, which among the Semitic family of languages, is the second most spoken language after Arabic (which has 300M speakers).",
"However, it has 9 ignored' typological features, whereas Arabic has none.",
"This reflects in the error rate of English to Amharic (60.71), which is significantly worse compared to 7.8 for English to Arabic .",
"NLP conferences have a huge impact on how language resources and technologies are constructed.",
"Exciting research in venues such as ACL , EMNLP , LREC have the ability to turn heads in both industry and government and have the potential to attract funds to a particular technology.",
"Has the usage of a small set of resource-rich languages in such conferences led to a disparity, pushing the less represented to the bottom of the ladder in terms of research?",
"We analyze the involvement of various languages in NLP research conferences over the years.",
"The ACL Anthology Corpus (ACL-ARC) (Bird et al., 2008) is the most extensively used dataset for analyzing trends in NLP research.",
"This dataset contains PDFs, and parsed XMLs of Anthology papers.",
"However, the latest versioned copy of ACL-ARC is till 2015 which makes it insufficient for analyzing trends in the most recent years.",
"Moreover, paper data for non-ACL conferences such as LREC , COLING are absent from this dataset.",
"In order to create a consistent data model, we augment this dataset by using Semantic Scholar's API and scraping ACL Anthology itself.",
"Thus, we gather a consolidated dataset for 11 conferences which are relevant in judging global trends in NLP research.",
"These include ACL , NAACL , EMNLP , EACL , COLING , LREC , CONLL , Workshops ( WS ) (all since 1990), SEMEVAL , TACL and CL Journals.",
"We have attached the statistics of the dataset in Appendix A. 4.2 Analysis 4.2.1 Language Occurrence Entropy The primary step of measuring the language diversity and inclusion of a conference and their progress is to measure the usage of language in that conference over multiple iterations.",
"One of the ways to do it is by using frequency-based techniques where we can measure the occurrence of languages in that iteration.",
"However, it is not a unified measure which represents the nature of language distribution with a single number.",
"To this end, we use entropy as our metric to measure language inclusivity of each conference.",
"It efficiently captures the skew in the distribution of languages,",
"thereby making the disparity in language usage more clearer.",
"The language occurrence entropy is calculated as follows: For a conference c held in year y having P papers, there exists a binary matrix { MP L } c,y where M ij is 1 if i th paper ( P ) mentions the j th language ( L ).",
"Then the entropy { S } c,y is: { S j } c,y = 1 PP (cid:88) i =1 { M ij } c,y { S (cid:48) j } c,y = { S j } c,y (cid:80) Lj =1 { S j } c,y { S } c,y = L (cid:88) j =1 { S (cid:48) j } c,y log e { S (cid:48) j } c,y (1) where { S j } c,y is a array of length L accounting for number of papers in a specific language, { S (cid:48) j } c,y is normalization done in order to get probability distribution for calculating entropy.",
"In short, the higher the entropy, the more spread out is the distribution over the languages.",
"The more peaked or skewed the distribution is, the lower is the entropy.",
"In Figure 4, we can observe the entropy S plotted for each c as a function of y .",
"To quantify the extent of inclusion of language classes from our taxonomy in different conferences, we employ class-wise Mean Reciprocal Rank (MRR) as a metric.",
"This helps in determining the standing of each class in a conference.",
"If the rank of the language (rank i ) is ordered by the frequency of being mentioned in papers of a particular conference, and Q is the total number of queries aka number of languages in each class, then: MRR = 1 | Q | | Q | (cid:88) i =1 1 rank i (2) Table 4 shows inverse mean reciprocal ranks of each category for a conference.",
"The smaller the inverse MRR value, the more inclusive that conference is to that language class.",
"All-Inclusive Looking at the combined trends, both the entropy plots and the MRR figures suggest that LREC and WS have been the most inclusive across all categories and have been continuing to do so over the years.",
"A ray of hope With regards to the proceedings of ACL , EMNLP , NAACL , LREC , we note a marked spike in entropy in the 2010s, which is absent in other conferences.",
"This might be due to the increased buzz surrounding cross-lingual techniques.",
"The later the merrier An interesting point to note is that conferences which started later have taken lessons from past in matters of language inclusion.",
"While the earlier established conferences have continued to maintain interest in a particular underlying theme of research which may or may not favour multilingual systems.",
"This can be observed in : COLING , ACL , EACL , EMNLP (order of their start dates).",
"Falling off the radar The taxonomical hierarchy is fairly evident when looking at the MRR table (Table",
"4) with class 5 coming within rank 2/3 and class 0 being left-behind' with average ranks ranging from 600 to 1000.",
"While the dip in ranks is more forgiving for conferences such as LREC , WS , it is more stark in CONLL , TACL , SEMEVAL .",
"The measures discussed in the previous section signal at variance in acceptance of different languages at different NLP venues across time.",
"However, there are usually multiple subtle factors which vanilla statistics fail to capture.",
"Embeddings, on the other hand, have been found extensively useful in NLP tasks as they are able to learn relevant signals directly from the data and uncover these rather complex nuances.",
"To this end, we propose a novel approach to jointly learn the representations of conferences, authors and languages, which we collectively term as entities.",
"The proposed embedding method allows us to project these entities in the same space enabling us to effectively reveal patterns revolving around them.",
"We define the following model to jointly learn the embeddings of entities such that entities which have similar contextual distributions should co-occur together.",
"For example, for an author A , who works more extensively on language L i than L j and publishes more at conference C m than at conference C n , the embeddings of A would be closer L i than L j and C m than C n .",
"Given an entity and a paper associated with the entity, the learning task of the model is to predict K randomly sampled words from the title and the abstract of the paper.",
"We only select the title and abstract as compared to the entire paper text as Entity Input ( E -dim) Hidden Layer e k h i N -dim WE NWN VWN VWN V Word Output ( Vdim) y 1,j y 2,j y C,j Figure 5: Model architecture to learn entity embeddings.",
"they provide a concise signal with reduced noise.",
"This model draws parallels to the Skipgram model of Word2Vec (Mikolov et al., 2013), where given an input word in Skipgram model, the task is to predict the context around the word.",
"The input entity and K randomly sampled words in our case correspond to the input word and context in the Skipgram model.",
"The goal of the model is to maximize probability of predicting the random K words, given the entity id as the input: 1 M 1 KM (cid:88) m =1 K (cid:88) k =1 I (cid:88) i =1 p ( w k | E <i,P j > ) (3) where E <i,P j > is the entity E i which is associated with the P jth paper and p is the probability of predicting the word w i out of the K words sampled from the paper and M is the total number of papers in the dataset.",
"To optimize for the above distribution, we define the typical SGD based learning strategy similar to Word2Vec(Mikolov et al., 2013).",
"Figure 5 shows an outline of the model.",
"The entity input layer has dimension equal to the total number of entities in the dataset ( E ).",
"Hidden layer size is set to the desired embedding dimension ( N ).",
"The output layer predicts words for the input entity and is of the same size as the vocabulary ( V ).",
"The entities we learn are: (1) authors of the paper, (2) languages mentioned in the paper, (3) conference where the paper was accepted (e.g. ACL ), and (4) the conference iteration (e.g. ACL'19).",
"We describe the model detail and hyperparameter tuning in Appendix A. Figure 6: t-SNE visualization of the learnt conference and language embeddings.",
"In order to better understand how languages are represented at different venues, we visualize the distribution of entity embeddings by projecting the generated embeddings into 2 dimensions using t-SNE (Maaten and Hinton, 2008) (as shown in Figure 6).",
"For clarity, we only plot ACL , LREC , WS and CL among the conferences, and all languages from the taxonomy, except those in Class 0.",
"We omit plotting Class 0 languages as their projections are noisy and scattered due to their infrequent occurrence in papers.",
"To understand the research contributions of individual authors or communities towards research in respective language classes, we leverage the distribution between author and language entities by computing a variation of the Mean Reciprocal Rank (MRR).",
"We consider a language L , and take the K closest authors to L using cosine distance, and then take the closest M languages to each author.",
"If L is present in the closest languages of an author, then we take the rank of L in that list, inverse it, and average it for the K authors.",
"To compute this metric for a class of languages from the taxonomy, we take the mean of the MRR for all languages in that class.",
"We fix M to be 20, so as to understand the impact of the community when the number of languages remains unchanged.",
"Table 5 shows the MRR of various class of languages.",
"A higher value of this measure indicates a more focused community working on that particular language, rather than a diverse range of authors.",
"Time waits for no conference We can see a left to right trend in Figure 6 with ACL in 1983 in the left, and subsequent iterations laid out as we go right.",
"We observe the same trend for EACL , NAACL , EMNLP , CONLL , TACL , and COLING .",
"We can say that the axis represents the progression of time to a certain extent.",
"Alternatively, it may even represent a shift in the focus of NLP research, moving from theoretical research focused on grammar and formalisms on the left to a data-driven, more ML-oriented approach on the right.",
"This can be observed as most of the CL embeddings are positioned on the left given their theoretical research focus.",
"Long distance relationships?",
"From Figure 6, we can note that the less-resourced language classes are farther away from the trend-line of ACL than the more resourced ones, with class 5 being closest, and class 1 being farthest.",
"The visualization illustrates that languages are spreading out radially downwards from the ACL trendline with popular classes of taxonomy like class 5 and class 4 being closer while others spreading out farther.",
"Again, as previous analyses have shown us, LREC and WS embeddings are closer to the language embeddings as compared to the other conferences as shown in Figure 6.",
"In fact, LREC cluster is right in the mid-dle of language clusters and so is the major part of the WS cluster, especially in recent iterations.",
"Not all heroes wear capes Table 5 shows the MRR for each class of languages in the taxonomy.",
"From Table 5, it can be seen that class 0 has the highest MRR across different K values.",
"This shows that perhaps low resource languages have some research groups solely focused on the challenges related to them.",
"There is a decreasing trend of MRR from class 0 to class 5, except for class 2, thereby indicating that more popular languages are addressed by more authors.",
"We also observe that even though Japanese , Mandarin , Turkish and Hindi (MRR(10) > 0 .",
"75 ) are part of class 5 and class 4, their MRR is higher even compared to low resource languages in another classes, indicating that these languages have focused research communities working on them.",
"On the other end of the spectrum, we observe a lot of low resource languages like Burmese (MRR(10) = 0 .",
"02 ), Javanese (MRR(10) = 0 .",
"23 ) and Igbo (MRR(10) = 0 .",
"13 ) which have millions of speakers but significantly low MRR values, potentially indicating that not a lot of attention is being given to them in the research community.",
"We set out to answer some critical questions about the state of language resource availability and research.",
"We do so by conducting a series of quantitative analyses through the lens of a defined taxonomy.",
"As a result, we uncover a set of interesting insights and also yield consistent findings about language disparity: The taxonomical hierarchy is repeatedly evident from individual resource availabilities (LDC, LRE, Wikipedia, Web), entropy calculations for conferences, and the embeddings analysis.",
"LREC and Workshops( WS ) have been more inclusive across different classes of languages, seen through the inverse MRR statistics, entropy plots and the embeddings projection.",
"There are typological features (such as 144E), existing in languages over spread out regions, represented in many resource-poor languages but not sufficiently in resource-rich languages.",
"This could potentially reduce the performance of language tools relying on transfer learning.",
"There is a possible indication of a time progression or even a technological shift in NLP, which can be visualized in the embeddings projection.",
"Newer conferences have been more language-inclusive, whereas older ones have maintained interests in certain themes of research which don't necessarily favour multilingual systems.",
"There is hope for low-resource languages, with MRR figures indicating that there are focused communities working on these languages and publishing works on them, but there are still plenty of languages, such as Javanese and Igbo , which do not have any such support.",
"We believe these findings will play a strong role in making the community aware of the gap that needs to be filled before we can truly claim state-of-the-art technologies to be language agnostic.",
"Pertinent questions should be posed to authors of future publications about whether their proposed language technologies extend to other languages.",
"There are ways to improve the inclusivity of ACL conferences.",
"Special tracks could be initiated for low-resource, language-specific tasks, although we believe that in doing so, we risk further marginaliza-tion of those languages.",
"Instead, a way to promote change could be the addition of D&I (Diversity and Inclusion) clauses involving language-related questions in the submission and reviewer forms: Do your methods and experiments apply (or scale) to a range of languages?",
"Are your findings and contributions contributing to the inclusivity of various languages?",
"Finally, in case you're still itching to know, Language X is Dutch , and Y is Somali .",
"We would like to thank Anshul Bawa, Adithya Pratapa, Ashish Sharma for their valuable feedback during the final phase of work.",
"We would also like to thank the anonymous reviewers for their many insightful comments and suggestions."
] | [
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Suppose we want to specify the inductive bias that married couples typically go on honeymoons for the task of extracting pairs of spouses from text.",
"In this paper, we allow model developers to specify these types of inductive biases as natural language explanations.",
"We use BERT fine-tuned on MultiNLI to interpret these explanations with respect to the input sentence, producing explanation-guided representations of the input.",
"Across three relation extraction tasks, our method, ExpBERT, matches a BERT baseline but with 320 less labeled data and improves on the baseline by 310 F1 points with the same amount of labeled data.",
"Consider the relation extraction task of finding spouses in text, and suppose we wanted to specify the inductive bias that married couples typically go on honeymoons.",
"In a traditional feature engineering approach, we might try to construct a did they go on a honeymoon? feature and add that to the model.",
"In a modern neural network setting, however, it is not obvious how to use standard approaches like careful neural architecture design or data augmentation to induce such an inductive bias.",
"In a way, while the shift from feature engineering towards end-to-end neural networks and representation learning has alleviated the burden of manual feature engineering and increased model expressivity, it has also reduced our control over the inductive biases of a model.",
"In this paper, we explore using natural language explanations (Figure",
"1) to generate features that can augment modern neural representations.",
"This imbues representations with inductive biases corresponding to the explanations, thereby restoring some degree of control while maintaining their expressive power.",
"Prior work on training models with explanations use semantic parsers to interpret explanations: the parser converts each explanation into an executable logical form that is executable over the input sentence and uses the resulting outputs as features (Srivastava et al., 2017) or as noisy labels on unlabeled data (Hancock et al., 2018).",
"However, semantic parsers can typically only parse low-level statements like wife' appears between { o 1 } and { o 2 } and the last word of { o 1 } is the same as the last word of { o 2 } (Hancock et al., 2018).",
"We remove these limitations by using modern distributed language representations, instead of semantic parsers, to interpret language explanations.",
"Our approach, ExpBERT (Figure 2), uses BERT (Devlin et al., 2019) fine-tuned on the MultiNLI natural language inference dataset (Williams et al., 2018) to produce features that interpret each explanation on an input.",
"We then use these features to augment the input representation.",
"Just as a semantic parser grounds an explanation by converting it into a logical form and then executing it, the features produced by BERT can be seen as a soft execution of the explanation on the input.",
"On three benchmark relation extraction tasks, ExpBERT improves over a BERT baseline with no explanations: it achieves an F1 score of 310 points higher with the same amount of labeled data, and a similar F1 score as the full-data baseline but with 3 20x less labeled data.",
"ExpBERT also improves on a semantic parsing baseline (+3 to 5 points F1), suggesting that natural language explanations can be richer than low-level, programmatic explanations.",
"Problem.",
"We consider the task of relation extraction: Given x = ( s, o 1 , o 2 ) , where s is a sequence of words and o 1 and o 2 are two entities that are substrings within s , our goal is to classify the relation y Y between o 1 and o 2 .",
"The label space Y includes a NO-RELATION label if no relation applies.",
"Additionally, we are given a set of natural language explanations E = { e 1 , e 2 , . . . , e n } designed to capture relevant features of the input for classification.",
"These explanations are used to define a global collection of features and are not tied to individual examples.",
"Approach.",
"Our approach (Figure",
"2) uses pretrained neural models to interpret the explanations E in the context of a given input x .",
"Formally, we define an interpreter I as any function that takes an input x and explanation e j and produces a feature vector in R d .",
"In our ExpBERT implementation, we choose I to capture whether the explanation e j is entailed by the input x .",
"Concretely, we use BERT (Devlin et al., 2019) fine-tuned on MultiNLI (Williams et al., 2018): we feed wordpiece-tokenized versions of the explanation e j (hypothesis) and the instance x (premise), separated by a [SEP] token, to BERT.",
"Following standard practice, we use the vector at the [CLS] token to represent the entire input as a 768-dimensional feature vector: I ( x, e j ) = BERT (cid:0) [CLS] , s, [SEP] , e j (cid:1) .",
"These vectors, one for each of the n explanations, are concatenated to form the explanation representation v ( x ) R 768 n ,",
"In addition to v ( x ) , we also map x into an input representation u ( x ) R 768 |Y| by using the same interpreter over textual descriptions of each potential relation.",
"Specifically, we map each potential relation y i in the label space Y to a textual description r i (Figure 2), apply I ( x, ) to r i , and concatenate the resulting feature vectors: u ( x ) = (cid:2) I ( x, r 1 ) , I ( x, r 2 ) , . . . , I ( x, r |Y| ) (cid:3) .",
"Finally, we train a classifier over u ( x ) and v ( x )",
"Note that u ( x ) and v ( x ) can be obtained in a preprocessing step since I ( , ) is fixed (i.e., we do not additionally fine-tune BERT on our tasks).",
"For more model details, please refer to Appendix A.1.",
"Baselines.",
"We compare ExpBERT against several baselines that train a classifier over the same input representation u ( x ) .",
"NoExp trains a classifier only on u ( x ) .",
"The other baselines augment u ( x ) with variants of the explanation representation v ( x ) .",
"BERT+SemParser uses the semantic parser from Hancock et al. (2018) to convert explanations into executable logical forms.",
"The resulting denotations over the input x (a single bit for each explanation) are used as the explanation representation, i.e., v ( x ) { 0 , 1 } n .",
"We use two different sets of explanations for this baseline: our natural language explanations (LangExp) and the low-level explanations from Hancock et al. (2018) that are more suitable for the semantic parser (ProgExp).",
"BERT+Patterns converts explanations into a collection of unigram, bigram, and trigram patterns and creates a binary feature for each pattern based on whether it is contained in s or not.",
"This gives v ( x ) { 0 , 1 } n (cid:48) , where n (cid:48) is the number of patterns.",
"Finally, we compare ExpBERT against a Table 1: Dataset statistics.",
"variant called ExpBERT-Prob , where we directly use entailment probabilities obtained by BERT (in-stead of the feature vector at the [CLS] token) as the explanation representation v ( x ) [0 , 1] n .",
"Datasets.",
"We consider 3 relation extraction datasets from various domains Spouse and Disease (Hancock et al., 2018), and TACRED (Zhang et al., 2017).",
"Spouse involves classifying if two entities are married; Disease involves classifying whether the first entity (a chemical) is a cause of the second entity (a disease); and TACRED involves classifying the relation between the two entities into one of 41 categories.",
"Dataset statistics are in Table 1; for more details, see Appendix A.2.",
"Explanations.",
"To construct explanations, we randomly sampled 50 training examples for each y Y and wrote a collection of natural language statements explaining the gold label for each example.",
"For Spouse and Disease , we additionally wrote some negative explanations for the NORELATION category.",
"To interpret explanations for Disease , we use SciBERT, a variant of BERT that is better suited for scientific text (Beltagy et al., 2019).",
"A list of explanations can be found in Appendix A.3.",
"Benchmarks.",
"We find that explanations improve model performance across all three datasets: ExpBERT improves on the NoExp baseline by +10.6 F1 points on Spouse , +2.7 points on Disease , and +3.2 points on TACRED (Table 2).",
"1 On TACRED , which is the most well-established of our benchmarks and on which there is signifi-cant prior work, ExpBERT (which uses a smaller BERT-base model that is not fine-tuned on our task) outperforms the standard, fine-tuned BERT-large model by +1.5 F1 points (Joshi et al., 2019).",
"Prior work on Spouse and Disease used a simple logistic classifier over traditional features created from 1 We measure performance using F1 scores due to the class imbalance in the datasets ( Spouse : 8% positive, Disease : 20.8% positive, and TACRED : 20.5% examples with a relation).",
"dependency paths of the input sentence.",
"This performs poorly compared to neural models, and our models attain significantly higher accuracies (Han-cock et al., 2018).",
"Using BERT to interpret natural language explanations improves on using semantic parsers to evaluate programmatic explanations (+5.5 and +2.7 over BERT+SemParser (ProgExp) on Spouse and Disease , respectively).",
"ExpBERT also outperforms the BERT+SemParser (LangExp) model by +9.9 and +3.3 points on Spouse and Disease .",
"We exclude these results on TACRED as it was not studied in Hancock et al. (2018), so we did not have a corresponding semantic parser and set of programmatic explanations.",
"We note that ExpBERTwhich uses the full 768-dimensional feature vector from each explanationoutperforms ExpBERT (Prob), which summarizes these vectors into one number per explanation, by +25 F1 points across all three datasets.",
"Data efficiency.",
"Collecting a set of explanations E requires additional effortit took the authors about 1 minute or less to construct each explanation, though we note that it only needs to be done once per dataset (not per example).",
"However, collecting a small number of explanations can significantly and disproportionately reduce the number of labeled examples required.",
"We trained ExpBERT and the NoExp baseline with varying fractions of Spouse and TACRED training data (Fig-ure 3).",
"ExpBERT matches the NoExp baseline with 20x less data on Spouse ; i.e., we obtain the same performance with ExpBERT with 40 explanations and 2k labeled training examples as with NoExp with 22k examples.",
"On TACRED , ExpBERT requires 3x less data, obtaining the same performance with 128 explanations and 23k training examples as compared to NoExp with 68k examples.",
"These results suggest that the higher-bandwidth signal in language can help models be more data-efficient.",
"To understand which explanations are important, we group explanations into a few semantic categories (details in Appendix A.3) and cumulatively add them to the NoExp baseline.",
"In particular, we break down explanations for Spouse into the Table 2: Results on relation extraction datasets.",
"groups MARRIED (10 explanations), CHILDREN (5 explanations), ENGAGED (3 explanations), NEGATIVES (13 explanations) and MISC (9 explanations).",
"We find that adding new explanation groups helps performance (Table 3), which suggests that a broad coverage of various explanatory factors could be helpful for performance.",
"We also observe that the MARRIED group (which contains paraphrases of { o 1 } is married to { o 2 } ) alone boosts performance over NoExp, which suggests that a variety of paraphrases of the same explanation can improve performance.",
"We now test whether ExpBERT can do equally well with the same number of random explanations, obtained by replacing words in the explanation with random words.",
"The results are dataset-specific: random explanations help on Spouse but not on Disease .",
"However, in both cases, random explanations do significantly worse than the original explanations (Table 4).",
"Separately adding 10 random Table 4: ExpBERT accuracy is significantly lower when we replace words in the original explanations with random words.",
"explanations to our original explanations led to a slight drop ( 1 F1 point) in accuracy.",
"These results suggest that ExpBERT's performance comes from having a diverse set of high quality explanations and are not just due to providing more features.",
"Natural language explanations can capture different types of inductive biases and prior knowledge, but some types of prior knowledge are of course better introduced through other means.",
"We wrap up our experiments with a vignette on how language explanations can complement other forms of feature and representation engineering.",
"We consider Disease , where we have access to an external ontology (Comparative Toxicogenomic Database or CTD) from Wei et al. (2015) containing chemical-disease interactions.",
"Following Hancock et al. (2018), we add 6 bits to the explanation representation v ( x ) that test if the given chemical-disease pair follows certain relations in CTD (e.g., if they are in the ctd-therapy dictionary).",
"Table 5 shows that as expected, other sources of information can complement language explanations in ExpBERT.",
"Many other works have used language to guide model training.",
"As mentioned above, semantic parsers have been used to convert language explanations into features (Srivastava et al., 2017) and noisy labels on unlabeled data (Hancock et al., 2018; Wang et al., 2019).",
"Rather than using language to define a global collection of features, Rajani et al. (2019) and Cam-buru et al. (2018) use instance -level explanations to train models that generate their own explanations.",
"Zaidan and Eisner (2008) ask annotators to highlight important words, then learn a generative model over parameters given these rationales.",
"Others have also used language to directly produce parameters of a classifier (Ba et al., 2015) and as part of the parameter space of a classifier (Andreas et al., 2017).",
"While the above works consider learning from static language supervision, Li et al. (2016) and Weston (2016) learn from language supervision in an interactive setting.",
"In a related line of work, Wang et al. (2017), users teach a system high-level concepts via language.",
"Recent progress in general-purpose language representation models like BERT open up new opportunities to incorporate language into learning.",
"In this work, we show how using these models with natural language explanations can allow us to leverage a richer set of explanations than if we were constrained to only use explanations that can be programmatically evaluated, e.g., through n-gram matching (BERT+Patterns) or semantic parsing (BERT+SemParser).",
"The ability to incorporate prior knowledge of the right inductive biases into model representations dangles the prospect of building models that are more robust.",
"However, more work will need to be done to make this approach more broadly applicable.",
"We outline two such avenues of future work.",
"First, combining our ExpBERT approach with more complex state-of-the-art models can be conceptually straightforward (e.g., we could swap out BERT-base for a larger model) but can sometimes also require overcoming technical hurdles.",
"For example, we do not fine-tune ExpBERT in this paper; doing so might boost performance, but fine-tuning through all of the explanations on each example is computationally intensive.",
"Second, in this paper we provided a proof-of-concept for several relation extraction tasks, relying on the fact that models trained on existing natural language inference datasets (like MultiNLI) could be applied directly to the input sentence and explanation pair.",
"Extending ExpBERT to other natural language tasks where this relationship might not hold is an open problem that would entail finding different ways of interpreting an explanation with respect to the input.",
"We are grateful to Robin Jia, Peng Qi, John Hewitt, Amita Kamath, and other members of the Stanford NLP Group for helpful discussions and suggestions.",
"We also thank Yuhao Zhang for assistance with TACRED experiments.",
"PWK was supported by the Facebook Fellowship Program.",
"Toyota Research Institute (TRI) provided funds to assist the authors with their research but this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity.",
"Code and model checkpoints are available at https://github.com/MurtyShikhar/ExpBERT.",
"The features generated by various interpreters can also be found at that link."
] | [
"method",
"abstain",
"method",
"result",
"result",
"method",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain"
] |
[
"Most existing joint neural models for Information Extraction (IE) use local task-specific classifiers to predict labels for individual instances (e.g., trigger, relation) regardless of their interactions.",
"For example, a VICTIM of a DIE event is likely to be a VICTIM of an ATTACK event in the same sentence.",
"In order to capture such cross-subtask and cross-instance inter-dependencies, we propose a joint neural framework, ONEIE, that aims to extract the globally optimal IE result as a graph from an input sentence.",
"ONEIE performs end-to-end IE in four stages: (1) Encoding a given sentence as contextualized word representations; (2) Identifying entity mentions and event triggers as nodes; (3) Computing label scores for all nodes and their pairwise links using local classifiers; (4) Searching for the globally optimal graph with a beam decoder.",
"At the decoding stage, we incorporate global features to capture the cross-subtask and cross-instance interactions.",
"Experiments show that adding global features improves the performance of our model and achieves new state-of-the-art on all subtasks.",
"As ONEIE does not use any language-specific feature, we prove it can be easily applied to new languages or trained in a multilingual manner.",
"Our code and models for English, Spanish and Chinese are publicly available for research purpose 1 .",
"Information Extraction (IE) aims to extract structured information from unstructured texts.",
"It is a complex task comprised of a wide range of subtasks, such as named, nominal, and pronominal mention extraction, entity linking, entity coreference resolution, relation extraction, event extraction, and event coreference resolution.",
"Early efforts typically perform IE in a pipelined fashion, 1 http://blender.cs.illinois.edu/software/ oneie which leads to the error propagation problem and disallows interactions among components in the pipeline.",
"As a solution, some researchers propose joint inference and joint modeling methods to improve local prediction (Roth and Yih, 2004; Ji and Grishman, 2005; Ji et al., 2005; Sil and Yates, 2013; Li et al., 2014; Durrett and Klein, 2014; Miwa and Sasaki, 2014; Lu and Roth, 2015; Yang and Mitchell, 2016; Kirschnick et al., 2016).",
"Due to the success of deep learning, neural models have been widely applied to various IE subtasks (Col-lobert et al., 2011; Chiu and Nichols, 2016; Chen et al., 2015; Lin et al., 2016).",
"Recently, some efforts (Wadden et al., 2019; Luan et al., 2019) revisit global inference approaches by designing neural networks with embedding features to jointly model multiple subtasks.",
"However, these methods use separate local task-specific classifiers in the final layer and do not explicitly model the interdependencies among tasks and instances.",
"Figure 1 shows a real example where the local argument role classifier predicts a redundant PERSON edge.",
"The model should be able to avoid such mistakes if it is capable of learning and leveraging the fact that it is unusual for an ELECT event to have two PERSON arguments.",
"Example: Prime Minister Abdullah Gul resigned earlier Tuesday to make way for Erdogan , who won a parliamentary seat in by-elections Sunday.",
"To address this issue, we propose a joint neu-earthquake killed The 19 people and injured 300 in Kashmir region , India Identification Classification Trigger Entity Role Relation Decoding Encoding Die PER victim Injure Die PER victim Injure Die PER victim Injure ORG PER victim victim \u0001 Injure-victimORG \u0001 Injure-victimPER ... ...",
"ral framework, ONEIE, to perform end-to-end IE with global constraints.",
"As Figure 2 shows, instead of predicting separate knowledge elements using local classifiers, ONEIE aims to extract a globally optimal information network for the input sentence.",
"When comparing candidate information networks during the decoding process, we not only consider individual label scores for each knowledge element, but evaluate cross-subtask and cross-instance interactions in the network.",
"In this example, a graph with the INJURE-VICTIM-ORG (the VICTIM of an INJURE event is an ORG entity) structure is demoted.",
"Experiments show that our framework achieves comparable or better results compared to the state-of-the-art end-to-end architecture (Wadden et al., 2019).",
"To the best of our knowledge, ONEIE is the first end-to-end neural IE framework that explicitly models cross-subtask and cross-instance interdependencies and predicts the result as a unified graph instead of isolated knowledge elements.",
"Because ONEIE does not rely on language-specific features, it can be rapidly applied to new languages.",
"Furthermore, global features in our framework are highly explainable and can be explicitly analyzed.",
"Given a sentence, our ONEIE framework aims to extract an information network representation (Li et al., 2014), where entity mentions and event triggers are represented as nodes, and relations and event-argument links are represented as edges.",
"In other words, we perform entity, relation, and event extraction within a unified framework.",
"In this section, we will elaborate these tasks and involved terminologies.",
"Entity Extraction aims to identify entity mentions in text and classify them into pre-defined entity types.",
"A mention can be a name, nominal, or pronoun.",
"For example, Kashmir region should be recognized as a location ( LOC ) named entity mention in Figure 2. Relation Extraction is the task of assigning a relation type to an ordered pair of entity mentions.",
"For example, there is a PART-WHOLE relation between Kashmir region and India.",
"Event Extraction entails identifying event triggers (the words or phrases that most clearly express event occurrences) and their arguments (the words or phrases for participants in those events) in unstructured texts and classifying these phrases, respectively, for their types and roles.",
"An argument can be an entity, time expression, or value (e.g., MONEY , JOB-TITLE , CRIME ).",
"For example, in Figure 2, the word injured triggers an INJURE event and 300 is the VICTIM argument.",
"We formulate the task of extracting information networks as follows.",
"Given an input sentence, our goal is to predict a graph G = ( V, E ) , where V and E are the node and edge sets respectively.",
"Each node v i = (cid:104) a i , b i , l i (cid:105) V represents an entity mention or event trigger, where a and b are the start and end word indices, and l is the node type label.",
"Each edge e ij = (cid:104) i, j, l ij (cid:105) E is represented similarly, whereas i and j denote the indices of involved nodes.",
"For example, in Figure 2, the trigger injured is represented as (cid:104) 7, 7, INJURE (cid:105) , the entity mention Kashmir region is represented as (cid:104) 10, 11, LOC (cid:105) , and the event-argument edge between them is (cid:104) 2, 3, PLACE (cid:105) .",
"As Figure 2 illustrates, our ONEIE framework extracts the information network from a given sentence in four steps: encoding, identification, classification, and decoding.",
"We encode the input sentence using a pre-trained BERT encoder (Devlin et al., 2019) and identify entity mentions and event triggers in the sentence.",
"After that, we compute the type label scores for all nodes and pairwise edges among them.",
"During decoding, we explore possible information networks for the input sentence using beam search and return the one with the highest global score.",
"Given an input sentence of L words, we obtain the contextualized representation x i for each word using a pre-trained BERT encoder.",
"If a word is split into multiple word pieces (e.g., Mondrian Mon , ##dr , ##ian ), we use the average of all piece vectors as its word representation.",
"While previous methods typically use the output of the last layer of BERT, our preliminary study shows that enriching word representations using the output of the third last layer of BERT can substantially improve the performance on most subtasks.",
"At this stage, we identify entity mentions and event triggers in the sentence, which will act as nodes in the information network.",
"We use a feed-forward network FFN to compute a score vector y i = FFN( x i ) for each word, where each value in y i represents the score for a tag in a target tag set 2 .",
"After that, we use a conditional random fields (CRFs) layer to capture the dependencies between predicted tags (e.g., an I-PER tag should not follow a B-GPE tag).",
"Similar to (Chiu and Nichols, 2016), we calculate the score of a tag path z = { z 1 , ..., z L } as s ( X , z ) = L (cid:88) i =1 y i, z i + L +1 (cid:88) i =1 A z i 1 , z i , where X = { x 1 , ..., x L } is the contextualized representations of the input sequence, y i, z i is the z i -th 2 We use the BIO tag scheme, in which the prefix Bmarks the beginning of a mention, and Imeans inside of a mention.",
"A token not belonging to any mention is tagged with O .",
"component of the score vector y i , and A z i 1 , z i is the ( z i 1 , z i ) entry in matrix A that indicates the transition score from tag z i 1 to z i .",
"The weights in A are learned during training.",
"We append two special tags <start> and <end> to the tag path as z 0 and z L +1 to denote the start and end of the sequence.",
"At the training stage, we maximize the log-likelihood of the gold-standard tag path as log p ( z | X ) = s ( X , z ) log (cid:88) z Z e s ( X , z ) , where Z is the set of all possible tag paths for a given sentence.",
"Thus, we define the identification loss as LI = log p ( z | X ) .",
"In our implementation, we use separate taggers to extract entity mentions and event triggers.",
"Note that we do not use types predicted by the taggers.",
"Instead, we make a joint decision for all knowledge elements at the decoding stage to prevent error propagation and utilize their interactions to improve the prediction of node type.",
"We represent each identified node as v i by averaging its word representations.",
"After that, we use separate task-specific feed-forward networks to calculate label scores for each node as y ti = FFN t ( v i ) , where t indicates a specific task.",
"To obtain the label score vector for the edge between the i -th and j -th nodes, we concatenate their span representations and calculate the vector as y tk = FFN t ( v i , v j ) .",
"For each task, the training objective is to minimize the following cross-entropy loss L t = 1 N t N t (cid:88) i =1 y ti log y ti , where y ti is the true label vector and N t is the number of instances for task t .",
"If we ignore the inter-dependencies between nodes and edges, we can simply predict the label with the highest score for each knowledge element and thus generate the locally best graph G .",
"The score of G can be calculated as s (cid:48) ( G ) = (cid:88) t T N t (cid:88) i =1 max y ti , where T is the set of tasks.",
"We refer to s (cid:48) ( G ) as the local score of G .",
"A limitation of local classifiers is that they are incapable of capturing inter-dependencies between knowledge elements in an information network.",
"We consider two types of inter-dependencies in our framework.",
"The first type of inter-dependency is Cross-subtask interactions between entities, relations, and events.",
"Consider the following sentence.",
"A civilian aid worker from San Francisco was killed in an attack in Afghanistan.",
"A local classifier may predict San Francisco as a VICTIM argument because an entity mention preceding was killed is usually the victim despite the fact that a GPE is unlikely to be a VICTIM .",
"To impose such constraints, we design a global feature as shown in Figure",
"3(a) to evaluate whether the structure DIE-VICTIM-GPE exists in a candidate graph.",
"Another type of inter-dependency is Cross-instance interactions between multiple event and/or relation instances in the sentence.",
"Take the following sentence as an example.",
"South Carolina boy , 9, dies during hunting trip after his father accidentally shot him on Thanksgiving Day.",
"It can be challenging for a local classifier to predict boy as the VICTIM of the ATTACK event triggered by shot due to the long distance between these two words.",
"However, as shown in Figure",
"3(b), if an entity is the VICTIM of a DIE event, it is also likely to be the VICTIM of an ATTACK event in the same sentence.",
"Motivated by these observations, we design a set of global feature templates (event schemas) as listed in Table 1 to capture cross-subtask and cross-instance interactions, while the model fills in all possible types to generate features and learns the",
"(a) Cross-subtask Interaction",
"(b) Cross-instance Interactions PER dies Die Attack boy victim victim shot Die San Francisco killed victim GPE Figure 3: Examples of inter-dependencies between elements in information networks.",
"weight of each feature during training.",
"Given a graph G , we represent its global feature vector as f G = { f 1 ( G ) , ..., f M ( G ) } , where M is the number of global features and f i ( ) is a function that evaluates a certain feature and returns a scalar.",
"For example, f i ( G ) = (cid:40) 1 , G has multiple ATTACK events 0 , otherwise .",
"Next, ONEIE learns a weight vector u RM and calculates the global feature score of G as the dot product of f G and u .",
"We define the global score of G as the sum of its local score and global feature score, namely s ( G ) = s (cid:48) ( G ) + uf G , We make the assumption that the gold-standard graph for a sentence should achieve the highest global score.",
"Therefore, we minimize the following loss function LG = s ( G ) s ( G ) , where G is the graph predicted by local classifiers and G is the gold-standard graph.",
"Finally, we optimize the following joint objective function during training L = LI + (cid:88) t TL t + LG 3.5 Decoding As we have discussed, because local classifiers ignore interactions among elements in an information network, they may predict contradictory results or fail to predict difficult edges that require information from other elements.",
"In order to address these issues, ONEIE makes a joint decision for all nodes and their pairwise edges to obtain the globally optimal graph.",
"The basic idea is to calculate the global score for each candidate graph and select the one with the highest score.",
"However, exhaustive search is infeasible in many cases as the size of search space grows exponentially with the number of nodes.",
"Therefore, we design a beam search-based decoder as Figure 4 depicts.",
"Given a set of identified nodes V and the label scores for all nodes and their pairwise links, we perform decoding with an initial beam set B = { K 0 } , where K 0 is an order-zero graph.",
"At each step i , we expand each candidate in B in node step and edge step as follows.",
"Node step : We select v i V and define its candidate set as V i = {(cid:104) a i , b i , l ( k ) i (cid:105)| 1 k v } , where l ( k ) i denotes the label with the k -th highest local score for v i , and v is a hyper-parameter that controls the number of candidate labels to consider.",
"We update the beam set by B { G + v | ( G, v ) B V i } , Edge step : We iteratively select a previous node v j V, j < i and add possible edges between v j and v i .",
"Note that if v i is a trigger, we skip v j if it is also a trigger.",
"At each iteration, we construct a candidate edge set as E ij = {(cid:104) j, i, l ( k ) ij (cid:105)| 1 k e } , where l ( k ) ij is the label with k -th highest score for e ij and e is a threshold for the number of candidate labels.",
"Next, we update the beam set by B { G + e | ( G, e ) B E ij } , At the end of each edge step, if | B | is larger than the beam width , we rank all candidates by global score in descending order and keep the top ones.",
"We perform our experiments on the Automatic Content Extraction (ACE) 2005 dataset 3 , which provides entity, value, time, relation, and event annotations for English, Chinese, and Arabic.",
"Following Wadden et al. (2019)'s pre-processing 4 , we conduct experiments on two datasets, ACE05-R that includes named entity and relation annotations, and ACE05-E that includes entity, relation, and event annotations.",
"We keep 7 entity types, 6 coarse-grained relation types, 33 event types, and 22 argument roles.",
"In order to reinstate some important elements absent from ACE05-R and ACE05-E, we create a new dataset, ACE05-E + , by adding back the order of relation arguments, pronouns, and multi-token event triggers, which have been largely ignored in previous work.",
"We also skip lines before the <text> tag (e.g., headline, datetime) as they are not annotated.",
"In addition to ACE, we derive another dataset, ERE-EN, from the Entities, Relations and Events (ERE) annotation task created under the Deep Exploration and Filtering of Test (DEFT) program because it covers more recent articles.",
"Specifi-cally, we extract 458 documents and 16,516 sentences from three ERE datasets, LDC2015E29, LDC2015E68, and LDC2015E78.",
"For ERE-EN, we keep 7 entity types, 5 relation types, 38 event types, and 20 argument roles.",
"To evaluate the portability of our model, we also develop a Chinese dataset from ACE2005 and a Spanish dataset from ERE (LDC2015E107).",
"We refer to these datasets as ACE05-CN and ERE-ES respectively.",
"We optimize our model with BertAdam for 80 epochs with a learning rate of 5e-5 and weight decay of 1e-5 for BERT, and a learning rate of 1e-3 and weight decay of 1e-3 for other parameters.",
"We use use the bert-base-multilingual-cased 3 https://www.ldc.upenn.edu/collaborations/ past-projects/ace 4 https://github.com/dwadden/dygiepp Node Step E1 1 Candidate 1 of node E1 E1 2 Candidate 2 of node E1 Node Step E1 1 E1 1 E1 2 E1 2 T1 1 T1 2 T1 1 T1 2 E1 1 E1 1 E1 2 E1 2 T1 1 T1 2 T1 1 T1 2 E1 1 E1 1 E1 2 E1 2 T1 1 T1 2 T1 1 T1 2 Edge Step Add v 1 Add v 2 Add e 1,2 Sort FAC Fine Campbell fines PER Fine Campbell fines entity entity Example: He also brought a check from Campbell to pay the fines and fees.",
"model 5 for ACE05-CN and ERE-ES, and use the bert-large-cased model for other datasets.",
"Following (Wadden et al., 2019), we use two-layer FFNs with a dropout rate of 0.4 for local classifiers.",
"We use 150 hidden units for entity and relation extraction, and 600 hidden units for event extraction.",
"For global features, we set v and e to 2 and set to 10. In our experiments, we use random seeds and report averaged scores across runs.",
"We use the same criteria as (Zhang et al., 2019; Wadden et al., 2019) for evaluation as follows.",
"Relation : A relation is correct if its relation type 5 https://huggingface.co/transformers/ pretrained_models.html is correct and the offsets of the related entity mentions are correct.",
"Trigger : A trigger is correctly identified (Trig-I) if its offsets match a reference trigger.",
"It is correctly classified (Trig-C) if its event type also matches the reference trigger.",
"Argument : An argument is correctly identified (Arg-I) if its offsets and event type match a reference argument mention.",
"It is correctly classified (Arg-C) if its role label also matches the reference argument mention.",
"In Table 3, we compare our results with two models: (1) DY GIE++ (Wadden et al., 2019), the state-of-the-art end-to-end IE model that utilizes multi-sentence BERT encodings and span graph propagation; (2) BASELINE that follows the architecture of ONEIE but only uses the output of the last layer of BERT and local classifiers.",
"We can see that our model consistently outperforms DY GIE++ and BASELINE on ACE05-R and ACE05-E.",
"In (Wadden et al., 2019), the authors show that combining triggers predicted by a four-model ensemble optimized for trigger detection can improve the performance of event extraction.",
"While we also report our results using a four-model ensemble in Table 4 for fair comparison, we hold the opinion that the single-model scores in Table 3 better reflect the actual performance of ONEIE and should be used for future comparison.",
"In Table 6 we list salient global features learned by the model.",
"Take feature #9 as an example, if a Dataset Task DY GIE++ BASELINEONEIE ACE05-R Entity 88.6 -88.8 Relation 63.4 -67.5 ACE05-E Entity 89.7 90.2 90.2 Trig-I -76.6 78.2 Trig-C 69.7 73.5 74.7 Arg-I 53.0 56.4 59.2 Arg-C 48.8 53.9 56.8 Table 3: Results on ACE2005 datasets (F-score, %).",
"candidate graph contains multiple ORG-AFF edges incident to the same node, the model will demote this graph by adding a negative value into its global score.",
"We also observe that the weights of about 9% global features are almost not updated, which indicates that they are barely found in both gold-standard and predicted graphs.",
"In Table 8, we perform qualitative analysis on concrete examples.",
"As Table 7, we evaluate the proposed framework on ACE05-CN and ERE-ES.",
"The results show that ONEIE works well on Chinese and Spanish data without any special design for the new language.",
"We also observe that adding English training data can improve the performance on Chinese and Spanish.",
"We have analyzed 75 of the remaining errors and in Figure 5 we present the distribution of various error types which need more features and knowledge acquisition to address in the future.",
"In this section, we will discuss some main categories with examples.",
"Need background knowledge .",
"Most of current IE methods ignore external knowledge such as entity attributes and scenario models.",
"For exam-Positive Feature Weight 1 ATRANSPORT event has only one DESTINATION argument 2.61 2 An ATTACK event has only one PLACE argument 2.31 3 ATRANSPORT event has only one ORIGIN argument 2.01 4 An END-POSITION event has only one PERSON argument 1.51 5 APER-SOC relation exists between two PER entities 1.08 6 AGEN-AFF relation exists between ORG and LOC entities 0.96 7 ABENEFICIARY argument is a PER entity 0.93 8 AGEN-AFF relation exists between ORG and GPE entities 0.90 Negative Feature Weight 9 An entity has an ORG-AFF relation with multiple entities -3.21 10 An entity has an PART-WHOLE relation with multiple entities -2.49 11 An event has two PLACE arguments -2.47 12 ATRANSPORT event has multiple DESTINATION arguments -2.25 13 An entity has a GEN-AFF relation with multiple entities -2.02 14 An ATTACK event has multiple PLACE arguments -1.86 15 An entity has a PHYS relation with multiple entities -1.69 16 An event has multiple VICTIM arguments -1.61 Table 6: Salient positive and negative global features.",
"ple, in the following sentence, And Putin's media aide, Sergei Yastrzhembsky, told Kommersant Russia would not forgive the Iraqi debt , our model mistakenly identifies Kommersan as a person instead of organization.",
"With entity linking, we Sentence & Analysis Baseline +Global Features #1: Russia 's foreign minister expressed outrage at suggestions from a top Washington official last week that Moscow should forgive the eight billion dollars in Soviet-era debt that Baghdad owes it, as a gesture of good will.",
"can correct this error based on the first sentence in its Wikipedia page Kommersant is a nationally distributed daily newspaper published in Russia mostly devoted to politics and business .",
"Rare words .",
"The second challenge is the famous long-tail problem: many triggers, entity mentions (e.g., caretaker, Gazeta.ru) and contextual phrases in the test data rarely appear in the training data.",
"While most event triggers are verbs or nouns, some adverbs and multi-word expressions can also serve as triggers.",
"Multiple types per trigger .",
"Some trigger words may indicate both the procedure and the result status of an action.",
"For example, named may indicate both NOMINATE and START-POSITION events; killed and eliminate may indicate both ATTACK and DIE events.",
"In these cases the human ground truth usually only annotates the procedure types, whereas our system produces the resultant event types.",
"Need syntactic structure .",
"Our model may benefit from deeper syntactic analysis.",
"For example, in the following sentence As well as previously holding senior positions at Barclays Bank, BZW and Kleinwort Benson, McCarthy was formerly a top civil servant at the Department of Trade and Industry , our model misses all of the employers Barclays Bank, BZW and Kleinwort Benson for McCarthy probably because they appear in a previous sub-sentence.",
"Uncertain events and metaphors .",
"Our model mistakenly labels some future planned events as specific events because its lacking of tense prediction and metaphor recognition.",
"For example, START-ORG triggered by formation does not happen in the following sentence The statement did not give any reason for the move, but said Lahoud would begin consultations Wednesday aimed at the formation of a new government .",
"Our model also mistakenly identifies camp as a facility, and a DIE event triggered by dying in the following sentence Russia hints peace camp' alliance with Germany and France is dying by Dmitry Zaks. .",
"The IE community is lacking of newer data sets with end-to-end annotations.",
"Unfortunately, the annotation quality of the ACE data set is not perfect due to some long-term debates on the annotation guideline;",
"e.g., Should government be tagged as a GPE or an ORG ?",
"Should dead be both an entity and event trigger?",
"Should we consider designator word as part of the entity mention or not?",
"Previous work (Roth and Yih, 2004; Li et al., 2011) encodes inter-dependency among knowledge elements as global constraints in an integer linear programming framework to effectively remove extraction errors.",
"Such integrity verification results can be used to find knowledge elements that violate the constraints and identify possible instances of detector errors or failures.",
"Inspired by these previous efforts, we propose a joint neural framework with global features in which the weights are learned during training.",
"Similar to (Li et al., 2014)'s method, ONEIE also uses global features to capture cross-subtask and cross-instance interdependencies, while our features are language-independent and do not rely on other NLP tools such as dependency parsers.",
"Our methods also differ in local features, optimization methods, and decoding procedures.",
"Some recent efforts develop joint neural models to perform extraction of two IE subtasks, such as entity and relation extraction (Zheng et al., 2017; Katiyar and Cardie, 2017; Bekoulis et al., 2018; Fu et al., 2019; Luan et al., 2019; Sun et al., 2019) and event and temporal relation extraction (Han et al., 2019).",
"Wadden et al. (2019) design a joint model to extract entities, relations and events based on BERT and dynamic span graphs.",
"Our framework extends (Wadden et al., 2019) by incorporating global features based on cross-subtask and cross-instance constraints.",
"Unlike (Wadden et al., 2019) that uses a span-based method to extract mentions, we adopt a CRF-based tagger in our framework because it can extract mentions of any length, not restricted by the maximum span width.",
"We propose a joint end-to-end IE framework that incorporates global features to capture the interdependency",
"interdependency between knowledge elements.",
"Experiments show that our framework achieves better or comparable performance compared to the state of the art and prove the effectiveness of global features.",
"Our framework is also proved to be language-independent and can be applied to other languages, and it can benefit from multi-lingual training.",
"In the future, we plan to incorporate more comprehensive event schemas that are automatically induced from multilingual multimedia data and external knowledge to further improve the quality of IE.",
"We also plan to extend our framework to more IE subtasks such as document-level entity coreference resolution and event coreference resolution.",
"This research is based upon work supported in part by U.S. DARPA KAIROS Program No.",
"FA8750-19-2-1004, U.S. DARPA AIDA Program No.",
"FA8750-18-2-0014, Air Force No.",
"FA8650-17-C-7715, the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract No.",
"FA8650-17-C-9116.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, ODNI, IARPA, or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"result",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"objective",
"method",
"objective",
"other",
"other",
"abstain",
"method",
"objective",
"abstain",
"result",
"method",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Style transfer has been widely explored in natural language generation with non-parallel corpus by directly or indirectly extracting a notion of style from source and target domain corpus.",
"A common shortcoming of existing approaches is the prerequisite of joint annotations across all the stylistic dimensions under consideration.",
"Availability of such dataset across a combination of styles limits the extension of these setups to multiple style dimensions.",
"While cascading single-dimensional models across multiple styles is a possibility, it suffers from content loss, especially when the style dimensions are not completely independent of each other.",
"In our work, we relax this requirement of jointly annotated data across multiple styles by using independently acquired data across different style dimensions without any additional annotations.",
"We initialize an encoder-decoder setup with transformer-based language model pre-trained on a generic corpus and enhance its re-writing capability to multiple target style dimensions by employing multiple style-aware language models as discriminators.",
"Through quantitative and qualitative evaluation, we show the ability of our model to control styles across multiple style dimensions while preserving content of the input text.",
"We compare it against baselines involving cascaded state-of-the-art uni-dimensional style transfer models.",
"Style transfer is a popular task in natural language processing and has been studied on attributes like age or gender (Subramanian et al., 2018), styles emanating from social construct like formality (Rao and Tetreault, 2018) and politeness (Madaan et al., 2020), linguistic styles based on author writing style (Syed et al., 2020), or psycho-linguistic styles based on personality types (Mairesse and Walker, 2011).",
"While early style transfer frameworks were modeled as a supervised learning task on a parallel corpus, state-of-the-art models are semi-supervised/unsupervised and operate on nonparallel corpus.",
"These models achieve style transfer by aligning source and target distribution of sentences from non-parallel corpus (Shen et al., 2017), disentangling content space from style space in latent representation (Hu et al., 2017) or employing self-reconstruction (Dai et al., 2019) and back translation (Lample et al., 2018) objectives to achieve pseudo-supervision with non-parallel corpus.",
"Recent works have also modeled this in a self-supervised manner where rewriting (trans-fer) is achieved by utilizing corpus from the target style alone (Syed et al., 2020).",
"These wide studies have also led to the curation and benchmarking of non-parallel dataset for various style dimensions, such as sentiment (Li et al., 2018), formality (Rao and Tetreault, 2018), politeness (Danescu-Niculescu-Mizil et al., 2013), excitement (Sancheti et al., 2020), etc.",
"But availability of data with joint tagging across multiple styles is limited and has restricted the ability of existing approaches to scale from single-dimensional transfer to multiple style dimensions.",
"In this paper, we propose a multidimensional style transfer approach that can work off partially labelled data for style transfer across multiple dimensions simultaneously.",
"The work by Subramanian et al. (2018) attempts style transfer with multiple attributes such as age, gender, and sentiment simultaneously.",
"However, their approach avails corpus tagged with each of these three style dimensions.",
"In contrast to this and other similar explorations in multi-style transfer, our approach does not require jointly labelled data across all the stylistic dimensions in source and/or target corpus.",
"We focus on the problem where independent corpus is available across different stylistic dimensions (say sentiment and formality ) and we achieve style transfer spanning different stylistic dimensions (say make a sentence more positive and formal ).",
"While state-of-the-art approaches can be extended to achieve this by sequentially transferring one style after another, it is limited as different style dimensions are not necessarily independent of each other.",
"In aspects that are not independent, changing one style aspect of the text might affect another aspect considered, making a sequential brute-force approach non-ideal.",
"As we show in our experiments later, the cascaded setup also lacks common grounding between the content from different styles leading to erratic changes in content.",
"We circumvent this by grounding our framework on the linguistic understanding of a large language model.",
"Our model builds understanding of interplay between the different styles by incorporating multiple discriminative language models (LM) with language model-based encoder-decoder setup.",
"The key contributions of this paper are:",
"1) An encoder-decoder setup with multiple language models as discriminator, with each entity harnessing the language understanding from a large pre-trained transformer model.",
"2) Relaxing the requirement of jointly labelled data for multi-style transfer, by leveraging independently acquired disjoint corpus for different styles.",
"3) Achieving better style control with better content preservation in multi-dimensional style transfer than a cascaded setup of state-of-the-art unidimensional style transfer models.",
"One line of work in style transfer attempts to learn disentangled latent representation for style and content, and transfer style by manipulating latent representation of style (Shen et al., 2017).",
"Although these approaches perform well with one style at a time, they do not trivially scale to multidimensional style transfer.",
"Several other works develop unsupervised approach for style transfer by employing Denoising Autoencoding (DAE) (Fu et al., 2017) and back-translation (BT) (Lample et al., 2018) loss to develop interaction and hence transfer between the source and target domain.",
"Subramanian et al. (2018) extend this approach to multiple styles by conditioning on average of embedding of each target attribute and using combination of DAE and back-translation techniques.",
"DAE takes as input a sentence x from style s and tries to reconstruct sentence x from its corrupted version x .",
"This relies on the assumption that the input sentence x is from a certain style combination s = { s 1 , s 2 , . . . , s k } .",
"Similarly back translation (BT) objective with input sentence x from style s , first estimates x (cid:48) = f ( x, s (cid:48) ) , where s (cid:54) = s (cid:48) and then reconstruct x from x = f ( x (cid:48) , s ) .",
"Thus, these approaches are inherently dependent on knowledge of annotation of each sentence with all the style combinations.",
"Dai et al. (2019) achieve state-of-the-art style transfer in single style dimensions by employing transformer-based model in conjunction with classifier-based discriminator.",
"In addition to discriminator losses, their proposed technique uses self-reconstruction and cycle reconstruction losses, which similar to DAE and BT losses are also reliant on availability of jointly annotated data to be extendable to multiple style setup.",
"Language modeling is integral to several natural language generation (NLG) tasks like text summarization, spelling correction, image captioning, etc.",
"The model architecture for these tasks has evolved from n-gram based methods to Recurrent Neural Networks to transformer architectures.",
"The introduction of Transformer-based architecture accompanied with generative pre-training (Radford, 2018) capabilities have led to strong improvements in many downstream generation and GLUE (Wang et al., 2018) tasks.",
"Generative pre-training aims to adapt a large Transformer language model to large unsupervised corpus.",
"This capability of generative pre-training is exploited in many large language models like BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2018), ERNIE 2.0 (Sun et al., 2020) which have the ability to perform tasks like reading comprehension (Xu et al., 2019), summarization (Liu and Lapata, 2019), question-answering (Rajpurkar et al., 2016) and translation (Clinchant et al., 2019) in zero-shot and few-shot settings.",
"Recently these pre-trained generative language models have been explored in translation (Con-neau and Lample, 2019) and style transfer tasks (Syed et al., 2020).",
"Conneau and Lample (2019) develop cross-lingual models for unsupervised machine translation by initializing encoder and decoder with a pre-trained language model trained on Masked Language Modeling (MLM) (Devlin et al., 2019) objective and fine-tuning the encoder-decoder framework with adversarial training.",
"Syed et al. (2020) extend this to stylized re-writing task by employing DAE during fine-tuning.",
"The joint encoder-decoder framework learns to reconstruct sentences in target-domain from its noisy version using DAE objective.",
"As previously discussed, the DAE objective is reliant on the corpus being tagged for the target domain style (or combination of style) and restricts the generalization of this setup to multiple attributes.",
"We overcome this by employing discriminative language models to assist the decoder with feedback for various target styles.",
"Shen et al. (2017) show that even with nonparallel data, the content distribution across source and target style is shared.",
"Based on this, a language model trained on target style will have high perplexity on transferred text if it does not match target style and low perplexity otherwise.",
"Yang et al. (2018) exploit this ability of language models to replace standard binary classifier-based discriminators with an implicitly trained language model as discriminator.",
"They show that using the language model as structured discriminator allows for more stable training by eliminating the adversarial step.",
"We extend this idea to a multi-discriminator approach.",
"Training a LM on combination of target styles is not possible in absence of jointly labelled dataset.",
"Due to this, we attempt to use multiple discriminators for each of the target styles.",
"Since with multiple styles, the underlying corpus is independently acquired, the variation in content distribution across different styles is more noticeable.",
"Consequently, an independently trained LM on one of the target styles might have high perplexity even if the transferred sentence fits in the corresponding target style, due to the content space of source sentence.",
"To equip discriminative LM with more generalized notion of content, we use large transformer-based LM pre-trained on large unsupervised corpus to establish generic content distribution before style-oriented fine-tuning.",
"Our proposed approach has two key elements a Transformer-based encoder-decoder model initialized with a pre-trained Transformer Language Model and fine-tuned on DAE loss to achieve style transfer (Section 3.1) and the multiple language models as discriminators stacked together to enable multi-style transfer (Section 3.2).",
"Similar to Syed et al. (2020), we first pre-train a Transformer-based language model with Masked Language Modeling (MLM) objective on English Wikipedia data extracted using WikiExtractor.",
"1 This equips LM with the ability to predict masked 1 https://github.com/attardi/wikiextractor words over a large corpus.",
"Masked Language Modeling leverages bidirectional context of the input, thus enabling better language understanding.",
"Following Masked Language Modeling objective from Devlin et al. (2019), we randomly sample 15% of the tokens from the text stream and replace them with the [MASK] token 80% of the time, by a random token 10% of the time and keep them unchanged 10% of the time, with the objective of predicting the original identity of the masked word based on its bidirectional context.",
"To enable style transfer from a given sentence to target style, we use independently trained language models (LMs) to initialize the encoder and decoder and connect these with randomly initialized attention layers to arrive at a encoder-decoder setup .",
"As discussed by Syed et al. (2020), the Transformer architecture (Vaswani et al., 2017) allows such independent initialization by implicitly aligning encoder-decoder layers via attention mechanism.",
"Pre-training an encoder only transformer on generative task and then leveraging it to initialize as both encoder and decoder as opposed to pretraining a joint encoder-decoder model has several advantages.",
"Transformer-based models with encoder-only (Devlin et al., 2019) or decoder-only (Radford et al., 2018) blocks have been shown to perform well in generative pre-training task.",
"Clearly, pre-training a single transformer block on generative task and then utilizing it as both encoder and decoder blocks has lower computational cost than training the entire encoder-decoder block jointly.",
"Moreover, this also enables us to use the same pre-trained model to initialize both style transfer module and the discriminator models, explained in the following section.",
"This is not only computationally more efficient but it also closely ties the underlying language distribution of the two modules.",
"This is expected to make the discriminative feedback more effective while fine tuning the transfer model for multiple styles.",
"In Syed et al. (2020)'s setup, both encoder and decoder in the style transfer module are initialized with the pre-trained language model (trained on MLM objective).",
"Instead, we initialize the decoder with the language model fine-tuned with the target style using Causal Language Modeling (CLM) objective, before training the joint encoder-decoder model, as detailed in Section 3.2.",
"The encoder is initialized with the pre-trained model directly.",
"Aligning the decoder to the distribution of the tar-.",
"get style helps speed up the fine-tuning process as decoder is more adept at generating stylized outputs.",
"This does not add to computational overhead as these fine-tuned models are repurposed as discriminators for stylistic feedback (Section 3.2).",
"To instill style-awareness to the encoder-decoder setup initialized with pre-trained Transformer models, we fine-tune it with Denoising Autoencoder (DAE) loss using the target-domain corpus.",
"In case of multiple styles, we use a randomized mixture of target-domain corpus from each of the target styles.",
"Under the DAE objective, the encoder takes a noisy masked version x of the text x as input and attempts to fill in the mask token as per the MLM objective that it was pre-trained on.",
"In turn, the decoder re-creates stylistic version of original sentence from this noisy output from the encoder.",
"The overall training objective is LDAE ( G ) = E x T [ log P G ( x | x )] , (1) where G are the trainable parameters of the encoder-decoder model.",
"The noisy version of sentence x from the target corpus T is obtained after dropping tokens from x with probability p drop and masking with a probability of p mask .",
"In conjunction, the encoder and decoder enable style transfer to the target style.",
"The noteworthy aspect here is that the model has no sense of source style and is trained to generate sentences to match the style of the target-domain corpus with which it is trained.",
"To extend the single-dimensional style transfer setup above to multi-dimensional setting, we use language models as discriminators to provide the feedback to the model for partially annotated na-ture of input data.",
"As opposed to a classifier-based discriminator, the language model as discriminator takes into account the wider language distribution of the target style.",
"Additionally, such a setup allows us to use only the target style corpus for training the transfer model, whereas the classifier would require both source and target style corpus to distinguish between a sentence as being from one style or another.",
"Inspired by Yang et al. (2018), we fine-tune a language model on the target style s i , so that the language model is equipped with language distribution of target domain data.",
"This entails generating the probability of next token, given the previous tokens also known as Causal Language Modeling objective (Conneau and Lample, 2019).",
"The training loss for the LM for target style s i with corresponding corpus T i is E x T i (cid:20) n (cid:88) t =1 [ log PLM ( x t | x 1 , . . . , x t 1 )] (cid:21) (2) We show in our experiments that such a fine-tuning step transforms language distribution of this language model to style s i and hence serve as soft-discriminator for our framework.",
"We exploit this capability of language models to imbibe style of fine-tuning corpus by employing language models as style discriminators for transferred sentences.",
"This is based on the idea that if the transferred sentence does not fit well in the target style, then the perplexity of language model fine-tuned on that style will be high (Section 4.1).",
"For k -dimensional style transfer with target styles s = { s 1 , s 2 , . . . , s k } , we independently fine-tune k language models on each of the target styles.",
"As discussed in Yang et al. (2018), we are able to forgo the adversarial training for the discriminator, since the fine-tuned discriminative language model is implicitly capable of assigning high perplexity to negative samples (out-of-style samples), as shown in Section 4.1.",
"For the transferred sentence x (cid:48) , the training objective for each target style s i is, argmin GL s i = E x T,x (cid:48) P G ( x ) (cid:20) n (cid:88) t =1 log PLM i ( x (cid:48) t | x (cid:48) 1 ,",
".., x (cid:48) t 1 ) (cid:21) (3) This dictates that transferred sentence x (cid:48) has low perplexity on the language model fine-tuned on style s i , for each target style s i .",
"However, we cannot directly find the argmin G using gradient descent because of discrete sampling of x (cid:48) P G ( x ) .",
"To account for this, we use a policy gradient reinforcement learning approach using REINFORCE algorithm (Sutton et al., 1999).",
"The reward for an input sequence x to the style discriminator LM i is calculated as, r ( x ) = n (cid:88) t =1 log PLM i ( x t | x 1 , .., x t 1 ) (4) Using these rewards, the RL objective is to minimize the loss L s i given by, L s i = E x T,x (cid:48) P G ( x ) ( r ( x (cid:48) ) r ( x )) [ log P G ( x (cid:48) | x )] (5) for style s i , where P G ( x | x ) is as in Equation 1 and r ( x (cid:48) ) is the reward in the Equation 4 for the transferred sentence x (cid:48) .",
"The rewards r ( x ) represents the baseline reward of greedily sampling the input sequence x by the style discriminator LM i .",
"For the style combination s = { s 1 , s 2 , . . . , s k } , the joint encoder-decoder model is trained on randomized mixture of data from each of the target-domain corpus.",
"The mixture is thus agnostic of individual style of each of the sentence and the discriminative LM for each style guides the generation towards that specific style by rewarding style adherence in the transferred sentence.",
"Randomized mixture of training corpus across styles allows for unified and cohesive understanding of multiple styles by diversifying rewards from different discriminators across samples.",
"The overall training loss for the joint encoder-decoder model is L = DAEE x T [ log P ( x | x )] + k (cid:88) i =1 i L s i , (6) where L s i is as defined in Equation 5, and DAE and { i } ki =1 are hyper-parameters.",
"The overall training process is summarized in Figure 1.",
"First, we pre-train a transformer model with Masked language modeling objective as shown in Figure 1(Left).",
"We then initialize discriminator model with this pre-trained language model and fine-tune it with Causal language modeling objective, shown in Figure 1(Right), for each target style.",
"Finally, we initialize the encoder and decoder of the style transfer module with the pretrained and style-specific fine-tuned language models, respectively.",
"In case of multiple styles, the decoder can be initialized with the language model which is fine-tuned with CLM loss on the mixture of data from target styles, i.e., CLM loss in Equation 2 with x T .",
"The joint encoder-decoder model (Figure 1(Centre)) is then trained with a combination of DAE objective and rewards from fine-tuned discriminators of respective target styles.",
"We experiment with a combination of sentiment and formality styles.",
"For sentiment, we use a mixture of IMDB (Maas et al., 2011) and Yelp dataset (Li et al., 2018) with 300 k examples in the positive and negative sentiment each.",
"For formality, we use GYAFC corpus (Rao and Tetreault, 2018) which has 104 k examples in each formal and informal class.",
"The test set has 3000 and 4849 examples for sentiment and formality respectively, following the data split available in Dai et al. (2019); Rao and Tetreault (2018).",
"For both datasets, the training corpus is non-parallel and the test corpus has human written references available, which we use for content evaluation (Section 4.2).",
"For pre-training, we use 12 -layer Transformer model with 512 hidden units, 16 heads, a dropout rate of 0 .",
"1 and learned positional embedding.",
"We train our models with the Adam optimizer, and Style/Dimension Sentiment % Formality % Positive 71.41 67.09 Negative 76.17 75.59 Table 1: Accuracy of sentences generated by model fine-tuned on style s i as % of generated sentences labelled as class s i by the classifier trained on the corresponding style dimension.",
"a learning rate of 10 4 .",
"To handle large vocabulary sizes, we use Byte Pair Encoding (BPE) (Sen-nrich et al., 2016) learned on the Wikipedia dataset.",
"The s in Equation 6 are determined using hyper-parameter tuning on validation set, with style transfer accuracy (Section 4.2) as search criteria.",
"To evaluate style variation across language models fine-tuned on different styles, we compare the generations of the fine-tuned models.",
"For single-dimensional style evaluation, we generate sentences from models fine-tuned on negative corpus and positive corpus and compare the style accuracy of generated sentences.",
"The style accuracy is evaluated by employing a FastText (Joulin et al., 2016) classifier trained on the corresponding style dimension.",
"For instance, the classifier for evaluating sentiment accuracy is trained on sentiment corpus tagged with positive and negative class in IMDB and Yelp data.",
"Table 1 shows the accuracy of sentences generated by a model fine-tuned on style s i as belonging to the class s i .",
"For both sentiment and formality, the fine-tuned language models are able to generate text faithful to the target style dimension.",
"Thus, we conclude that the language models trained on style s i are able to capture the essence of the corresponding style reasonably well.",
"These accuracies are an indication of the style awareness in these fine-tuned LMs.",
"We, therefore, employ the perplexities of these fine-tuned language models to gauge the style of the input text to guide our style transfer model.",
"As discussed in discriminative modeling (Section 3.2), the model fine-tuned with corpus from a certain style is expected to have high perplexity on sentence not from that style and low perplexity otherwise.",
"To this end, we experiment with two models independently fine-tuned on positive and negative corpus.",
"We calculate the perplexity of each of these models on the test corpus from the same style and from the opposite style.",
"As seen in Table 2, the perplexity for each model is substantially lower on the same corpus as compared to that on the opposite corpus.",
"This implies that a language model fine-tuned on positive corpus shows higher perplexity for negative sentences and lower for positive sentences and vice versa.",
"This corroborates the effectiveness of these fine-tuned language models to serve as discriminators for training the style transfer module.",
"We measure the performance of our model and the baselines based on the style control, content preservation and fluency.",
"To measure the accuracy of style transfer , we train two Fasttext 2 classifiers independently for sentiment and formality using the train corpus, as described in Section 4.1.",
"These classifiers have accuracy of 93 .",
"74% and 88 .",
"95% respectively on test corpus of respective datasets.",
"We note that formality as a style is more intricately designed, so we also check lexical scoring by Brooke et al. (2010) to evaluate formality, which uses a formality lexicon to assign formality score between 1 (informal) and 1 (formal) to each word and averages it.",
"We scale these scores between 0 100 , where higher ( 100 ) lexical score signifies formal style and lower (0) score signifies informal style.",
"For informal target style, we report lexical score as 100 n , so that a higher average lexical score signifies a better transfer for either polarity.",
"To measure content preservation on transfer, we calculate the BLEU score (Papineni et al., 2002) between the transferred sentence and the input sentence (self-BLEU) .",
"Besides this, we also calculate BLEU score between the transferred sentence generated by our model and the corresponding human reference transferred sentence, available for GYAFC and Yelp corpus (ref-BLEU) .",
"Since both these corpus account for transfer across only one style dimension each, the provided references are only partial indication of expected outcome.",
"This 2 https://github.com/facebookresearch/fastText Model Style Accuracy Content Preservation Fluency Classifier Lexical Scoring BLEU Perplexity Sentiment Formality Formality -self -ref Cascaded Style Transformer 72 .",
"is also apparent from low ref-BLEU scores for our model as well as baselines.",
"Since, the results are presented on aggregated dataset from both these style dimensions, this evaluation is still able to provide reasonable indication of content preservation.",
"To measure the fluency of the text, we calculate perplexity assigned to the generated text sequence by a language model trained on the train corpus, as is standard in style transfer literature (Dai et al., 2019; Subramanian et al., 2018).",
"The perplexity is the measure of log likelihood of the generated sentence on the language model.",
"A lower perplexity is indicative of a more fluent sentence.",
"We use a generative transformer-based language model trained on the dataset combined from two styles.",
"Dai et al. (2019) use transformer-based model ( Style Transformer ) for single-dimensional style transfer.",
"We train two independent Style Transformer models for sentiment and formality transfer and then perform transfer one after another to compare results with our model.",
"We term this as Cascaded Style Transformer setup.",
"The Style Transformer model is shown to have state-of-the-art performance in single-dimensional style transfer; thus it provides an estimate of the performance of sequential single style transfer.",
"We also experiment with Adapted Rewriting LM (Syed et al., 2020) as another baseline.",
"Their work on style rewriting to match author-specific style does not require explicit annotations for the various aspects that constitutes an author's style, but is based on the assumption that the training corpus reflects the target style.",
"In this context, we train their framework on the mixture of data from the respective target styles and report the performance.",
"These are the closest baselines to our proposed approach, since other works dealing with multi-style transfer assume presence of jointly annotated dataset, which is a stronger assumption that we aim to relax.",
"In addition to our proposed model with multiple style transfer, we also train our encoder-decoder architecture with single discriminative LM for one style at a time and perform two stage transfer, similar to one with Cascaded Style Transformer (Dai et al., 2019) setup.",
"The results in Table 3 show that our model achieves better style control than the Cascaded Style Transformer (Dai et al., 2019) as well as the joint transfer using Syed et al. (2020) for both sentiment and formality.",
"As seen in Table 3, cascaded style transfer models perform poorly on content preservation.",
"This is because transferring style one after other leads to huge loss in content, thus both the two-stage models score lower on content preservation metrics, both w.r.t. the input text and the reference transferred text.",
"This demonstrates the advantage of using single model to control for multiple styles.",
"The effect can also be observed in Table 4 which demonstrates qualitative results for Cascaded Style Transformer model and our model.",
"We can see in many cases content loses the underlying meaning of source sentence during the two-stage transfer, whereas our model is able to retain original meaning of the sentence well, corroborating the findings of automatic evaluation.",
"Among the cascaded models, the Discriminative LM scores marginally better on content preservation than the Style Transformer model.",
"We attribute this to initialization with the same pre-trained LM resulting in shared content space in the underlying single style transfer models.",
"However, due to independent training of the two single style transfer models, they are not able to model interplay between these styles and hence perform worse on style control than our proposed model trained jointly on multiple styles.",
"ples in Table 4, where sentences generated by Cascaded Style Transformer are much less coherent.",
"Qualitative experiments also highlight the ability of our model to incorporate intricacies of formality stylistic dimension (shown in bold) better than the Cascaded Style Transformer model.",
"Among single step transfer models (Syed et al. (2020) and our proposed approach), we note that content preservation is marginally better for Syed et al. (2020)'s model, however, our model is able to yield much better style transfer owing to feedback on style control by multiple discriminators.",
"To augment automatic evaluation results, we conduct a human study to evaluate the model outputs across various dimensions such as content preservation, style control, fluency, and overall transfer",
"transfer quality.",
"Based on comparable style control in Cascaded Style Transformer and our proposed approach on automatic metrics, we compare the transfer quality across these two models by a small-scale human study.",
"We select 40 sentences, with 10 examples from each combinations of sentiment and formality as target style, and collect annotations from 4 5 participants for each example.",
"Out of resulting annotations, more than 85% annotations favoured our results over baseline.",
"The average participant rating across different dimensions is shown in Table 5.",
"We test the statistical significance of these results using z-test statistic.",
"With = 0 .",
"05 , the preferences indicated in human study are sig-nificant across all metrics.",
"These results are in line with our automatic evaluations and add confidence to the efficacy of our proposed approach in achieving style transfer across multiple dimensions.",
"We propose an approach to extend currently existing style transfer work to multiple style setting without imposing any extra constraints on availability of dataset.",
"Our method makes use of disjoint corpus from separate styles to enable one step transfer across multiple target styles.",
"We exploit multiple discriminative language models with an encoder-decoder framework, all emerging from large transformer-based language models pretrained on Masked Language Modeling objective and fine-tuned separately for transfer and discriminative purposes.",
"We show that unified single step transfer approach is able to achieve better transfer while offering much better content preservation which is paramount to any style transfer task.",
"Further improvements are in scope for adding modularity to the proposed transfer module.",
"In the current setup, each version of model is trained for a specific combination of target style(s).",
"The utility of such a model increases manifold with added ease of transfer across multiple style combinations within a single model.",
"This could be attempted by employing a controlled language model as a unified discriminator for multiple styles, which would be the subject of further research.",
"Ethics Statement.",
"We recognise the ethical implication of employing large language models trained on data infused with unchecked biases.",
"As with any generative task, style transfer too suffers from the potential misuse for fact distortion, plagiarism and more.",
"The paper aims at establishing academic utility of proposed framework.",
"To meet ethical standards, this solution has to coupled with strict misrepresentation, offensiveness and bias checks."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"result",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"other",
"other",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain"
] |
[
"Open-domain code generation aims to generate code in a general-purpose programming language (such as Python) from natural language (NL) intents.",
"Motivated by the intuition that developers usually retrieve resources on the web when writing code, we explore the effectiveness of incorporating two varieties of external knowledge into NL-to-code generation: automatically mined NL-code pairs from the online programming QA forum StackOverflow and programming language API documentation.",
"Our evaluations show that combining the two sources with data augmentation and retrieval-based data re-sampling improves the current state-of-the-art by up to 2.2% absolute BLEU score on the code generation testbed CoNaLa.",
"The code and resources are available at https://github.com/ neulab/external-knowledge-codegen .",
"Semantic parsing, the task of generating machine executable meaning representations from natural language (NL) intents, has generally focused on limited domains (Zelle and Mooney, 1996; Deborah A. Dahl and Shriber, 1994), or domain-specific languages with a limited set of operators (Berant et al., 2013; Quirk et al., 2015; Dong and Lap-ata, 2016; Liang et al., 2017; Krishnamurthy et al., 2017; Zhong et al., 2017; Yu et al., 2018, 2019b,a).",
"However, recently there has been a move towards applying semantic parsing to automatically generating source code in general-purpose programming languages (Yin et al., 2018; Yao et al., 2018; Lin et al., 2018; Agashe et al., 2019; Yao et al., 2019).",
"Prior work in this area (Xiao et al., 2016; Ling et al., 2016; Rabinovich et al., 2017; Yin and Neubig, 2017, 2018; Dong and Lapata, 2018; Suhr et al., The first two authors contributed equally. Annotated pairs <code, NL> External Knowledge Resources: Pre-train Mined pairs from Parsed pairs from API docs Text-to-Code Gen. Model Noisy but real-use distributed Clean but uniformly distributed Re-sampling w/ Real Distribution Human Curated Data: Real Distribution Estimation Fine-tune Figure 1: Our approach: incorporating external knowledge by data re-sampling, pre-training and fine-tuning. 2018; Iyer et al., 2018; Yin and Neubig, 2019) used a variety of models, especially neural architectures, to achieve good performance.",
"However, open-domain code generation for general-purpose languages like Python is challenging.",
"For example, given the intent to choose a random file from the directory contents of the C drive, C: \\\\ ' , one would expect the Python code snippet random.choice(os.listdir(C: \\\\ ')) , that realizes the given intent.",
"This would involve not just generating syntactically correct code, but also using (and potentially combining) calls to APIs and libraries that implement some of the desired functionality.",
"As we show in 3, current code generation models still have difficulty generating the correct function calls with appropriate argument placement.",
"For example, given the NL intent above, although the state-of-the-art model by Yin and Neubig (2018) that uses a transition-based method to generate Python abstract syntax trees is guaranteed to generate syntactically correct code, it still incorrectly outputs random.savefig(random(",
"compile(open(C:\\\\'))+100).isoformat()) .",
"A known bottleneck to training more accurate code generation models is the limited number of manually annotated training pairs available in existing human-curated datasets, which are insufficient to cover the myriad of ways in which some complex functionality could be implemented in code.",
"However, increasing the size of labeled datasets through additional human annotation is relatively expensive.",
"It is also the case that human developers rarely reference such paired examples of NL and code, and rather take external resources on the web and modify them into the desired form (Brandt et al., 2009, 2010; Gu et al., 2016).",
"Motivated by these facts, we propose to improve the performance of code generation models through a novel training strategy: pretraining the model on data extracted automatically from external knowledge resources such as existing API documentation, before fine-tuning it on a small manually curated dataset ( 2.1).",
"Our approach, outlined in Figure 1, combines pairs of NL intents and code snippets mined automatically from the Q&A website StackOverflow ( 2.2), and API documentation for common software libraries ( 2.3).",
"1 While our approach is model-agnostic and generally applicable, we implement it on top of a state-of-the-art syntax-based method for code generation, TranX (Yin and Neubig, 2018), with additional hypothesis reranking (Yin and Neubig, 2019).",
"Experiments on the CoNaLa benchmark (Yin et al., 2018) show that incorporating external knowledge through our proposed methods increases BLEU score from 30.1 to 32.3, outperforming the previous state-of-the-art model by up to 2.2% absolute.",
"Qualitatively analyzing a sample of code snippets generated by our model reveals that the generated code is more likely to use the correct API calls for desired functionality and to arrange arguments in the right order.",
"The overall strategy for incorporating external knowledge that we take on this work is to (1) pretrain the model on the NL-code pairs obtained from external resources, then (2) fine-tune on a small manually curated corpus.",
"This allows the model to first learn on larger amounts of potentially noisy data, while finally being tailored to the actual NL and code we want to model at test time.",
"In order to perform this pre-training we need to convert external data sources into NL-code pairs, and we describe how to do so in the following sections.",
"When developers code, most will inevitably search online for code snippets demonstrating how to achieve their particular intent.",
"One of the most 1 Of course external knowledge for code covers a large variety of resources, other than these two types.",
"prominent resources online is StackOverflow, 2 a popular programming QA forum.",
"However, it is not the case that all code on StackOverflow actually reflects the corresponding intent stated by the questioner some may be methods defining variables or importing necessary libraries, while other code may be completely irrelevant.",
"Yin et al. (2018) propose training a classifier to decide whether an NL-code pair is valid, resulting in a large but noisy parallel corpus of NL intents and source code snippets.",
"The probability assigned by the method can serve as confidence, representing the quality of the automatically mined NL-code pairs.",
"We use these mined pairs as a first source of external knowledge.",
"Second, motivated by the intuition that much of modern software development relies on libraries, and that developers often turn to programming language and software library references for help while writing code, we consider API documentation as another source of external knowledge.",
"Figure 2 shows some examples from the Python standard library API documentation.",
"It contains descriptions of libraries, classes, methods, functions, and arguments.",
"The documentation is already in a paired form consisting of code signatures and their descriptions.",
"However, the signatures shown in the documentation mainly provide the prototype of the API rather than valid API usages appearing in source code.",
"The text descriptions in the documentation tend to be verbose for clarity, while real questions from developers are usually succinct.",
"We use a few heuristics to transform these to emulate 2 https://stackoverflow.com real inputs a code generation system may face.",
"Most APIs define required and optional arguments in the signature.",
"In real usage, developers usually provide none or only some of those arguments.",
"To simulate this, we permute all possible combinations (with a limit) of the optional arguments and append them to the required arguments, following correct syntax.",
"For class constructors and methods, we create a heuristic variable name based on the class name to store the instantiated class object and to call methods upon.",
"To make concise description for each code snippet created, we preserve only the first sentence in the corresponding documentation, as well as the first sentences that contain mentions of each argument in the snippet.",
"In the rare case where arguments are not found in the original description, we add another sentence containing these arguments to the end of the NL snippet, ensuring all variables in code are covered in the NL.",
"We detail this process in Appendix A. 2.4 Re-sampling API Knowledge External knowledge from different sources has different characteristics.",
"NL-code pairs automatically mined from StackOverflow are good representatives of the questions that developers may ask, but are inevitably noisy.",
"NL-code pairs from API documentation are clean, but there may be a topical distribution shift from real questions asked by developers.",
"For example, the library curses has significantly more API entries than json (178 vs. 17), 3 while json is more frequently asked about and used.",
"This distributional shift between pretraining and fine-tuning causes performance degradation, as shown later in 3.2.",
"To mitigate this problem, we propose a retrieval-based re-sampling method to close the gap between the API documentation and the actual NL-code pairs we want to model.",
"We use both human annotated data D ann and mined data D mine to model the distribution of NL-code pairs because they are both produced by real users.",
"For each sample in this real usage distribution, we retrieve k NL-code pairs from the set of pairs harvested from API documentation DAPI and aggregate the frequencies of each pair y DAPI being retrieved: freq ( y ) = X x D ann+mined ( y R ( x, DAPI , k )) , 3 https://docs.python.org/3.7/library/ curses.html and https://docs.python.org/3.",
"7/library/json.html where R ( x, DAPI , k ) retrieves the top k most similar samples from DAPI given x , either according to NL intent or code snippet.",
"( ) is Kronecker's delta function, returning 1 if the internal condition is true, and 0 otherwise.",
"We use the BM25 retrieval algorithm (Jones et al., 2000) implemented in ElasticSearch.",
"4 We take this frequency and calculate the probability distribution after smoothing with a temperature [1 , ] : P ( y ) = freq ( y ) 1 / / X y D API freq ( y ) 1 / As changes from 1 to , P ( y ) shifts from a distribution proportional to the frequency to a uniform distribution.",
"Using this distribution, we can sample NL-code pairs from the API documentation that are more likely to be widely-used API calls.",
"Dataset and Metric: Although the proposed approach is generally applicable and model-agnostic, for evaluation purposes, we choose CoNaLa (Yin et al., 2018) as the human-annotated dataset (2,179 training, 200 dev and 500 test samples).",
"It covers real-world English queries about Python with diverse intents.",
"We use the same evaluation metric as the CoNaLa benchmark, corpus-level BLEU calculated on target code outputs in test set.",
"Mined Pairs: We use the CoNaLa-Mined (Yin et al., 2018) dataset of 600K NL-code pairs in Python automatically mined from StackOverflow ( 2.2).",
"We sort all pairs by their confidence scores, and found that approximately top 100K samples are of reasonable quality in terms of code correctness and NL-code correspondence.",
"We therefore choose the top 100K pairs for the experiments.",
"API Documentation Pairs: We parsed all the module documentation including libraries, builtin types and functions included in the Python 3.7.5 distribution.",
"5 After pre-processing ( 2.3), we create about 13K distinct NL-code pairs (without resampling) from Python API documentation.",
"For fair comparison, we also sample the same number of pairs for the re-sampling setting ( 2.4).",
"Methods: We choose the current state-of-the-art NL-to-code generation model TranX (Yin and Neubig, 2018) with hypothesis reranking (Yin and Neubig, 2019) as the base model.",
"Plus, we incorporate length normalization (Cho et al., 2014) to prevent beam search from favoring shorter results over longer ones.",
"Man denotes training solely on CoNaLa.",
"Man+Mine refers to first pre-training on mined data, then fine-tuning on CoNaLa.",
"Man+Mine+API combines both mined data and API documentation for pre-training.",
"As a comparison to our distribution-based method (de-noted by dist. , 2.4), we also attempt to directly retrieve top 5 NL-code pairs from API documents (denoted by direct ).",
"6 Implementation Details: We experiment with k = { 1 , 3 , 5 } and = { 1 , 2 , 5 } in re-sampling, and find that k = 1 and = 2 perform the best.",
"We follow the original hyper-parameters in TranX, except that we use a batch size of 64 and 10 in pre-training and fine-tuning respectively.",
"Results are summarized in Table",
"1. We can first see that by incorporating more noisy mined data during pre-training allows for a small improvement due to increased coverage from the much larger training set.",
"Further, if we add the pairs harvested from API docs for pre-training without re-sampling the performance drops, validating the challenge of distributional shift mentioned in 2.4.",
"Comparing the two re-sampling strategies direct vs. dist.",
", and two different retrieval targets NL intent vs. code snippet, we can see that dist. performs better with the code snippet as the retrieval target.",
"We expect that using code snippets to re-6 We choose 5 to obtain comparable amount of pairs.",
"trieve pairs performs better because it makes the generation target , the code snippet, more similar to the real-world distribution, thus better training the decoder.",
"It is also partly because API descriptions are inherently different than questions asked by developers (e.g. they have more verbose wording), causing intent retrieval to be less accurate.",
"Lastly, we apply hypothesis reranking to both the base model and our best approach and find improvements afforded by our proposed strategy of incorporating external knowledge are mostly orthogonal to those from hypothesis reranking.",
"After showing the effectiveness of our proposed re-sampling strategy, we are interested in the performance on more-used versus less-used APIs for the potentially skewed overall performance.",
"We use string matching heuristics to obtain the standard Python APIs used in the dataset and calculated the average frequency of API usages in each data instance.",
"We then select the top 200 and the bottom 200 instances out of the 500 test samples in terms of API usage frequencies.",
"Before and after adding API docs into pre-training, the BLEU score on both splits saw improvements: for high-frequency split, it goes from 28.67 to 30.91 and for low-frequency split, it goes from 27.55 to 30.05, indicating that although the re-sampling would skew towards high-frequency APIs, with the appropriate smoothing temperature experimentation, it will still contribute to performance increases on low-frequency APIs.",
"Besides using BLEU scores to perform holistic evaluation, we also perform more fine-grained analysis of what types of tokens generated are improving.",
"We apply heuristics on the abstract syntax tree of the generated code to identify tokens for API calls and variable names in the test data, and calculated the token-level accuracy for each.",
"The API call accuracy increases from 31.5% to 36.8% and the variable name accuracy from 41.2% to 43.0% after adding external resources, meaning that both the API calls and argument usages are getting better using our approach.",
"We further show selected outputs from both the baseline and our best approach in Table",
"2. In general, we can see that the NL to code generation task is still challenging, especially with more complex intents that require nested or chained API calls, or functions with more arguments.",
"The vanilla model already can generate basic functions and Open a file f.txt in write mode.",
"f=open(f.txt', w') f=open(f.txt', f.txt') f=open(f.txt', w') lower a string text and remove non-alphanumeric characters aside from space.",
"re.sub(r[^ \\ sa zA Z0 9]', ', text).",
"lower().strip() text.decode.translate(text.strip(),non-alphanumeric',') re.sub(r[^ \\ sa zA Z0 9]', ', text) choose a random file from the directory contents of the C drive, C: \\\\ '.",
"random.choice(os.listdir(C: \\\\ ')) random.savefig(random(compile(open(C: \\\\ ')",
")+100).isoformat()) random.choice(os.path.expanduser(C: \\\\ ')) Table 2: Examples, where is the ground-truth code snippet, is the original output, and is the output with our proposed methods.",
"copy strings/variables to the output, but we observe that incorporating external knowledge improves the results in two main ways: 1) better argument placement for APIs, and 2) better selection of which API call should be used for a certain intent.",
"In the first example, we can see that although the baseline gets the function call open() correct, it fails to generate the correct second argument specifying write mode, while our approach is able to successfully generate the appropriate w' .",
"In the second and third example, we can see that the baseline uses the wrong API calls, and sometimes makes up APIs on its own (e.g. random.savefig() ).",
"However, our ap-proach's outputs, while not perfect, are much more successful at generating correct API calls that actually exist and make sense for the intent.",
"On a closer look, we can observe that both the addition of mined examples and API docs may have brought the improvement.",
"The example of the open() function added from API docs uses the default mode r , so learning the meaning of w argument is due to the added mined real examples, but learning the argument placement (first file name as a string, second a shorthand mode identi-fier as a character) may have occurred from the API docs.",
"In other examples, random.choice() and re.sub() both are Python standard library APIs so they are included in the API doc examples.",
"We proposed a model-agnostic approach based on data augmentation, retrieval and data re-sampling, to incorporate external knowledge into code generation models, which achieved state-of-the-art results on the CoNaLa open-domain code generation task.",
"In the future, evaluation by automatically executing generated code with test cases could be a better way to assess code generation results.",
"It will also likely be useful to generalize our re-sampling procedures to zero-shot scenarios, where a programmer writes a library and documents it, but nobody has used it yet.",
"For example, developers may provide relative estimates of each documented API usages to guide the re-sampling; or we could find nearest neighbors to each API call in terms of semantics and use existing usage statistics as estimates to guide the re-sampling.",
"This research was supported by NSF Award No. 1815287 Open-domain, Data-driven Code Synthesis from Natural Language."
] | [
"abstain",
"objective",
"result",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"result",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"other"
] |
[
"Text-based games provide an interactive way to study natural language processing.",
"While deep reinforcement learning has shown effectiveness in developing the game playing agent, the low sample efficiency and the large action space remain to be the two major challenges that hinder the DRL from being applied in the real world.",
"In this paper, we address the challenges by introducing world-perceiving modules, which automatically decompose tasks and prune actions by answering questions about the environment.",
"We then propose a two-phase training framework to decouple language learning from reinforcement learning, which further improves the sample efficiency.",
"The experimental results show that the proposed method significantly improves the performance and sample efficiency.",
"Besides, it shows robustness against compound error and limited pre-training data.",
"Text-based games are simulated environments where the player observes textual descriptions, and acts using text commands (Hausknecht et al., 2020; Urbanek et al., 2019).",
"These games provide a safe and interactive way to study natural language understanding, commonsense reasoning, and dialogue systems.",
"Besides language processing techniques, Reinforcement Learning has become a quintessential methodology for solving text-based games.",
"Some RL-based game agents have been developed recently and proven to be effective in handling challenges such as language representation learning and partial observability (Narasimhan et al., 2015; Fang et al., 2017; Ammanabrolu and Riedl, 2019).",
"Despite the effectiveness, there are two major challenges for RL-based agents, preventing them from being deployed in real world applications: the low sample efficiency , and the large action space (Dulac-Arnold et al., 2021).",
"The low sample Corresponding author efficiency is a crucial limitation of RL which refers to the fact that it typically requires a huge amount of data to train an agent to achieve human-level performance (Tsividis et al., 2017).",
"This is because human beings are usually armed with prior knowledge so that they don't have to learn from scratch (Dubey et al., 2018).",
"In a language-informed RL system, in contrast, the agent is required to conduct both language learning and decision making regimes, where the former can be considered as prior knowledge and is much slower than the later (Hill et al., 2021).",
"The sample efficiency could be improved through pre-training methods, which decouple the language learning from decision making (Su et al., 2017).",
"The selection of pre-training methods thus plays an important role: if the pre-trained modules perform poorly on unseen data during RL training, the incurred compound error will severely affect the decision making process.",
"Another challenge is the large discrete action space: the agent may waste both time and training data if attempting irrelevant or inferior actions (Dulac-Arnold et al., 2015; Zahavy et al., 2018).",
"In this paper, we aim to address these two challenges for reinforcement learning in solving text-based games.",
"Since it is inefficient to train an agent to solve complicated tasks (games) from scratch, we consider decomposing a task into a sequence of subtasks as inspired by (Andreas et al., 2017).",
"We design an RL agent that is capable of automatic task decomposition and subtask-conditioned action pruning, which brings two branches of benefits.",
"First, the subtasks are easier to solve, as the involved temporal dependencies are usually short-term.",
"Second, by acquiring the skills to solve subtasks, the agent will be able to learn to solve a new task more quickly by reusing the learnt skills (Bar-reto et al., 2020).",
"The challenge of large action space can also be alleviated, if we can filter out the actions that are irrelevant to the current subtask.",
"ings can understand the environment conditions through question answering (Das et al., 2020; Ammanabrolu et al., 2020), we design world-perceiving modules to realize the aforementioned functionalities (i.e., task decomposition and action pruning) and name our method as Q uestion-guided W orld-perceiving A gent (QWA) * .",
"Fig. 1",
"(b) shows an example of our decision making process.",
"Being guided by some questions, the agent first decomposes the task to obtain a set of available subtasks, and selects one from them.",
"Next, conditioned on the selected subtask, the agent conducts action pruning to obtain a refined set of actions.",
"In order to decouple language learning from decision making, which further improves the sample efficiency, we propose to acquire the world-perceiving modules through supervised pre-training.",
"We design a two-phase framework to train our agent.",
"In the first phase, a dataset is built for the training of the world-perceiving modules.",
"In the second phase, we deploy the agent in games with the pre-trained modules frozen, and train the agent through reinforcement learning.",
"We conduct experiments on a series of cooking games.",
"We divide the games as simple games and complex games, and construct the pre-training dataset from simple games only.",
"The experimental results show that QWA achieves high sample efficiency in solving complex games.",
"We also show that our method enjoys robustness against compound error and limited pre-training data.",
"Our contributions are summarized as follows: Firstly, we develop an RL agent featured with question-guided task decomposition and action space reduction.",
"Secondly, we design a two-phase * Code is available at: https://github.com/ YunqiuXu/QWA framework to efficiently train the agent with limited data.",
"Thirdly, we empirically validate our method's effectiveness and robustness in complex games.",
"The RL agents for text-based games can be divided as text-based agents and KG-based agents based on the form of observations.",
"Compared with the text-based agents (Narasimhan et al., 2015; Yuan et al., 2018; Adolphs and Hofmann, 2020; Jain et al., 2020; Yin and May, 2019; Xu et al., 2020a; Guo et al., 2020), which take the raw textual observations as input to build state representations, the KG-based agents construct the knowledge graph and leverage it as the additional input (Ammanabrolu and Riedl, 2019; Xu et al., 2020b).",
"By providing structural and historical information, the knowledge graph helps the agent to handle partial observability, reduce action space, and improve generalizability across games.",
"Based on how actions are selected, the RL agents can also be divided as parser-based agents, choice-based agents, and template-based agents.",
"The parser-based agents generate actions word by word, leading to a huge combinatorial action space (Kohita et al., 2021).",
"The choice-based agents circumvent this challenge by assuming the access to a set of admissible actions at each game state (He et al., 2016).",
"The template-based agents achieve a trade-off between the huge action space and the assumption of admissible action set by introducing the template-based action space, where the agent selects first a template, and then a verb-object pair either individually (Hausknecht et al., 2020) or conditioned on the selected template (Ammanabrolu and Hausknecht, 2020).",
"In this work, we aim to improve the sam-539 ple efficiency and reduce the action space through pre-training.",
"Being agnostic about the form of observations and the action selecting methods, our work complements the existing RL agents.",
"Our work is closely related to task decomposition (Oh et al., 2017; Shiarlis et al., 2018; Sohn et al., 2018) and hierarchical reinforcement learning (Dayan and Hinton, 1992; Kulkarni et al., 2016; Vezhnevets et al., 2017).",
"Similar to our efforts, Jiang et al. (2019) and Xu et al. (2021) designed a meta-policy for task decomposition and subtask selection, and a sub-policy for goal-conditioned decision making.",
"Typically, these works either assume the access to a set of available subtasks, or decompose a task through pre-defined rules, while we aim to achieve automatic task decomposition through pre-training, and remove the requirement for expert knowledge during reinforcement learning.",
"Besides, existing work assumes that unlimited interaction data can be obtained to train the whole model through RL.",
"In contrast, we consider the more practical situation where the interaction data is limited, and focus on improving the RL agent's data efficiency.",
"Regarding the sub-policy, we do not assume the access to the termination states of the subtasks.",
"We also do not require additional handcrafted operations in reward shaping (Bah-danau et al., 2019).",
"There have been a wide range of work studying pre-training methods or incorporating pre-trained modules to facilitate reinforcement learning (Ey-senbach et al., 2018; Hansen et al., 2019; Sharma et al., 2019; Gehring et al., 2021; Liu et al., 2021; Schwarzer et al., 2021).",
"One major branch among them is Imitation Learning (IL), where the agent is trained to imitate human demonstrations before being deployed in RL (Hester et al., 2018; Zhu et al., 2018; Reddy et al., 2019).",
"Although we also collect human labeled data for pre-training, we leverage the data to help the agent to perceive the environment instead of learning the solving strategies.",
"Therefore, we do not require the demonstrations to be perfect to solve the game.",
"Besides, our method prevails when pre-trained on simple tasks rather than complicated ones, making it more feasible for human to interact and annotate (Arumugam et al., 2017; Mirchandani et al., 2021).",
"Further discussions to compare our method with IL are provided in subsequent sections.",
"In the domain of text-based games, some prior works have involved pre-training tasks such as state representation learning (Ammanabrolu et al., 2021; Singh et al., 2021), knowledge graph constructing (Murugesan et al., 2021) and action pruning (Hausknecht et al., 2019; Tao et al., 2018; Yao et al., 2020).",
"For example, Ammanabrolu et al. (2020) designed a module to extract triplets from the textual observation by answering questions, and use these triplets to update the knowledge graph.",
"As far as we know, we are the first to incorporate pre-training based task decompositon in this domain.",
"Besides, instead of directly pruning the actions based on the observation, we introduce subtask-conditioned action pruning to further reduce the action space.",
"POMDP Text-based games can be formulated as a Partially Observable Markov Decision Processes (POMDPs) (Ct et al., 2018).",
"A POMDP can be described by a tuple G = S , A , P, r, , O, , with S representing the state set, A the action set, P ( s | s, a ) : S A S (cid:55) R + the state transition probabilities, r ( s, a ) : S A (cid:55) R the reward function, the observation set, O the conditional observation probabilities, and (0 , 1] the discount factor.",
"At each time step, the agent receives an observation o t based on the probability O ( o t | s t , a t 1 ) , and select an action a t A .",
"The environment will transit into a new state based on the probability T ( s t +1 | s t , a t ) , and return a scalar reward r t +1 .",
"The goal of the agent is to select the action to maximize the expected cumulative discounted rewards: R t = E [ (cid:80) t =0 k r t ] .",
"Observation form In text-based games, the observation can be in the form of text, knowledge graph, or hybrid.",
"Fig. 1",
"(a) shows an example of the textual observation and the corresponding KG-based observation.",
"We do not make assumptions about the observation form and our method is compatible with any of those forms.",
"Problem setting We aim to design an RL-based agent that is able to conduct automatic task decomposition and action pruning in solving text-based games.",
"We consider games sharing similar themes and tasks, but varying in their complexities (Ad-hikari et al., 2020; Chen et al., 2021).",
"Taking the cooking games (Ct et al., 2018) as an example, 540 Figure 2: Subtasks for solving",
"the task is always make the meal.",
"To accomplish this task, the agent has to explore different rooms to collect all ingredients, prepare them in right ways, and make the meal.",
"A game's complexity depends on the number of rooms, ingredients, and the required preparation steps.",
"We define a subtask as a milestone towards completing the task (e.g., get apple if apple is included in the recipe), and a subtask requires a sequence of actions to accomplish (e.g., the agent has to explore the house to find the apple).",
"A game is considered simple, if it consists of only a few subtasks, and complex if it consists of more subtasks.",
"Fig. 2 gives examples of simple games and complex games.",
"While being closer to real world applications, complex games are hard to solve by RL agents because: 1) it's expensive to collect sufficient human labeled data for pre-training; 2) it's unrealistic to train an RL agent from scratch.",
"We therefore focus on agent's sample efficiency and performance on complex games.",
"Our objective is to leverage the labeled data collected from simple games to speed up RL training in complex games, thus obtaining an agent capable of complex games.",
"For more details and statistics of the simple / complex games used in our work, please refer to Sec. 5.1.",
"Fig. 3 shows the overview of our QWA agent.",
"We consider two world-perceiving modules: a task selector and an action validator.",
"Given the observation o t and the task candidate set T , we use the task selector to first obtain a subset of currently available subtasks T t T , then select a subtask T t T t .",
"Given T t and the action candidate set A , we use the action validator to get an action subset A t A , which contains only those relevant to the subtask T t .",
"Finally, given o t and T t , we use an action selector to score each action a A t , and the action with the highest score will be selected as a t .",
"The training of the world-perceiving modules can be regarded as the language learning regime, while the training of the action selector can be regarded as the decision making regime.",
"We consider a two-phase training strategy to decouple these two regimes to further improve the sample efficiency (Hill et al., 2021).",
"In the pre-training phase, we collect human interaction data from the simple games, and design QA datasets to train the world-perceiving modules through supervised learning.",
"In the reinforcement learning phase, we freeze the pre-trained modules, and train the action selector in the complex games through reinforcement learning.",
"Depending on the experiment settings, T and A can be either fixed vocabulary sets (parser-based), or changing over time (choice-based).",
"We regard a subtask available if it is essential for solving the global task, and there's no prerequisite subtask.",
"For example, the subtask get apple in Fig. 1, as the object apple is an ingredient which has not been collected.",
"Although another subtask dice apple is also essential for making the meal, it is not available since there exists a prerequisite subtask (i.e., you should collect the apple before dicing it).",
"The aim of the task selector is to identify a subset of available subtasks T t T , and then select one subtask T t T t .",
"We formulate the mapping f ( o t , T ) T t as a multi-label learning problem (Zhang and Zhou, 2013).",
"For simplicity, we assume that the subtask candidates are independent with each other.",
"Thus, the multi-label learning problem can be decomposed as |T | binary classification problems.",
"Inspired by the recent progress of question-conditional probing (Das et al., 2020), language grounding (Hill et al., 2021), and QA-based graph construction (Ammanabrolu et al., 2020), we cast these binary classification problems as yes-or-no questions, making the task selector a world-perceiving module.",
"For example, the corresponding question for the subtask candidate get apple could be Whether get apple' is an available sub-task?.",
"This module can guide the agent to under-541 Figure 3: The overview of QWA.",
"stand the environment conditions through answering questions, but will not directly lead the agent to a specific decision.",
"We can obtain this module through supervised pre-training, and decouple it from reinforcement learning to yield better sample efficiency.",
"Fig. 1",
"(b) shows some sample QAs, where a human answerer can be replaced by a pre-trained task selector.",
"Some previous work also considered task decomposition (Chen et al., 2021; Hu et al., 2019), but the related module is obtained through imitating human demonstrations, which is directly related to decision making instead of world perceiving.",
"Compared with these work, our method has two folds of benefits.",
"First, there may exist multiple available subtasks at a timestep.",
"Imitating human demonstrations will specify only one of them, which may be insufficient and lead to information loss.",
"Second, we do not require expert demonstrations which guarantee to solve the game.",
"Instead, we can ask humans to annotate either imperfect demonstrations, or even demonstrations from a random agent.",
"We will treat the IL-based method as a baseline and conduct comparisons in the experiments.",
"Given the set of available subtasks T t , arbitrary strategies can be used to select a subtask T t from it.",
"For example, we can employ a non-learnable task scorer to obtain T t by random sampling, since each subtask T T t is essential for accomplishing the task.",
"We can also train a task scorer via a meta-policy for adaptive task selection (Xu et al., 2021).",
"After obtaining the subtask T t , we conduct action pruning conditioned on it (or on both T t and o t ) to reduce the action space, tackling the challenge of large action space.",
"Similar to the task selector, we formulate action pruning as |A| binary classification problems, and devise another world-perceiving module: the action validator.",
"The action validator is designed to check the relevance of each action candidate a A with respect to T t by answering questions like Is the action candidate take beef' relevant to the subtask fry chicken'?, so as to obtain a subset of actions A t A with irrelevant actions filtered.",
"Fig. 3 shows the module architecture.",
"Similar to the task selector, we pre-train this module through question answering.",
"Sample QAs have been shown in Fig. 1",
"(b).",
"After pre-training, we deploy the agent in the complex games, and train the action selector through RL.",
"We freeze the pre-trained modules, as no human labeled data will be obtained in this phase.",
"At each time step, we use the task selector and the action validator to produce T t and A t , respectively.",
"We keep using the same subtask T over time until it is not included in T t , as we do not want the agent to switch subtasks too frequently.",
"The agent can simply treat T t as the additional observation of o t .",
"If we do not limit the use of human knowledge in this phase, we can also treat T t as a goal with either hand-crafted (Jiang et al., 2019) or learnt reward function (Colas et al., 2020).",
"Arbitrary methods can be used for optimizing (Ammanabrolu and Hausknecht, 2020; Adhikari et al., 2020).",
"One issue we are concerned about is the compound error the prediction error from imperfect pre-trained modules will adversely affect RL training (Talvitie, 2014; Racanire et al., 2017).",
"For example, the false predictions made by the binary classifier in the task selector may lead to a wrong T t , which affects A t and a t in turn.",
"To alleviate the influence of the compound error, we assign time-awareness to subtasks.",
"A subtask is bounded 542 Table 1: Game statistics.",
"by a time limit [0 , ] .",
"If the current subtask T is not finished within its time limit, we force the agent to re-select a new subtask T t T t \\ { T } , regardless whether T is still available.",
"Besides making the agent robust against errors, another benefit by introducing time-awareness to subtasks is that it improves the subtask selection diversity, which helps the agent to avoid getting stuck in local minima (Pong et al., 2020; Campero et al., 2020).",
"We conduct experiments on cooking games provided by the rl.0.2 game set and the FTWP game set , which share the vocabulary set.",
"Based on the number of subtasks, which is highly correlated to the number of ingredients & preparing requirements, we design three game sets with varying complexities: 3488 simple games, 280 medium games and 420 hard games.",
"Note that there is no overlapping games between the simple set and the medium / hard game sets.",
"Table 1 shows the game statistics.",
"Besides Traj.Length, which denotes the average length of the expert demonstrations per game , other statistic metrics are averaged per time step per game (e.g., #Subtasks and #Avail.Subtasks denote the average number of subtask candidates T , and the average number of available subtasks T t , respectively).",
"We will collect human interaction data from the simple games for pre-training.",
"We regard both medium & hard games as complex, and will conduct reinforcement learning on these two game sets without labeled data.",
"We consider the following four models, and compare with more variants in ablation studies:",
"IL (Chen et al., 2021): a hierarchical agent which also uses two training phases.",
"In the first phase, both the task selector and the action selector are pre-trained through imitation learning.",
"Then in the second phase, the action selector is fine-tuned through reinforcement learning.",
"IL w/o FT : a variant of the IL baseline, where only the imitation pre-training phase is conducted, and there's no RL fine-tuning.",
"Model architecture All models are implemented based on GATA's released code .",
"In particular, we use the version GATA-GTF, which takes only the KG-based observation, and denote it as GATA for simplicity.",
"The observation encoder is implemented based on the Relational Graph Convolutional Networks (R-GCNs) (Schlichtkrull et al., 2018) by taking into account both nodes and edges.",
"Both the task encoder and the action encoder are implemented based on a single transformer block with single head (Vaswani et al., 2017) to encode short texts.",
"The binary classifier, the task scorer and the action scorer are linear layers.",
"The GATA and IL models are equipped with similar modules.",
"Please refer to Appendix C for details.",
"Pre-training We train the task selector and the action validator separately, as they use different types of QAs.",
"We ask human players to play the simple games, and answer the yes-or-no questions based on the observations.",
"The details of the dataset construction (interaction data collection, question generation, answer annotation, etc. ) could be found at Appendix B. We train the task selector with a batch size of 256, and the action https://github.com/xingdi-eric-yuan/ GATA-public 543 Table 2: The testing performance at 20% / 100% of the reinforcement learning phase.",
"validator with a batch size of 64.",
"The modules are trained for 10-20 epochs using Focal loss and Adam optimizer with a learning rate of 0.001.",
"Reinforcement learning We consider the medium game set and hard game set as different experiments.",
"We split the medium game set into 200 training games / 40 validation games / 40 testing games, and the hard game set into 300 / 60 / 60.",
"We follow the default setting of (Adhikari et al., 2020) to conduct reinforcement learning.",
"We set the step limit of an episode as 50 for training and 100 for validation / testing.",
"We set the subtask time limit = 5 .",
"For each episode, we sample a game from the training set to interact with.",
"We train the models for 100,000 episodes.",
"The models are optimized via Double DQN (epsilon decays from 1.0 to 0.1 in 20,000 episodes, Adam optimizer with a learning rate of 0.001) with Pritorized Experience Replay (replay buffer size 500,000).",
"For every 1,000 training episodes, we validate the model and report the testing performance.",
"We measure the models through their RL testing performance.",
"We denote a game's score as the episodic sum of rewards without discount.",
"As different games may have different maximum available scores, we report the normalized score, which is defined as the collected score normalized by the maximum score for a game.",
"Fig. 4 shows the RL testing performance with respect to the training episodes.",
"Table 2 shows the testing performance after 20,000 training episodes (20%) / at the end of RL training (100%).",
"Compared with GATA, which needs to be trained from scratch, the proposed QWA model achieves high sample efficiency: it reaches convergence with high performance before 20% of the training stage, Figure 4: The RL testing performance w.r.t. training episodes.",
"saving 80% of the online interaction data in complex games.",
"The effectiveness of pre-training can also be observed from the variant IL w/o FT: even though it requires no further training on the medium / hard games, it achieves comparable performance to our model.",
"However, the performance of QWA can be further improved through RL, while it does not work for the IL-based model, as we can observe the performance of IL becomes unstable and drops significantly during the RL fine-tuning.",
"A possible reason is that there exists large domain gap between simple and medium (hard) games, and our model is more robust against such domain shifts.",
"For example, our world-perceiving task selector performs better than IL-based task selector in handling more complex observations (according to Table 1, the observations in medium / hard games contain more triplets, rooms and objects), facilitating the training of the action selector.",
"Besides the domain gap in terms of the observation space, there is also a gap between domains in terms of the number of available subtasks while there's always one available subtask per time step in simple games, the model will face more available subtasks in the medium / hard games.",
"Different from our task selector, which is trained to check the availability of every subtask candidate, the IL pre-trained task selector can not adapt well in this situation, as it is trained to find the unique subtask and ignore the other subtask candidates despite whether they are also available.",
"We further investigate the generalization performance of our model on simple games, considering that simple games are not engaged in our RL training.",
"To conduct the experiment, after RL training, we deploy all models on a set of 140 held-out sim-544 Table 3: The RL testing performance on simple games.",
"ple games for RL interaction.",
"Table 3 shows the results, where Medium 100% (Hard 100%) denotes that the model is trained on medium (hard) games for the whole RL phase.",
"The generalizability of GATA, which is trained purely with medium and hard games, is significantly low and cannot perform well on simple games.",
"In contrast, our model performs very well and achieves over 80% of the scores.",
"The world-perceiving modules, which are pre-trained with simple games, help to train a decision module that adapts well on unseen games.",
"It is not surprising that the variant IL w/o FT also performs well on simple games, since they are only pre-trained with simple games.",
"However, as indicated by the performance of IL, after fine-tuning on medium/hard games (recalling Sec. 6.1), the action scorer forgets the experience/skills dealing with simple games and the model fails to generalize on unseen simple games.",
"In summary, the best performance achieved by QWA demonstrates that our model can generalize well on games with different complexities.",
"We study the contribution of the subtask time-awareness by comparing our full model with the variant without this technique.",
"Fig. 5 shows the result.",
"Although the models perform similarly in the medium games, the full model shows better performance in the hard games, where there may exist more difficult subtasks (we regard a subtask more difficult if it requires more actions to be completed).",
"Assigning each subtask a time limit prevents the agent from pursuing a too difficult subtask, and improves subtask diversity by encouraging the agent to try different subtasks.",
"Besides, it prevents the agent from being stuck in a wrong subtask, making the agent more robust to the compound error.",
"We then investigate the performance upper bound of our method by comparing our model to variants with oracle world-perceiving modules.",
"Fig. 6 shows the results, where +expTS (+expAV) denotes that the model uses an expert task selector (action validator).",
"There's still space to improve the Figure 5: The performance of our model and the variant without time-awareness.",
"pre-trained modules.",
"The variant QWA +expTS +expAV solves all the medium games and achieves nearly 80% of the scores in hard games, showing the potential of introducing world-perceiving modules in facilitating RL.",
"We also find that assigning either the expert task selector or the expert action validator helps to improve the performance.",
"In light of these findings, we will consider more powerful pre-training methods as a future direction.",
"Although we only collect labeled data from the simple games, it is still burdensome for human players to go through the games and answer the questions.",
"We are thus interested in investigating how the performance of our QWA (or world-perceiving modules) varies with respect to a reduced amount of pre-training data.",
"Fig. 7 shows the results, where the pre-training dataset has been reduced to 75%, 50% and 25%, respectively.",
"Our model still performs well when the pre-training data is reduced to 75% and 50%.",
"When we only use 25% of the pre-training data, the model exhibits instability during the learning of hard games.",
"Being pre-trained on a largely-reduced dataset, the world-perceiving modules might be more likely to make wrong predictions with the progress of RL training, leading to the performance fluctuation.",
"However, the fi-545 Figure 7: The performance of our model with varying amounts of pre-training data.",
"nal performance of this variant is still comparable.",
"To summarize, our model is robust to limited pretraining data and largely alleviates the burden of human annotations.",
"In this paper, we addressed the challenges of low sample efficiency and large action space for deep reinforcement learning in solving text-based games.",
"We introduced the world-perceiving modules, which are capable of automatic task decomposition and action pruning through answering questions about the environment.",
"We proposed a two-phase training framework, which decouples the language learning from the reinforcement learning.",
"Experimental results show that our method achieves improved performance with high sample efficiency.",
"Besides, it shows robustness against compound error and limited pre-training data.",
"Regarding the future work, we would like to further improve the pre-training performance by introducing contrastive learning objective (You et al., 2020) and KG-based data augmentation (Zhao et al., 2021).",
"This work is supported in part by ARC DP21010347, ARC DP180100966 and Facebook Research.",
"Joey Tianyi Zhou is supported by A*STAR SERC Central Research Fund (Use-inspired Basic Research).",
"We thank the anonymous reviewers for their constructive suggestions.",
"We thank Smashicons and Trazobanana for providing the icons in Fig. 1."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"result",
"objective",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"objective",
"method",
"abstain",
"other",
"other",
"objective",
"objective",
"method",
"method",
"other",
"other",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"method",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"method",
"objective",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"We introduce MeSys , a meaning-based approach, for solving English math word problems (MWPs) via understanding and reasoning in this paper.",
"It first analyzes the text, transforms both body and question parts into their corresponding logic forms, and then performs inference on them.",
"The associated context of each quantity is represented with proposed role-tags (e.g., nsubj , verb , etc.), which provides the flexibility for annotating an extracted math quantity with its associated context information (i.e., the physical meaning of this quantity).",
"Statistical models are proposed to select the operator and operands.",
"A noisy dataset is designed to assess if a solver solves MWPs mainly via understanding or mechanical pattern matching.",
"Experimental results show that our approach outperforms existing systems on both benchmark datasets and the noisy dataset, which demonstrates that the proposed approach understands the meaning of each quantity in the text more.",
"The math word problem (MWP) (see Figure 1 ) is frequently chosen to study natural language understanding and simulate human problem solving (Bakman, 2007; Hosseini et al., 2014; Liang et al., 2016) for the following reasons: (1) the answer to the MWP cannot be simply extracted by performing keyword/pattern matching.",
"It thus shows the merit of understanding and inference.",
"(2) An MWP usually possesses less complicated syntax and requires less amount of domain knowledge, so the researchers can focus on the task of understanding and reasoning.",
"(3) The body part of MWP that provides the given information for solving the problem consists of only a few sentences.",
"The understanding and reasoning procedures thus could be more efficiently checked.",
"(4) The MWP solver has its own applications such as Computer Math Tutor (for students in primary school) and Helper for Math in Daily Life (for adults who are not good in solving mathematics related real problems).",
"According to the approaches used to identify entities, quantities, and to select operations and operands, previous MWP solvers can be classified into: (1) Rule-based approaches (Mukherjee and Garain, 2008 1 ; Hosseini et al., 2014), which make all related decisions based on a set of rules; (2) Purely statistics-based approaches (Kushman et al., 2014; Roy et al., 2015; Zhou et al., 2015; Upadhyay et al., 2016), in which all related decisions are done via a statistical classifier; (3) DNN-based approaches (Ling et al., 2017; Wang et al., 2017), which map the given text into the corresponding math operation/equation via a DNN; and (4) Mixed approaches, which identify entities and quantities with rules, yet, decide operands and operations via statistical/DNN classifiers.",
"This category can be further divided into two subtypes:",
"(a) Without understanding (Roy and Roth, 2015; Koncel-Kedziorski et al., 2015; Huang et al., 2017; Shrivastava et al., 2017), which does not check the entity-attribute consistency between each quantity and the target of the given question; and",
"(b) With understanding (Lin et al., 2015; Mitra and Baral, 2016; Roy and Roth, 2017), which also checks the entity-attribute consistency while solving the problem.",
"Math Word Problem Mike takes 88 minutes to walk to school.",
"If he rides a bicycle to school, it would save him 64 minutes.",
"How much time did Mike save?",
"Solution 88 64 = 22 Figure 1: An example of math word problem.",
"However, a widely covered rule-set is difficult to construct for the rule-based approach.",
"Also, it is awkward in resolving ambiguity problem.",
"In contrast, the performance of purely statistics-based approaches deteriorates significantly when the MWP includes either irrelevant information or information gaps (Hosseini et al., 2014), as it is solved without first understanding the meaning.",
"For the category (4a), since the physical meaning is only implicitly utilized and the result is not generated via inference, it would be difficult to explain how the answer is obtained in a human comprehensible way.",
"Therefore, the categories (2), (3) and (4a) belong to the less favored direct translation approach 2 (Pape, 2004).",
"In contrast, the approaches of (4b) can avoid the problems mentioned above.",
"However, among them, Mitra and Baral (2016) merely handled Addition and Subtraction .",
"Only the meaning-based framework proposed by Lin et al. (2015) can handle general MWPs via understanding and reasoning.",
"Therefore, it is possible to explain how the answer is obtained in a human comprehensible way (Huang et al., 2015).",
"However, although their design looks promising, only a few Chinese MWPs had been tested and performance was not evaluated.",
"Accordingly, it is hard to make a fair comparison between their approach and other state-of-the-art methods.",
"In addition, in their prototype system, the desired operands of arithmetic operations are identified with predefined lexico-syntactic patterns and ad-hoc rules.",
"Reusing the patterns/rules designed for Chinese in another language is thus difficult even if it is possible.",
"In this paper, we adopt the framework proposed by Lin et al. (2015) to solve English MWPs (for its potential in solving difficult/complex MWPs and providing more human comprehensible ex-planations).",
"Additionally, we make the following improvements: (1) A new statistical model is proposed to select operands for arithmetic operations, and its model parameters can be automatically learnt via weakly supervised learning (Artzi and Zettlemoyer, 2013).",
"(2) A new informative and robust feature-set is proposed to select the desired arithmetic operation.",
"(3) We show the proposed approach significantly outperforms other existing systems on the common benchmark datasets reported in the literature.",
"(4) A noisy dataset with 2 According to (Pape, 2004), the meaning-based approach of solving MWPs achieves the best performance among various behaviors adopted by middle school children.",
"more irrelevant quantities in MWPs is created and released.",
"It could be used to check if an approach really understands what a given MWP looks for.",
"(5) An experiment is conducted to compare various approaches on this new dataset.",
"The superior performance of our system demonstrates that the proposed meaning-based approach has good potential in handling difficult/complex MWPs.",
"The adopted meaning-based framework (Lin et al., 2015) is a pipeline with following four stages (see Figure 2): (1) Language Analysis , (2) Solution Type Identification , (3) Logic Form Transformation and (4) Logic Inference .",
"We use the Stanford CoreNLP suite (Manning et al., 2014) as the language analysis module.",
"The other three modules are briefly described below.",
"Last, we adopt the weakly supervised learning (Artzi and Zettlemoyer, 2013; Kushman et al., 2014) to automatically learn the model parameters without manually annotating each MWP with the adopted solution type and selected operands benchmark.",
"After language analysis, each MWP is assigned with a specific solution type (such as Addition , Multiplication , etc.) which indicates the stereotype math operation pattern that should be adopted to solve this problem.",
"We classify the English MWPs released by Hosseini et al. (2014) and Roy and Roth (2015) into 6 different types: Addition , Subtraction , Multiplication , Division , Sum and TVQ-F 3 .",
"An SVM (Chang and Lin, 2011) is used to identify the solution type with 26 features.",
"Most of them are derived from some important properties associated with each quantity.",
"In addition to the properties Entity 4 and Verb (Hosseini et al., 2014) associated with the quantity , we also introduce a new property Time which encodes the tense and aspect of a verb into an integer to specify a point in the timeline.",
"We assign 2, 4, and 6 to the tenses Past , Present and Future , respectively, and then adjust it with the aspect-values -1, 0 and 1 for Perfect , Simple , and Progressive , respectively.",
"Another property Anchor is associated with the unknown quantity asked in the question sentence.",
"If the subject of the question sentence is a noun phrase (e.g., how many apples does John have? ), Anchor is the subject (i.e., John ).",
"If the subject is an expletive nominal (e.g. how many apples are there in the box? ), then Anchor is the associated nominal modifier nmod (i.e., box ).",
"Otherwise, Anchor is set to Unknown .",
"Inspired by (Hosseini et al., 2014), we transform Verb to Verb-Class ( VC ) which is positive , negative or stative .",
"A verb is positive/negative if it increases/decreases the associated quantity of the subject.",
"For example, in the sentence Tom borrowed 3 dollars from Mike , the verb is positive because the money of subject Tom increases.",
"However, a positive verb does not always imply the Addition operation.",
"If the question is How much money does Mike have now? for the above body sentence, the operation should be Subtraction .",
"Two new properties Anchor-Role ( AR ) and Action ( A ) are thus proposed: AR i indicates the role that Anchor associated with q i , and is set to nsubj / obj / nmod / .",
"A i is determined by following rules: (1) A i =positive if (VC i , AR i ) is either ( positive , nsubj ) or ( negative , obj / nmod ).",
"(2) A i =negative if (VC i , AR i ) is either ( negative , 4 In our works, the term Entity also includes the unit of the quantity (e.g., cup of coffee ).",
"nsubj ) or ( positive , obj / nmod ).",
"(3) Otherwise, A i =VC i .",
"To rule out the noisy quantities introduced by irrelevant information, we further associate each known quantity with the property Relevance ( R ) according to the unknown quantity asked in the question sentence.",
"Let q i denote the i -th known quantity, E i denote the entity of q i , X i denote the property X of q i , q U denote the unknown quantity asked, and XU denote the property X of q U .",
"R i is specified with following rules: (1) R i =2 (Directly-Related) if either { Anchor is Unknown & E i entails EU } or { Anchor is not Unknown & AR i & E i entails EU } (2) R i =1 (Indirectly-Related) if there is a q j which maps 5 to q i and R j =2 (i.e., q j is Directly-Related).",
"(3) R i =0 (Unrelated) otherwise.",
"The solution type is identified by an SVM based on 26 binary features.",
"Let the symbols p , n , s , A , E , R , T , V , SB , SQ and w Q stand for positive , negative , stative , Action , Entity , Relevance , Time , Verb , a body sentence , the question sentence and a word in question sentence respectively.",
"Also, let I ( x ) be the indicator function to check if x is true.",
"The 26 features are briefly described as follows: (1) VCU = p ; (2) R i =2 s.t. A i = p ; (3) R i =2 s.t. A i = n ; (4) R i =2 s.t. A i = s ; (5) I ( R i =2) > 2; (6) I ( R i =2 & A i { p , n } ) = 2; (7) R i =2 s.t. A i = p & TU < T i ; (8) R i =2 s.t. A i = n & TU < T i ; (9) R i =2 s.t. A i = s & T i =max T j ; (10) R i =2 s.t. A i = s & T i <T U ; (11) TU max T i ; (12) TU min T i ; (13) R i =2, V i are the same; (14) R i =2 s.t. T i = TU ; (15) R i =2, T i are the same; (16) R i =2, R j =1 s.t. q i maps to q j & q i > q j ; 5 That is, is linked to a directly-related quantity under an expression such as 2 pencils weigh 30 grams .",
"with a word each/every/per/a/an; (18) R i =2, R j =1 s.t. q i maps to q j & q j is associated with a word each/every/per/a/an; (19) q i , q j , q k s.t. R i = R j = R k =2 & V i = V j = V k ; (20) w Q { total, in all, altogether, sum }; (21) w Q { more, than} or w Q s.t. w Q -POS=RBR; (22) w Q = left ; (23).",
"q i appears in SQ ; (24) the rest VEU appears in SB ( V for any verb); (25) each NN appears in SQ ( NN for any noun); (26) Anchor U is Unknown / nmod & VCU = s .",
"The results of language analysis are transformed into a logic form, which is expressed with the first-order logic (FOL) formalism (Russell and Norvig, 2009).",
"Figure 3 shows how to transform the sentence",
"(a) Pack 100 candies into 5 boxes. into the corresponding logic form",
"(d).",
"First, the dependency tree",
"(b) is transformed into the semantic representation tree",
"(c) adopted by Lin et al., (2015).",
"Afterwards, according to the procedure proposed in (Lin et al., 2015), the domain-dependent logic expressions are generated in",
"(d).",
"The domain-dependent logic expressions are related to crucial generic math facts, such as quantities and relations between quantities.",
"The FOL function quan(quan id , unit 6 ,entity)=number is for describing the quantity fact.",
"The first argument denotes its unique identifier .",
"The other arguments and the function value describe its meaning.",
"Another FOL predicate qmap(map id , quan id1 , quan id2 ) (denotes the mapping from quan id1 to quan id2 ) is for describing a relation between two quantity facts, where the first argument is a unique identifier to represent this relation.",
"The role-tags (e.g., verb , dobj , etc.) associated with quan id and map id denote entity attributes (i.e., the physical meaning of the quantity), are created to help the logic inference module find the 6 This second argument denotes the associated unit used to count the entity.",
"solution.",
"For example, quan(q 2 ,#,box) = 5 & verb(q 2 ,pack) & means that q 2 is the quantity of boxes being packed.",
"With those role-tags, the system can select the operands more reliably, and the inference engine can also derive new quantities to solve complex MWPs which require multi-step arithmetic operations (see section 2.3).",
"The question in the MWP is also transformed into an FOL-like utility function according to the solution type to ask the logic inference module to find out the answer.",
"For example, the utility function instance Division(quan(q 1 , #, candy), quan(q 2 , #, box)) asks the inference module to divide 100 candies by 5 boxes .",
"Since associated operands must be specified before calling those utility functions, a statistical model (see section 2.4) is used to identify the appropriate quantities.",
"The logic inference module adopts the inference engine from (Lin et al., 2015).",
"Figure 4 shows how it uses inference rules to derive new facts from the initial facts directly provided from the description.",
"The MWP",
"(a) provides some facts",
"(b) generated from the LFT module.",
"An inference rule",
"(c) 7 , which implements the common sense that people must pay money to buy something, is unified with the given facts",
"(b) and derives new facts",
"(d).",
"The facts associated with q6 can be interpreted as Mary paid 0.5 dollar for two puddings .",
"The inference engine (IE) also provides 5 utility functions, including Addition , Subtraction , Multiplication and Division , and Sum .",
"The first four utilities all return a value by performing the named math operation on its two input arguments.",
"On the other hand, Sum(function,condition) returns the sum of the values of FOL function instances which can be unified with the first argument (i.e., function ) and satisfy the second argument (i.e., condition ).",
"For example, according to 7 In the inference rule, $q is a meta symbol to ask the inference engine to generate a unique identifier for the newly derived quantity fact.",
"(a) A sandwich is priced at $0.75.",
"A pudding is priced at $0.25.",
"Tim bought 2 sandwiches and 4 puddings.",
"Mary bought 2 puddings.",
"How much money should Tim pay?",
"(b) price(sandwich,0.75)&price(pudding,0.25) quan(q1,#,sandwich)=2&verb(q1,buy)&nsubj(q1,Tim) quan(q2,#,pudding)=4&verb(q2,buy)&nsubj(q2,Tim) quan(q3,#,pudding)=2&verb(q3,buy)&nsubj(q3,Mary) ASK Sum(quan(?q,dollar,#),verb(?q,pay)&nsubj(?q,Tim))",
"(c) quan(?q,?u,?o)&verb(?q,buy)&nsubj(?q,?a)&price(?o,?p)",
"quan($q,dollar,#)=quan(?q,?u,?o)?p & verb($q,pay) & nsubj($q,?a)",
"(d) quan(q4,dollar,#)=1.5&verb(q4,pay)&nsubj(q4,Tim) quan(q5,dollar,#)=1&verb(q5,pay)&nsubj(q5,Tim) quan(q6,dollar,#)=0.5&verb(q6,pay)&nsubj(q6,Mary) Figure 2: A logic inference example",
"(a) A sandwich is priced at $0.75.",
"A pudding is priced at $0.25.",
"Tim bought 2 sandwiches and 4 puddings.",
"Mary bought 2 puddings.",
"How much money should Tim pay?",
"(b) price(sandwich,0.75)&price(pudding,0.25) quan(q1,#,sandwich)=2&verb(q1,buy)&nsubj(q1,Tim) quan(q2,#,pudding)=4&verb(q2,buy)&nsubj(q2,Tim) quan(q3,#,pudding)=2&verb(q3,buy)&nsubj(q3,Mary) ASK Sum(quan(?q,dollar,#),verb(?q,pay)&nsubj(?q,Tim))",
"(c) quan(?q,?u,?o)&verb(?q,buy)&nsubj(?q,?a)&price(?o,?p)",
"quan($q,dollar,#)=quan(?q,?u,?o)?p & verb($q,pay) & nsubj($q,?a)",
"(d) quan(q4,dollar,#)=1.5&verb(q4,pay)&nsubj(q4,Tim) quan(q5,dollar,#)=1&verb(q5,pay)&nsubj(q5,Tim) quan(q6,dollar,#)=0.5&verb(q6,pay)&nsubj(q6,Mary) Figure 4 : A logic inference example 655 the last line in Figure",
"4(b), three newly derived quantity facts q4 , q5 and q6 (in",
"4(d)) can be unified with the first argument quan(?q,dollar,#) in",
"4(c), but only q4 and q5 satisfy the second argument verb(?q,pay)&nsubj(?q,Tim) .",
"As a result, the answer 2.5 is returned by taking sum on the values of the quantity facts quan(q4,dollar,#) and quan(q5,dollar,#) .",
"(b) quan(q1,#,rose)=2&verb(q1,buy)&nsubj(q1,Tim) quan(q2,#,lily)=3&verb(q2,buy)&nsubj(q2,Tim) quan(q3,#,rose)=4&verb(q3,buy)&nsubj(q3,Mary) quan(q4,#,lily)=5&verb(q4,buy)&nsubj(q4,Mary) quan(q Q ,#,flower)&verb(q Q ,buy)&nsubj(q Q ,Tim) Figure 5: An example for operand selection 656 and xcomp of a quantity are extracted according to the dependency relations tmod (i.e., temporal modifier ) and xcomp (i.e., open clausal complement ), respectively.",
"The most error-prone part in the LFT module is instantiating the utility function of math operation especially if many irrelevant quantity facts appear in the given MWP.",
"Figure 5 shows the LFT module needs to select two quantity facts (among 4) for Addition .",
"Please note that the question quantity q Q , transformed from how many flowers , is not a candidate for operand selection.",
"Lin et al., (2015) used predefined lexico-syntactic patterns and ad-hoc rules to instantiate utility functions.",
"However, their rule-based approach fails when the MWP involves more quantities.",
"Therefore, we propose a statistical model to select operands for the utility functions Addition , Subtraction , Multiplication and Division .",
"The operand selection procedure can be regarded as finding the most likely configuration ( 1 , ) , where 1 = 1 , , is a sequence of random indicators which denote if the corresponding quantity will be selected as an operand, and is a tri-state variable to represent the relation between the values of two operands (i.e., = 1, 0 or 1 ; which denote that the first operand is less than, equal to, or greater than the second operand, respectively).",
"Given a solution type , the MWP logic expressions and the quantities 1 = 1 , , in .",
"The procedure is formulated as: ( , 1 | 1 , , ) ( | ) ( 1 | 1 , , ), (1) ( | ) simply refers to Relative Frequency (as it has only a few parameters and we have enough training samples).",
"( 1 | 1 , , ) is further derived as: ( 1 | 1 , , ) ( | , , ) =1 ( , , ) , =1 (2) where ( ) is a feature extraction function to map and its context into a feature vector.",
"Here, the probabilistic factor ( , , ) is obtained via an SVM classifier (Chang and Lin, 2011).",
"( ) extracts total 25 features (specified as follows, and 24 of them are binary) for .",
"The following 11 of them are independent on the question in the MWP: 1. Four features to indicate if is Addition , Subtraction , Multiplication or Division .",
"2. A feature to indicate if is within a qmap ().",
"3. A feature to indicate if = 1 .",
"4. Five features to indicate if < 2 , = 2 , = 3 , = 4 or > 4 ; where is the number of quantities in Eq (1).",
"( ) also extracts features by matching the logic expressions of with those of question quantity q Q to check the role-tag consistencies between and q Q .",
"Another fourteen features are extracted with three indicator functions ( ), ( ), ( ) and one tri-state function ( ) as follows: [ ( , q Q , entity ), ( , q Q , entity ), ( , q Q , verb ), ( , q Q , verb ), ( q Q , nsubj ), ( , q Q , nsubj ), ( q Q , modifier ), ( , q Q , modifier ), ( q Q , place ), ( , q Q , place ), ( q Q , temporal ), ( , q Q , temporal ), ( q Q , xcomp ), ( , q Q , xcomp ) ] where the indicator functions ( , , ) checks if the of matches the of , ( , , ) checks if the of entails the of and ( , ) checks if the of exists.",
"( , q Q , nsubj ) returns exact-match (if nsubj of matches nsubj of q Q ), quasi-match (if nsubj of q Q does not exist or is a plural pronoun), and unmatch.",
"( ) uses the WordNet hypernym and hyponym relationship to judge whether one entity/verb entails another one or not via checking if they are in an inherited hypernym-path in WordNet.",
"The entity , verb and nsubj of a quantity are determined according to the logic expressions.",
"The modifier, place, temporal and xcomp of a quantity are extracted from the dependency tree with some lexico-syntactic patterns.",
"For example, the modifier and place of the quantity in the sentence There are 30 red flowers in the garden. are red and garden respectively.",
"The temporal",
"(a) Tim bought 2 roses and 3 lilies.",
"Mary bought 4 roses and 5 lilies.",
"How many flowers did Tim buy?",
"The AI2 dataset provided by Hosseini et al. (2014) and the IL dataset released by Roy and Roth (2015) are adopted to compare our approach with other state-of-the-art methods.",
"The AI2 dataset has 395 MWPs on addition and subtraction, with 121 MWPs containing irrelevant information (Hosseini et al., 2014).",
"It is the most popular one for comparing different approaches.",
"On the other hand, the IL dataset consists of 562 elementary MWPs which can be solved by one of the four arithmetic operations (i.e., + , , , and ) without any irrelevant quantity.",
"It is the first publicly available dataset for comparing performances that covers all four arithmetic operations.",
"However, the difficulty of solving an MWP depends not only on the number of arithmetic operations required, but also on how many irrelevant quantities inside, and even on how the quantities are described.",
"One way to test if a proposed approach solves the MWPs with understanding is to check whether it is robust to those irrelevant quantities.",
"Therefore, it is desirable to have a big enough dataset that contains irrelevant quantities which are created under different situations (e.g., confusing with an irrelevant agent, entity, or modifier, etc.) and allow us to probe the system weakness from different angles.",
"We thus create a new dataset with more irrelevant quantities 8 .",
"But before we do that, we need to know how difficult the task of solving the given MWPs is.",
"Therefore, we first propose a way to measure how easy that a system solves the problem by simply guessing.",
"We propose to adopt the Perplexity to measure the task difficulty, which evaluates how likely a solver will get the correct answer by guessing.",
"Every MWP in the datasets can be associated with a solution expression template, such as + or , where the symbol represents a slot to hold a quantity.",
"The solution can be obtained by placing correct quantities at appropriate slots.",
"A 8 The IL dataset does not include any irrelevant information; on the other hand, the AI2 dataset only contains 121 MWPS with irrelevant information (but not systematically created).",
"random baseline is to solve an MWP by guessing.",
"It first selects a solution expression template according to the prior distribution of the templates and then places quantities into the selected template according to the uniform distribution.",
"The expected accuracy of the random baseline on solving an MWP is a trivial combination and permutation exercise 9 .",
"For example, the expected accuracy of solving an MWP associated with + template is + 21 , where the factor + denotes the prior probability of the template + and is the total number of quantities (including irrelevant ones) in the MWP.",
"On the other hand, expected accuracy of solving an MWP associated with 10 template is 21 .",
"Let denote the expected accuracy of solving the -th MWP in a dataset.",
"The accuracy of the random baseline on the dataset of size is then computed as = (1/ ) =1 .",
"The word Accuracy comprises the opposite sense of the word Perplexity 11 (i.e., in the sense of how hard a prediction problem is).",
"The lower the Accuracy is, the higher the Perplexity is.",
"Therefore, we transform the Accuracy measure into a Perplexity-Flavor measure (PP) via the formula: PP = 2 log 2 For instance, the Perplexity-Flavor measures of AI2 and IL datasets are 4.46 and 8.32 respectively.",
"Human Math/Science tests have been considered more suitable for judging AI progress than Turing test (Clark and Etzioni, 2016).",
"In our task, solving MWPs is mainly regarded as a test for intelligence (not just for creating a Math Solver package).",
"By injecting various irrelevant quantities into original MWPs, a noisy dataset is thus created to assess if a solver solves the MWPs mainly via understanding or via mechanical/statistical pattern matching .",
"If a system solves an MWP mainly via pattern matching, it would have difficulty in solving a similar MWP augmented from the original one with some irrelevant quantities.",
"Therefore, we first create a noisy dataset by selecting some 9 Let denote -combinations of and denote permutations of .",
"MWPs that can be correctly solved, and then augmenting each of them with an additional noisy sentence which involves an irrelevant quantity.",
"This dataset is created to examine if the solver knows that this newly added quantity is irrelevant.",
"How many Figure 6: Examples of noisy sentences OSS NDS # MWPs 136 396 Perplexity (PP) 7.42 18.83 #Quantities 2.64 3.62 Table 1: Perplexity measures of OSS and NDS AI2 IL Our system (Statistical) 81.5 81.0 Our system (DNN) 69.8 70.6 (Roy and Roth, 2017) 76.2 74.4 (Roy and Roth, 2015) 78.0 73.9 (Kushman et al., 2014) 64.0 73.7 Table 2: Performances of various approaches 658 of various datasets.",
"Figure 6 shows how we inject noise into an MWP",
"(a).",
"(a.1) is created by associating an irrelevant quantity to a new subject (i.e., Mary ).",
"Here the ellipse symbol denotes unchanged text.",
"(a.2) is obtained by associating an irrelevant quantity to a new entity (i.e., books ).",
"In addition, we also change modifiers (such as yellow , red , ) to add new noisy sentence (not shown here).",
"Since the noisy dataset is not designed to assess the lexicon coverage rate of a solver, we reuse the words in the original dataset as much as possible while adding new subjects, entities and modifiers.",
"136 MWPs that both Illinois Math Solver 12 (Roy and Roth, 2016) and our system can correctly solve are selected from the AI2 and IL datasets.",
"This subset is denoted as OSS (Original Sub-Set).",
"Afterwards, based on the 136 MWPs of OSS, we create a noisy dataset of 396 MWPs by adding irrelevant quantities.",
"This noisy dataset is named as NDS 13 .",
"Table 1 lists the size of MWPs, Perplexities (PP), and the average numbers of quantities in each MWP of these two datasets.",
"We compare our approach with (Roy and Roth, 2015) and (Roy and Roth, 2017) because they achieved the state-of-the-art performance on the IL dataset.",
"In the approach of (Roy and Roth, 2015), each quantity in the MWP was associated with a quantity schema whose attributes are extracted from the context of the quantity.",
"Based on the attributes, several statistical classifiers were used to select operands and determine the operator.",
"They also reported the performances on the AI2 dataset to compare their approach with those 12 We submit MWPs to Illinois Math Solver (https://cogcomp.cs.illinois.edu/page/demo_view/Math) in May and June, 2017.",
"13 The noisy dataset can be downloaded from https://github.com /chaochun/nlu-mwp-noise-dataset.",
"It includes 102 Addition, 147 Subtraction, 101 Multiplication and 46 Division MWPs.",
"of others (e.g., Kushman et al. (2014), which is a purely statistical approach that aligns the text with various pre-extracted equation templates).",
"Roy and Roth (2017) further introduced the concept of Unit Dependency Graphs to reinforce the consistency of physical units among selected operands associated with the same operator.",
"To compare the performance of the statistical method with the DNN approach, we only implement a Bi-directional RNN-based Solution Type Identifier (as our original statistical Operand Selector is relatively much better).",
"It consists of a word embedding layer (for both body and question parts), and a bidirectional GRU layer as an encoder.",
"We apply the attention mechanism to scan all hidden state sequence of body by the last hidden state of question to pay more attention to those more important (i.e., more similar between the body and the question) words.",
"Lastly, it outputs the solution type by a softmax function.",
"We train it for 100 epochs, with mini-batch-size = 128 and learning-rate = 0.001; the number of nodes in the hidden layer is 200, and the drop-out rate is 0.7 14 .",
"We follow the same n-fold cross-validation evaluation setting adopted in (Roy and Roth, 2015) exactly.",
"Therefore, various performances could be directly compared.",
"Table 2 lists the accuracies of different systems in solving the MWPs 14 Since the dataset is not large enough for splitting a development set, we choose those hyper parameters based on the test set in coarse grain.",
"Therefore, the DNN performance we show here might be a bit optimistic.",
"(a) Tim has 10 yellow flowers and 12 red flowers.",
"How many flowers does Tim have?",
"(a.1)",
"Tim has Mary has 3 yellow flowers .",
"How many (a.2) Tim has Tim also has 3 books.",
"The performance of (Roy and Roth, 2017) system is directly delivered by their code 15 .",
"The last two rows are extracted from (Roy and Roth, 2015).",
"The results show that our performances of the statistical approach significantly outperform that of our DNN approach and other systems on every dataset.",
"The performances of STI and LFT modules are listed in Table 3. As described in section 2, the benchmark for both solution type and the operand selection benchmark are automatically determined by weakly supervised learning.",
"The first and second rows of Table 3 show the solution type accuracies of our statistical and DNN approaches, respectively.",
"The third row shows the operand selection accuracy obtained by given the solution type benchmark.",
"Basically, LFT accuracies are from 92% to 95%, and the system accuracies are dominated by STI.",
"We analyzed errors resulted from our statistical STI on AI2 and IL datasets, respectively.",
"For AI2, major errors come from: (1) failure of ruling out some irrelevant quantities (40%), and (2) making confusion between TVQ-F and Sum these two solution types (20%) for those cases that only involve addition operation (however, both types would return the same answer).",
"For IL, major errors come from: (1) requiring additional information (35%), and (2) not knowing Part-Whole relation (17%).",
"Table 4 shows a few examples for different STI error types.",
"The left-half of Table 5 shows the performances on the OSS and NDS datasets.",
"Recall that OSS is created by selecting some MWPs which both Illinois Math Solver (Roy and Roth, 2016) and our system 16 can correctly solve.",
"Therefore, both systems have 100% accuracy in solving the OSS dataset.",
"However, these two systems behave very differently while solving the noisy dataset.",
"The much higher accuracy of our system on the noisy dataset shows that our meaning-based approach understands the meaning of each quantity more.",
"Therefore, it is less confused 17 with the irrelevant quantities.",
"One MWP in the noisy dataset that confuses Illinois Math Solver (IMS) is Tom has 9 yellow balloons. Sara has 8 yellow balloons. Bob has 5 yellow flowers. How many yellow balloons do 15 https://github.com/CogComp/arithmetic. 16 In evaluating the performances on OSS and NDS datasets, our system is trained on the folds 2-5 of the IL dataset. 17 Since the gap between two different types of approaches is quite big, those 396 examples on OSS and 196 examples on NDS are sufficient to confirm the conclusion. they have in total? , where the underlined sentence is the added noisy sentence.",
"The solver sums all quantities and gives the wrong answer 22, which reveals that IMS cannot understand that the quantity 5 yellow flowers is irrelevant to the question How many yellow balloons? .",
"On the contrary, our system avoids this mistake.",
"Although the meaning of each quantity is explicitly checked in our LFT module, our system still cannot correctly solve all MWPs in NDS.",
"The error analysis reveals that the top-4 error sources are STI, LFT, CoreNLP and incorrect problem construction (for 27%, 27%, 18%, 18%), which indicates that our STI and LFT still cannot completely prevent the damage caused from the noisy sentences (which implies that more consistency check for quantity meaning should be done).",
"The remaining errors are due to incorrect quantity extraction, lacking common-sense or not knowing entailment relationship between two entities.",
"A similar experiment is performed to check if the DNN approach will be affected by the noisy information more.",
"We first select 124 MWPs (de-noted as OSS ) from OSS that can be correctly solved by both our statistical and DNN approaches and then filter out 350 derived MWPs (denotes as NDS ) from NDS.",
"The right-half of Table 5 shows that the performance of the DNN approach drops more than the statistical approach does in the noisy dataset, which indicates that our statistical approach is less sensitive to the irrelevant quantities and more close to human's approach.",
"To the best of our knowledge, MWP solvers proposed before 2014 all adopted the rule-based approach.",
"Mukherjee and Garain (2008) had given a good survey for all related approaches before 2008.",
"Afterwards, Ma et al. (2010) proposed a MSWPAS system to simulate human arithmetic multi-step addition and subtraction behavior without evaluation.",
"Besides, Liguda and Pfeiffer (2012) proposed a model based on augmented semantic networks, and claimed that it could solve multi-step MWPs and complex equation systems and was more robust to irrelevant information (al-so no evaluation).",
"Recently, Hosseini et al. (2014) proposed a Container-Entity based approach, which solved the MWP with a sequence of state transition.",
"And Kushman et al. (2014) proposed the first statistical approach, which heuristically extracts some algebraic templates from labeled equations, and then aligns them with the given sentence.",
"Since no semantic analysis is conducted, the performance is quite limited.",
"In more recent researches (Roy and Roth, 2015; Koncel-Kedziorski et al., 2015; Roy and Roth, 2017), quantities in an MWP were associated with attributes extracted from their contexts.",
"Based on the attributes, several statistical classifiers were used to select operands and determine operators to solve multi-step MWPs.",
"Since the physical meaning of each quantity is not explicitly considered in getting the answer, the reasoning process cannot be explained in a human comprehensible way.",
"Besides, Shi et al. (2015) attacked the number word problem , which only deal with numbers, with a semantic parser.",
"Mitra and Baral (2016) mapped MWPs into three types of problems, including Part-Whole, Change and Comparison.",
"Each problem was associated with a generic formula.",
"They used a log-linear model to determine how to instantiate the formula with quantities and solve the only one Unknown variable.",
"They achieved the best performance on the AI2 dataset.",
"However, their approach cannot handle Multiplication or Division related MWPs.",
"Recently, DNN-based approaches (Ling et al, 2017; Wang et al, 2017) have emerged.",
"However, they only attacked algebraic word problems, and required a very large training-set.",
"Our proposed approach mainly differs from those previous approaches in combining the statistical framework with logic inference , and also in adopting the meaning-based statistical approach for selecting the desired operands .",
"A meaning-based logic form represented with role-tags (e.g., nsubj , verb , etc.) is first proposed to associate the extracted math quantity with its physical meaning, which then can be used to identify the desired operands and filter out irrelevant quantities.",
"Afterwards, a statistical framework is proposed to perform understanding and reasoning based on those logic expressions.",
"We further compare the performance with a typical DNN approach, the results show the proposed approach is still better.",
"We will try to integrate domain concepts into the DNN approach to improve the learning efficiency in the future.",
"The main contributions of our work are: (1) Adopting a meaning-based approach to solve English math word problems and showing its superiority over other state-of-the-art systems on common datasets.",
"(2) Proposing a statistical model to select operands by explicitly checking the meanings of quantities against the meaning of the question sentence.",
"(3) Designing a noisy dataset to test if a system solves the problems by understanding.",
"(4) Proposing a perplexity-flavor measure to assess the complexity of a dataset."
] | [
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain"
] |
[
"Large transformer-based language models have been shown to be very effective in many classification tasks.",
"However, their computational complexity prevents their use in applications requiring the classification of a large set of candidates.",
"While previous works have investigated approaches to reduce model size, relatively little attention has been paid to techniques to improve batch throughput during inference.",
"In this paper, we introduce the Cascade Transformer, a simple yet effective technique to adapt transformer-based models into a cascade of rankers.",
"Each ranker is used to prune a subset of candidates in a batch, thus dramatically increasing throughput at inference time.",
"Partial encodings from the transformer model are shared among rerankers, providing further speed-up.",
"When compared to a state-of-the-art transformer model, our approach reduces computation by 37% with almost no impact on accuracy, as measured on two English Question Answering datasets.",
"Recent research has shown that transformer-based neural networks can greatly advance the state of the art over many natural language processing tasks.",
"Efforts such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019c), XLNet (Dai et al., 2019), and others have led to major advancements in several NLP subfields.",
"These models are able to approximate syntactic and semantic relations between words and their compounds by pre-training on copious amounts of unlabeled data (Clark et al., 2019; Jawahar et al., 2019).",
"Then, they can easily be applied to different tasks by just fine-tuning them on training data from the target domain/task (Liu et al., 2019a; Peters et al., 2019).",
"The impressive effectiveness of transformer-based neural networks can be partially attributed to their large number of parameters (ranging from 110 million for base models to over 8 billion (Shoeybi et al., 2019)); however, this also makes them rather expensive in terms of computation time and resources.",
"Being aware of this problem, the research community has been developing techniques to prune unnecessary network parameters (Lan et al., 2019; Sanh et al., 2019) or optimize the transformer architecture (Zhang et al., 2018; Xiao et al., 2019).",
"In this paper, we propose a completely different approach for increasing the efficiency of transformer models, which is orthogonal to previous work, and thus can be applied in addition to any of the methods described above.",
"Its main idea is that a large class of NLP problems requires choosing one correct candidate among many.",
"For some applications, this often entails running the model over hundreds or thousands of instances.",
"However, it is well-known that, in many cases, some candidates can be more easily excluded from the optimal solution (Land and Doig, 1960), i.e., they may require less computation.",
"In the case of hierarchical transformer models, this property can be exploited by using a subset of model layers to score a significant portion of candidates, i.e., those that can be more easily excluded from search.",
"Additionally, the hierarchical structure of transformer models intuitively enables the re-use of the computation of lower blocks to feed the upper blocks.",
"Following the intuition above, this work aims at studying how transformer models can be cascaded to efficiently find the max scoring elements among a large set of candidates.",
"More specifically, the contributions of this paper are: First, we build a sequence of rerankers SRN = { R 1 , R 2 , ..., RN } of different complexity, which process the candidates in a pipeline.",
"Each reranker at position i takes the set of candidates selected by ( i 1) -th reranker and provides top k i candidates to the reranker of position i + 1 .",
"By requiring that k i < k i 1 i = 1 , . . . , N 1 , this approach allows us to save computation time from the more expensive rerankers by progressively reducing the number of candidates at each step.",
"We build R i using transformer networks of 4, 6, 8, 10, and 12 blocks from RoBERTa pre-trained models.",
"Second, we introduce a further optimization on SRN to increase its efficiency based on the observation that models R i in SRN process their input independently.",
"In contrast, we propose the Cascade Transformer (CT), a sequence of rerankers built on top of a single transformer model.",
"Rerankers R 1 , . . . , RN are obtained by adding small feed-forward classification networks at different transformer block positions; therefore, the partial encodings of the transformer blocks are used as both input to reranker R i , as well as to subsequent transformer encoding blocks.",
"This allows us to efficiently re-use partial results consumed by R i for rankers R i +1 , . . . , RN .",
"To enable this approach, the parameters of all rerankers must be compatible.",
"Thus, we trained CT in a multi-task learning fashion, alternating the optimization for different i , i.e., the layers of R i are affected by the back-propagation of its loss as well as by the loss of R j , with j i .",
"Finally, as a test case for CT, we target Answer Sentence Selection (AS2), a well-known task in the domain of Question Answering (QA).",
"Given a question and a set of sentence candidates (e.g., retrieved by a search engine), this task consists in selecting sentences that correctly answer the question.",
"We tested our approach on two different datasets: ( i ) ASNQ, recently made available by Garg et al. (2020); and ( ii ) a benchmark dataset built from a set of anonymized questions asked to Amazon Alexa.",
"Our code, ASNQ split, and models trained on ASNQ are publicly available.",
"1 Our experimental results show that: ( i ) The selection of different k i for SRN determines different trade-off points between efficiency and accuracy.",
"For example, it is possible to reduce the overall computation by 10% with just 1.9% decrease in accuracy.",
"( ii )",
"Most importantly, the CT approach largely improves over SR, reducing the cost by 37% with almost no loss in accuracy.",
"( iii )",
"The rerankers trained through our cascade approach achieve equivalent or better performance than transformer models trained independently.",
"Finally, ( iv ) our results suggest that CT can be used with other 1 https://github.com/alexa/ wqa-cascade-transformers NLP tasks that require candidate ranking, e.g., parsing, summarization, and many other structured prediction tasks.",
"In this section, we first summarize related work for sequential reranking of passages and documents, then we focus on the latest methods for AS2, and fi-nally, we discuss the latest techniques for reducing transformer complexity.",
"Reranking in QA and IR The approach introduced in this paper is inspired by our previous work (Matsubara et al., 2020); there, we used a fast AS2 neural model to select a subset of instances to be input to a transformer model.",
"This reduced the computation time of the latter up to four times, preserving most accuracy.",
"Before our paper, the main work on sequential rankers originated from document retrieval research.",
"For example, Wang et al. (2011) formulated and developed a cascade ranking model that improved both top-k ranked effectiveness and retrieval efficiency.",
"Dang et al. (2013) proposed two stage approaches using a limited set of textual features and a final model trained using a larger set of queryand document-dependent features.",
"Wang et al. (2016) focused on quickly identifying a set of good candidate documents that should be passed to the second and further cascades.",
"Gallagher et al. (2019) presented a new general framework for learning an end-to-end cascade of rankers using back-propagation.",
"Asadi and Lin (2013) studied effectiveness/efficiency trade-offs with three candidate generation approaches.",
"While these methods are aligned with our approach, they target document retrieval, which is a very different setting.",
"Further, they only used linear models or simple neural models.",
"Agarwal et al. (2012) focused on AS2, but just applied linear models.",
"Answer Sentence Selection (AS2) In the last few years, several approaches have been proposed for AS2.",
"For example, Severyn and Moschitti (2015) applied CNN to create question and answer representations, while others proposed inter-weighted alignment networks (Shen et al., 2017; Tran et al., 2018; Tay et al., 2018).",
"The use of compare and aggregate architectures has also been extensively evaluated (Wang and Jiang, 2016; Bian et al., 2017; Yoon et al., 2019).",
"This family of approaches uses a shallow attention mechanism over the question and answer sentence embeddings.",
"Finally, Tayyar Madabushi et al. (2018) exploited fine-grained question classification to further improve answer selection.",
"Transformer models have been fine-tuned on several tasks that are closely related to AS2.",
"For example, they were used for machine reading (Devlin et al., 2019; Yang et al., 2019a; Wang et al., 2019), ad-hoc document retrieval (Yang et al., 2019b; MacAvaney et al., 2019), and semantic understanding (Liu et al., 2019b) tasks to obtain significant improvement over previous neural methods.",
"Recently, Garg et al. (2020) applied transformer models, obtaining an impressive boost of the state of the art for AS2 tasks.",
"Reducing Transformer Complexity The high computational cost of transformer models prevents their use in many real-word applications.",
"Some proposed solutions rely on leveraging knowledge distillation in the pre-training step, e.g., (Sanh et al., 2019), or used parameter reduction techniques (Lan et al., 2019) to reduce inference cost.",
"However, the effectiveness of these approaches varies depending on the target task they have been applied to.",
"Others have investigated methods to reduce inference latency by modifying how self-attention operates, either during encoding (Child et al., 2019; Guo et al., 2019b), or decoding (Xiao et al., 2019; Zhang et al., 2018).",
"Overall, all these solutions are mostly orthogonal to our approach, as they change the architecture of transformer cells rather than efficiently re-using intermediate results.",
"With respect to the model architecture, our approach is similar to probing models 2 (Adi et al., 2017; Liu et al., 2019a; Hupkes et al., 2018; Be-linkov et al., 2017), as we train classification layers based on partial encoding on the input sequence.",
"However, ( i ) our intermediate classifiers are integral part of the model, rather than being trained on frozen partial encodings, and ( ii ) we use these classifiers not to inspect model properties, but rather to improve inference throughput.",
"Our apporach also shares some similarities with student-teacher (ST) approaches for self-training (Yarowsky, 1995; McClosky et al., 2006).",
"Under this setting, a model is used both as a teacher (which makes predictions on unlabeled data to obtain automatic labels) and as a student (which learns both from gold standard and automatic la-bels).",
"In recent years, many variants of ST have 2 Also known as auxiliary or diagnostic classifiers.",
"been proposed, including treating teacher predictions as soft labels (Hinton et al., 2015), masking part of the label (Clark et al., 2018), or use multiple modules for the teacher (Zhou and Li, 2005; Ruder and Plank, 2018).",
"Unlike classic ST approaches, we do not aim at improving the teacher models or creating efficient students; instead, we trained models to be used as sequential ranking components.",
"This may be seen as a generalization of the ST approach, where the student needs to learn a simpler task than the teacher.",
"However, our approach is significantly different from the traditional ST setting, which our preliminary investigation showed to be not very effective.",
"We first formalize the problem of selecting the most likely element in a set as a reranking problem; then, we define sequential reranking (SR); finally, we contextualize AS2 task in such framework.",
"In general, a large class of NLP (and other) problems can be formulated as a max element selection task: given a query q and a set of candidates A = { a 1 ,",
".., a n } , select a j that is an optimal element for q .",
"We can model the task as a selector function : Q P ( A ) A , defined as ( q, A ) = a j , where P ( A ) is the powerset of A , j = argmax i p ( q, a i ) , and p ( q, a i ) is the probability of a i to be the required element.",
"p ( q, a i ) can be estimated using a neural network model.",
"In the case of transformers, said model can be optimized using a point-wise loss, i.e., we only use the target candidate to generate the selection probability.",
"Pairwise or list-wise approaches can still be used (Bian et al., 2017), but ( i ) they would not change the find-ings of our study, and ( ii ) point-wise methods have been shown to achieve competitive performance in the case of transformer models.",
"Assuming that no heuristics are available to preselect a subset of most-likely candidates, max element selection requires evaluating each sample using a relevance estimator.",
"Instead of a single estimator, it is often more efficient to use a sequence of rerankers to progressively reduce the number of candidates.",
"A , and returns a set of elements, R ( q, ) = { a i 1 , ..., a ik } of size k , with the highest probability to be relevant to the query.",
"That is, p ( q, a ) > p ( q, b ) a , b A .",
"Given a sequence of rerankers sorted in terms of computational efficiency, ( R 1 , R 2 , . . . , RN ) , we assume that the ranking accuracy, A (e.g., in terms of MAP and MRR), increases in reverse order of the efficiency, i.e., A ( R j ) > A ( R i ) iff j > i .",
"Then, we define a Sequential Reranker of order N as the composition of N rerankers: SRN ( A ) = RN RN 1",
"..",
"R 1 ( A ) , where RN can also be the element selector ( q, ) .",
"Each R i is associated with a different k i = | R i ( ) | , i.e., the number of elements the reranker returns.",
"Depending on the values of k i , SR models with different trade-offs between accuracy and efficiency can be obtained.",
"3 3.3 AS2 Definition The definition of AS2 directly follows from the definition of element selection of Section 3.1, where the query is a natural language question and the elements are answer sentence candidates retrieved with any approach, e.g., using a search engine.",
"In this section, we explain how to exploit the hierarchical architecture of a traditional transformer model to build an SR model.",
"First, we briefly recap how traditional transformer models (we refer to them as monolithic ) are used for sequence classification, and how to derive a set of sequential rerankers from a pre-trained transformer model (Section 4.1).",
"Then, we introduce our Cascade Transformer (CT) model, a SR model that efficiently uses partial encodings of its input to build a set of sequential rerankers R i (Section 4.3).",
"Finally, we explain how such model is trained and used for inference in sections 4.3.1 and 4.3.2, respectively.",
"We first briefly describe the use of transformer models for sequence classification.",
"We call them monolithic as, for all input samples, the computation flows from the first until the last of their layers.",
"transformer layers 4 generating contextualized representations for an input sequence; n is typically referred to as the depth of the encoder, i.e., the number of layers.",
"Typical values for n range from 12 to 24, although more recent works have experimented with up to 72 layers (Shoeybi et al., 2019).",
"T can be pre-trained on large amounts of unlabeled text using a masked (Devlin et al., 2019; Liu et al., 2019c) or autoregressive (Yang et al., 2019c; Radford et al., 2019) language modeling objective.",
"Pre-trained language models are fine-tuned for the target tasks using additional layers and data, e.g., a fully connected layer is typically stacked on top of T to obtain a sentence classifier.",
"Formally, given a sequence of input symbols 5 , X = { x 0 , x 1 , . . . , x m } , an encoding H = 4 That is, an entire transformer block, constituted by layers for multi-head attention, normalization, feed forward processing and positional embeddings.",
"{ h 0 , h 1 , . . . , h m } is first obtained by recursively applying H i to the input:",
"H 0 = E ( X ) , H i = L i ( H i 1 ) i = 1 , . . . , n, where H = H n .",
"Then, the first symbol of the input sequence 6 is fed into a sequence of dense feed-forward layers D to obtain a final output score, i.e., y = D ( h 0 ) .",
"D is fine-tuned together with the entire model on a task-specific dataset (a set of question and candidate answer pairs, in our case).",
"Monolithic transformers can be easily modified or combined to build a sequence of rerankers as described in Seciton 3.2.",
"In our case, we adapt an existing monolithic T to obtain a sequence of N rerankers R i .",
"Each R i consists of encoders from T up to layer ( i ) , followed by a classification layer D i , i.e., R i = { E ; L 1 , . . . , L ( i ) , D i } .",
"For a sequence of input symbols X , all rerankers in the sequence are designed to predict p ( q, a ) , which we indicate as R i ( X ) = y ( i ) .",
"All rerankers in SRN are trained independently on the target data.",
"In our experiments, we obtained the best performance by setting N = 5 and using the following formula to determine the architecture of each reranker R i : ( i ) = 4 + 2 ( i 1) i = { 1 , . . . , 5 } In other words, we assemble sequential reranker SR 5 using five rerankers built with transformer models of 4, 6, 8, 10 and 12 layers, respectively.",
"This choice is due to the fact that our experimental results seem to indicate that the information in layers 1 to 3 is not structured enough to achieve satisfactory classification performance for our task.",
"This observation is in line with recent works on the effectiveness of partial encoders for semantic tasks similar to AS2 (Peters et al., 2019).",
"During inference, monolithic transformer models evaluate a sequence X through the entire computation graph to obtain the classification scores Y .",
"order for the model to distinguish between the two, a special token such as [SEP] or </s> is used.",
"Some models also use a second embedding layer to represent which sequence each symbol comes from.",
"6 Before being processed by a transformer model, sequences are typically prefixed by a start symbol, such as [CLS] or <s> .",
"This allows transformer models to accumulate knowledge about the entire sequence at this position without compromising token-specific representations (Devlin et al., 2019).",
"This means that when using SRN , examples are processed multiple times by similar layers for different R i , e.g., for i = 1 , all R i compute the same operations of the first ( i ) transformer layers, for i = 2 , N 1 rerankers compute the same ( i ) ( i + 1) , layers and so on.",
"A more computationally-efficient approach is to share all the common transformer blocks between the different rerankers in SRN .",
"We speed up this computation by using one transformer encoder to implement all required R i .",
"This can be easily obtained by adding a classification layer C ( i ) after each ( i ) layers (see Figure 1).",
"Consequently, given a sample X , the classifiers C ( i ) produces scores y ( i ) only using a partial encoding.",
"To build a CT model, we use each C ( i ) to build rerankers R i , and select the top k i candidates to score with the subsequent rerankers R i +1 .",
"We use the same setting choices of N and ( i ) described in Section 4.2.",
"Finally, we observed the best performance when all encodings in H ( i ) are used as input to partial classifier C ( i ) , rather than just the partial encoding of the classification token h ( i ) , 0 .",
"Therefore, we use their average to obtain score y ( i ) = C ( i ) ( 1 m (cid:80) l =1",
",..,m h ( i ) ,l ) , In line with Kovaleva et al. (2019), we hypothesize that, at lower encoding layers, long dependencies might not be properly accounted in h ( i ) , 0 .",
"However, in our experiments, we found no benefits in further parametrizing this operation, e.g., by either using more complex networks or weighting the average operation.",
"The training of the proposed model is conducted in a multi-task fashion.",
"For every mini-batch, we randomly sample one of the rankers R i (including the final output ranker), calculate its loss against the target labels, and back-propagate its loss throughout the entire model down to the embedding layers.",
"We experimented with several more complex sampling strategies, including a round-robin selection process and a parametrized bias towards early rankers for the first few epochs, but we ultimately found that uniform sampling works best.",
"We also empirically determined that, for all classifiers C ( i ) , backpropagating the loss to the input embeddings, as opposed to stopping it at layer ( i 1) , is crucial to ensure convergence.",
"A possible explanation could be: enabling each classifier to influence the input representation during backpropagation ensures that later rerankers are more robust against variance in partial encodings, induced by early classifiers.",
"We experimentally found that if the gradient does not flow throughout the different blocks, the development set performance for later classifiers drops when early classifiers start converging.",
"Recall that we are interested in speeding up inference for classification tasks such as answer selection, where hundreds of candidates are associated with each question.",
"Therefore, we can assume without loss of generality that each batch of samples B = { X 1 , . . . , X b } contains candidate answers for the same question.",
"We use our partial classifiers to throw away a fraction of candidates, to increase throughput.",
"That is, we discard k i = (cid:98) k i 1 (cid:99) candidates, where (cid:98)(cid:99) rounds k i 1 down to the closest integer.",
"For instance, let = 0 .",
"3 , batch size b = 128 ; further, recall that, in our experiments, a CT consists of 5 cascade rerankers.",
"Then, after layer 4, the size of the batch gets reduced to 90 ( (cid:98) 0 .",
"3 128 (cid:99) = 38 candidates are discarded by the first classifier).",
"After the second classifier (layer 6), (cid:98) 0 .",
"3 90 (cid:99) = 27 examples are further removed, for an effective batch size of 63 .",
"By layer 12, only 31 samples are left, i.e., the instance number scored by the final classifier is reduced by more than 4 times.",
"Our approach has the effect of improving the throughput of a transformer model by reducing the average batch size during inference: the throughput of any neural model is capped by the maximum number of examples it can process in parallel (i.e., the size of each batch), and said number is usually ceiled by the amount of memory available to the model (e.g., RAM on GPU).",
"The monolithic models have a constant batch size at inference; however, because the batch size for a cascade model varies while processing a batch, we can size our network with respect to its average batch size, thus increasing the number of samples we initially have in a batch.",
"In the example above, suppose that the hardware requirement dictates a maximum batch size of 84 for the monolithic model.",
"As the average batch size for the cascading model is (4 128 + 2 90 + 2 63 + 2 44 + 2 28) / 12 = 80 .",
"2 < 84 , we can process a batch of 128 instances without violating memory constrains, increasing throughput by 52% .",
"We remark that using a fixed is crucial to obtain the performance gains we described: if we were to employ a score-based thresholding ap-ASNQ GPD TRECQA WikiQA TRAIN Questions 57,242 1,000 1,227 873 Avg cand.",
"proach (that is, discard all candidates with score below a given threshold), we could not determine the size of batches throughout the cascade, thus making it impossible to efficiently scale our system.",
"On the other hand, we note that nothing in our implementations prevents potentially correct candidates from being dropped when using CT.",
"However, as we will show in Section 5, an opportune choice of a threshold and good accuracy of early classifiers ensure high probability of having at least one positive example in the candidate set for the last classifier of the cascade.",
"We present three sets of experiments designed to evaluate CT.",
"In the first (Section 5.3), we show that our proposed approach without any selection produces comparable or superior results with respect to the state of the art of AS2, thanks to its stability properties; in the second (Section 5.4), we compare our Cascade Transformer with a vanilla transformer, as well as a sequence of transformer models trained independently; finally, in the third (Section 5.5), we explore the tuning of the drop ratio, .",
"TRECQA & WikiQA Traditional benchmarks used for AS2, such as TRECQA (Wang et al., 2007) and WikiQA (Yang et al., 2015), typically contain a limited number of candidates for each question.",
"Therefore, while they are very useful to compare accuracy of AS2 systems with the state of the art, they do not enable testing large scale passage reranking, i.e., inference on hundreds or thousand of answer candidates.",
"Therefore, we evaluated our approach (Sec. 4.3) on two datasets: ASNQ, which is publicly available, and our GPD dataset.",
"We still leverage TRECQA and WikiQA to show that that our cascade system has comparable performance to state-of-the-art transformer models when no filter-ing is applied.",
"ASNQ The Answer Sentence Natural Questions dataset (Garg et al., 2020) is a large collection (23M samples) of question-answer pairs, which is two orders of magnitude larger than most public AS2 datasets.",
"It was obtained by extracting sentence candidates from the Google Natural Question (NQ) benchmark (Kwiatkowski et al., 2019).",
"Samples in NQ consists of tuples (cid:104) question , answer long , answer short , label (cid:105) , where answer long contains multiple sentences, answer short is fragment of a sentence, and label is a binary value indicating whether answer long is correct.",
"The positive samples were obtained by extracting sentences from answer long that contain answer short ; all other sentences are labeled as negative.",
"The original release of ANSQ 7 only contains train and development splits; we further split the dev.",
"set to both have dev.",
"and test sets.",
"GPD The General Purpose Dataset is part of our efforts to study large scale web QA and evaluate performance of AS2 systems.",
"We built GPD using a search engine to retrieve up to 100 candidate documents for a set of given questions.",
"Then, we extracted all candidate sentences from such documents, and rank them using a vanilla transformer model, such as the one described in Sec. 4.1.",
"Finally, the top 100 ranked sentences were manually annotated as correct or incorrect answers.",
"We measure the accuracy of our approach on ASNQ and GPD using four metrics: Mean Average Precision (MAP), Mean Reciprocal Rank (MRR), Precision at 1 of ranked candidates (P@1), and Normalized Discounted Cumulative Gain at 10 of retrieved candidates (nDCG@10).",
"While the first two metrics capture the overall system performance, the latter two are better suited to evaluate systems with many candidates, as they focus more on Precision.",
"For WikiQA and TRECQA, we use MAP and MRR.",
"Our models are fine-tuned starting from a pretrained RoBERTa encoder (Liu et al., 2019c).",
"We chose this transformer model over others due to its strong performance on answer selection tasks (Garg et al., 2020).",
"Specifically, we use the BASE 7 https://github.com/alexa/wqa_tanda Model WikiQA TRECQAMAP MRR MAP MRR CA1 (WangandJiang,2016) 74.3 75.4 CA2 (Yoonetal.,2019) 83.4 84.8 87.5 94.0 TANDABASE (Gargetal.,2020) 88.9 90.1 91.4 95.2 4 layers TANDA 80.5 80.9 77.2 83.1 6 layers TANDA 82.1 82.9 78.5 88.4 8 layers TANDA 85.7 86.7 88.2 94.7 10 layers TANDA 89.0 90.0 90.5 95.9 Our TANDABASE 89.1 90.1 91.6 96.0 CT ( 4 layers, = 0 . 0 ) 60.1 60.2 67.9 74.7 CT ( 6 layers, = 0 . 0 ) 79.8 80.3 89.7 95.0 CT ( 8 layers, = 0 . 0 ) 84.8 85.4 92.3 95.3 CT ( 10 layers, = 0 . 0 ) 89.7 89.8 92.3 95.6 CT ( 12 layers, = 0 . 0 ) 89.9 91.0 92.4 96.7 Table 2: Comparison on two AS2 academic datasets.",
"variant (768-dimensional embeddings, 12 layers, 12 heads, and 3072 hidden units), as it is more appropriate for efficient classification.",
"When applicable 8 , we fine-tune our models using the two-step transfer and adapt (TANDA) technique introduced by Garg et al. (2020).",
"As mentioned in Section 4.3, we optimize our model in a multi-task setting; that is, for each mini-batch, we randomly sample one of the output layers of the CT classifiers to backpropagate its loss to all layers below.",
"While we evaluated different sampling techniques, we found that a simple uniform distribution is sufficient and allows the model to converge quickly.",
"Our models are optimized using Adam (Kingma and Ba, 2014) using triangular learning rate (Smith, 2017) with a 4 , 000 updates ramp-up 9 , and a peak learning rate l r = 1 e 6 .",
"Batch size was set to up to 2 , 000 tokens per mini-batch for CT models.",
"For the partial and final classifiers, we use 3-layers feed-forward modules with with 768 hidden units and tanh activation function.",
"Like the original BERT implementation, we use dropout value of 0.1 on all dense and attention layers.",
"We implemented our system using MxNet 1.5 (Chen et al., 2015) and GluonNLP 0.8.1 (Guo et al., 2019a) on a machine with 8 NVIDIA Tesla V100 GPUs, each with 16GB of memory.",
"8 When fine-tuning on GPD, TRECQA, and WikiQA, we perform a transfer step on ASNQ before adapting to our target dataset; for ASNQ, we directly fine-tune on it.",
"9 On ASNQ, it is roughly equivalent to 950 k samples or about 4% of the training set.",
"In oder to better assess how our training strategy for CT models compare with a monolithic transformer, we evaluated the performance of our system on two well known AS2 datasets, WikiQA and TRECQA.",
"The results of these experiments are presented in Table 2.",
"Note how, in this case, we are not applying any drop to our cascade classifier, as it is not necessary on this dataset: all sentences fit comfortably in one mini batch (see dataset statistics in Table 1), so we would not observe any advantage in pruning candidates.",
"Instead, we focus on evaluating how our training strategy affects performance of partial and final classifiers of a CT model.",
"Our experiment shows that classifiers in a CT model achieve competitive performance with respect to the state of the art: our 12-layer transformer model trained in cascade outperforms TANDABASE by 0 .",
"8 and 0 .",
"9 absolute points in MAP ( 0 . 9 and 0 . 7 in MRR).",
"10, 8, and 6 layer models are equally comparable, differing at most by 2 .",
"3 absolute MAP points on WikiQA, and outscoring TANDA by up to 11 .",
"2 absolute MAP points on TRECQA.",
"However, we observed meaningful differences between the performance of the 4-layers cascade model and its monolithic counterparts.",
"We hypothesize that this is due to the fact that lower layers are not typically well suited for classification when used as part of a larger model (Peters et al., 2019); this observation is reinforced by the fact that the 4 layers TANDA model shown in Table 2 takes four times the number of the iterations of any other model to converge to a local optimum.",
"Overall, these experiments show that our training strategy is not only effective for CT models, but can also produce smaller transformer models with good accuracy without separate fine-tuning.",
"The main results for our CT approach are presented in Table 3: we compared it with ( i ) a state-of-the-art monolithic transformer (TANDABASE ), ( ii ) smaller, monolithic transformer models with 4-10 layers, and ( iii ) a sequential ranker (SR) consisting of 5 monolithic transformer models with 4 , 6 , 8 , 10 and 12 layers trained independently.",
"For CT, we report performance of each classifier individually (layers 4 up to 12, which is equivalent to a full transformer model).",
"We test SR and CT with drop ratio 30%, 40%, 50%.",
"Finally, for each model, we report the relative cost per batch compared to a base transformer model with 12 layers.",
"Overall, we observed that our cascade models are competitive with monolithic transformers on both ASNQ and GPD datasets.",
"In particular, when no selection is applied ( = 0 . 0 ), a 12 layer cascade model performs equal or better to TANDABASE : on ASNQ, we improve P@1 by 2.1% ( 53 . 2 vs 52 . 1 ), and MAP by 1.2% ( 66 . 3 vs 65 . 5 ); on GDP, we achieve the same P@1 ( 67 . 5 ), and a slightly lower MAP ( 57 . 8 vs 58 . 0 ).",
"This indicates that, despite the multitasking setup, out method is competitive with the state of the art.",
"A drop rate > 0 .",
"0 produces a small degradation in accuracy, at most, while significantly reducing the number of operations per batch ( 37% ).",
"In particular, when = 0 .",
"3 , we achieve less than 2% drop in P@1 on GPD, when compared to TANDABASE ; on ANSQ, we slightly improve over it ( 52 . 9 vs 52 . 1 ).",
"We observe a more pronounced drop in performance for MAP, this is to be expected, as intermediate classification layers are designed to drop a significant number of candidates.",
"For larger values of , such as 0 .",
"5 , we note that we achieve significantly better performance than monolithic transformer of similar computational cost.",
"For example, CT achieves an 11 .",
"2 % improvement in P@1 over a 6-layers TANDA model ( 62 . 4 vs 56 . 1 ) on GPD; a similar improvement is obtained on ANSQ ( +11 . 0% , 52 . 4 vs 47 . 2 ).",
"Finally, our model is also competitive with respect to a sequential transformer with equivalent drop rates, while being between 1.9 to 2.4 times more efficient.",
"This is because an SR model made of independent TANDA models cannot re-use encodings generated by smaller models as CT does.",
"Finally, we examined how different values for drop ratio affect the performance of CT models.",
"In particular, we performed an exhaustive grid-search on a CT model trained on the GPD dataset for drop ratio values { p 1 , p 2 , p 3 , p 4 } , with p k { 0 .",
"1 , 0 .",
"2 , . . . , 0 .",
"6 } .",
"The performance is reported in Figure 2 with respect to the relative computational cost per batch of a configuration when compared with a TANDABASE model.",
"Overall, we found that CT models are robust with respect to the choice of { p k } 4 k =1 .",
"We observe moderate degradation for higher drop ratio values (e.g., P@1 varies from 85 . 6 to 80 . 0 ).",
"Further, as expected, performance increases for models with higher computational cost per batch, although they taper off for CT models with relative cost 70% .",
"On the other hand, the grid search results do not seem to suggest an effective strategy to pick optimal values for { p k } 4 k =1 , and, in our experiments, we ended up choosing the same values for all drop rates.",
"In the future, we would be like to learn such values while training the cascade model itself.",
"throughput.",
"Compared to a traditional monolithic stacked transformer model, our approach leverages classifiers placed at different encoding stages to prune candidates in a batch and improve model throughput.",
"Our experiments show that a CT model not only achieves comparable performance to a traditional transformer model while reducing computational cost per batch by over 37% , but also that our training strategy is stable and jointly produces smaller transformer models that are suitable for classification when higher throughput and lower latency goals must be met.",
"In future work, we plan to explore techniques to automatically learn where to place intermediate classifiers, and what drop ratio to use for each one of them."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"method",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"objective",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"objective",
"objective"
] |
[
"The recent GPT-3 model (Brown et al., 2020) achieves remarkable few-shot performance solely by leveraging a natural-language prompt and a few task demonstrations as input context.",
"Inspired by their findings, we study few-shot learning in a more practical scenario, where we use smaller language models for which fine-tuning is computationally efficient.",
"We present LM-BFFb etter few-shot fine-tuning of language models 1 a suite of simple and complementary techniques for fine-tuning language models on a small number of annotated examples.",
"Our approach includes (1) prompt-based fine-tuning together with a novel pipeline for automating prompt generation; and (2) a refined strategy for dynamically and selectively incorporating demonstrations into each context.",
"Finally, we present a systematic evaluation for analyzing few-shot performance on a range of NLP tasks, including classification and regression.",
"Our experiments demonstrate that our methods combine to dramatically outperform standard fine-tuning procedures in this low resource setting, achieving up to 30% absolute improvement, and 11% on average across all tasks.",
"Our approach makes minimal assumptions on task resources and domain expertise, and hence constitutes a strong task-agnostic method for few-shot learning.",
"2 1 Introduction The GPT-3 model (Brown et al., 2020) has made waves in the NLP community by demonstrating astounding few-shot capabilities on myriad language understanding tasks.",
"Given only a natural language prompt and a few demonstrations of the task, GPT-3 is able to make accurate predictions without updating any of the weights of its underlying lan-* The first two authors contributed equally.",
"guage model.",
"However, while remarkable, GPT-3 consists of 175B parameters, which makes it challenging to use in most real-wold applications.",
"In this work, we study a more practical scenario in which we only assume access to a moderately-sized language model such as BERT (Devlin et al., 2019) or RoBERTa (Liu et al., 2019), and a small number of examples (i.e., a few-shot setting), which we can use to fine-tune the weights of the language model.",
"This setting is appealing as (1) such models can be trained on typical research hardware; (2) few-shot settings are realistic, as it is generally both easy to acquire a few annotations (e.g., 32 examples) and efficient to train on them; and (3) updating parameters typically leads to better performance.",
"Inspired by GPT-3's findings, we propose several novel strategies for expanding its few-shot learning abilities to our setting, considering both classification andfor the first timeregression.",
"First, we follow the route of prompt-based prediction, first developed by the GPT series (Radford et al., 2018, 2019; Brown et al., 2020) for zero-shot prediction and recently studied by PET (Schick and Schutze, 2021a,b) for fine-tuning.",
"Prompt-based prediction treats the downstream task as a (masked) language modeling problem, where the model directly generates a textual response (referred to as a label word ) to a given prompt defined by a task-specific template (see Figure",
"1(c)).",
"Finding the right prompts, however, is an artrequiring both domain expertise and an understanding of the language model's inner workings.",
"Even if significant effort is invested, manual prompts are likely to be suboptimal.",
"We address this issue by introducing automatic prompt generation, including a pruned brute-force search to identify the best working label words, and a novel decoding objective to automatically generate templates using the generative T5 model (Raffel et al., 2020)all of which only require the few-shot training data.",
"This allows us MLMhead no utterly MLMhead great (label:positive) terrible (label:negative) label:positive label:negative CLShead [CLS] No reason to watch .",
"to cheaply obtain effective prompts that match or outperform our manually chosen ones.",
"Second, we adopt the idea of incorporating demonstrations as additional context.",
"GPT-3's naive in-context learning paradigm picks up to 32 randomly sampled examples, and concatenates them with the input.",
"This method is not guaranteed to prioritize the most informative demonstrations, and mixing random examples from different classes together creates long contexts which can be hard to learn from.",
"Additionally, the number of usable demonstrations is bounded by the model's maximum input length.",
"We develop a more refined strategy, where, for each input, we randomly sample a single example at a time from each class to create multiple, minimal demonstration sets .",
"We also devise a novel sampling strategy that pairs inputs with similar examples, thereby providing the model with more discriminative comparisons.",
"We present a systematic evaluation for analyzing few-shot performance on 8 single-sentence and 7 sentence-pair NLP tasks.",
"We observe that given a small number of training examples, (1) prompt-based fine-tuning largely outperforms standard fine-tuning; (2) our automatic prompt search method matches or outperforms manual prompts; and (3) incorporating demonstrations is effective for fine-tuning, and boosts few-shot performance.",
"Together, these simple-yet-effective methods contribute towards a dramatic improvement across the tasks we evaluate on, and we obtain gains up to 30% absolute improvement (11% on average) compared to standard fine-tuning.",
"For instance, we find that a RoBERTa-large model achieves around 90% accuracy on most binary sentence classification tasks, while only relying on 32 training examples.",
"We refer to our approach as LM-BFF, better few-shot fine-tuning of language models: a strong, task-agnostic method for few-shot learning.",
"Language model prompting.",
"The GPT series (Radford et al., 2018, 2019; Brown et al., 2020) fueled the development of prompt-based learning, and we follow many of its core concepts.",
"We are also greatly inspired by the recent PET work (Schick and Schutze, 2021a,b), although they mainly focus on a semi-supervised setting where a large set of unlabeled examples are provided.",
"We only use a few annotated examples as supervision, and also explore automatically generated prompts and fine-tuning with demonstrations.",
"Furthermore, we deviate from their evaluation by providing a more rigorous framework, as we will discuss in 3.",
"Finally, there is a large body of work on prompting for mining knowledge from pre-trained models (Trinh and Le, 2018; Petroni et al., 2019; Davison et al., 2019; Talmor et al., 2020, inter alia ).",
"Different from these works, we focus on leveraging prompting for fine-tuning on downstream tasks.",
"Automatic prompt search.",
"Schick and Schutze (2021a) and Schick et al. (2020) explore ways of identifying label words automatically, however, none of these results lead to better performance compared to hand-picked ones.",
"In contrast, our method searches over both templates and label words, and is able to match or outperform our manual prompts.",
"Several other attempts have been made in additionyet these approaches either operate in limited domains, such as finding patterns to express specific relations (Jiang et al., 2020), or require a large number of examples for gradient-guided search (Shin et al., 2020; Zhong et al., 2021).",
"Our approach aims to develop general-purpose search methods that rely only on a few annotations.",
"Fine-tuning of language models.",
"A number of recent studies have focused on better methods for fine-tuning language models (Howard and Ruder, 2018; Dodge et al., 2020; Lee et al., 2020; Zhang et al., 2021).",
"These works mainly focus on optimization and regularization techniques to stabilize fine-tuning.",
"Here we use standard optimization techniques, and instead mainly focus our efforts on better prompt-based fine-tuning in a more extreme few-shot setting.",
"We anticipate that results of these studies are largely complementary to ours.",
"Few-shot learning.",
"Broadly speaking, our setting is also connected to other few-shot learning paradigms in NLP, including (1) semi-supervised learning (Miyato et al., 2017; Xie et al., 2020; Chen et al., 2020), where a set of unlabeled examples are given; (2) meta-learning (Yu et al., 2018; Han et al., 2018; Bansal et al., 2020a,b; Bao et al., 2020), where a set of auxiliary tasks are given; and (3) intermediate training (Phang et al., 2018; Yin et al., 2020), where a related, intermediate task is given.",
"We deviate from these settings by making minimal assumptions about available resources: we only assume a few annotated examples and a pre-trained language model.",
"Our focus is on understanding how far we can push without any other advantages.",
"Task formulation.",
"In this work, we assume access to a pre-trained language model L that we wish to fine-tune on a task D with a label space Y .",
"For the task, we only assume K training examples per class 3 for the task's training set D train , such that the total number of examples is K tot = K |Y| , and D train = { ( x i in , y i ) } K tot i =1 .",
"Our goal is then to develop task-agnostic learning strategies that generalize well to an unseen test set ( x test in , y test ) D test .",
"For model selection and hyper-parameter tuning, we assume a development set D dev , of the same size as the few-shot training set, i.e., |D dev | = |D train | .",
"This distinction is important: using a larger development set confers a significant advantage (see our 3 For regression, we partition the data into two classes according to being above or below the median value. experiments in Appendix A), and subverts our initial goal of learning from limited data.",
"4 For all of the following experiments (unless specified other-wise), we take L = RoBERTa-large and K = 16 .",
"Evaluation datasets.",
"We conduct a systematic study across 8 single-sentence and 7 sentence-pair English tasks, including 8 tasks from the GLUE benchmark (Wang et al., 2019), SNLI (Bowman et al., 2015), and 6 other popular sentence classification tasks (SST-5, MR, CR, MPQA, Subj, TREC).",
"All of the dataset details are provided in Appendix B. For single-sentence tasks, the goal is to make a prediction based on an input sentence x in = x 1 , such as whether a movie review is positive or not.",
"For sentence-pair tasks, the goal is to take a pair of input sentences x in = ( x 1 , x 2 ) and predict the relationship between them.",
"We also interchangeably refer to the inputs as < S 1 > or ( < S 1 > , < S 2 > ).",
"Note that we mainly use SST-2 and SNLI for pilot experiments and model development, making it close to a true few-shot setting, at least for all the other datasets we evaluate on.",
"Evaluation protocol.",
"Systematically evaluating few-shot performance can be tricky.",
"It is wellknown that fine-tuning on small datasets can suffer from instability (Dodge et al., 2020; Zhang et al., 2021), and results may change dramatically given a new split of data.",
"To account for this, we measure average performance across 5 different randomly sampled D train and D dev splits.",
"This issue has also been discussed in Schick and Schutze (2021b) they suggest using a fixed set of training examples.",
"We argue that sampling multiple splits gives a more robust measure of performance, and a better estimate of the variance.",
"We also observe that hyper-parameters can make a significant difference, thus we sweep multiple hyper-parameters for each data sample, and take the best setting as measured on the D dev of that sample (see Appendix C.1).",
"Given a masked language model L , we first convert input x in to a token sequence x , and the language model L then maps x to a sequence of hidden vectors { h k R d } .",
"During standard fine-tuning, we usually take x single = [CLS] x 1 [SEP] or x pair = [CLS] x 1 [SEP] x 2 [SEP] .",
"For down-4 In contrast, Schick and Sch utze (2021a,b) do not use a development set, and adopt a set of hyper-parameters based on practical considerations.",
"This is akin to shooting in the dark on a setting that we show can have unintuitive outcomes.",
"stream classification tasks with a label space Y , we train a task-specific head, softmax( W o h [CLS] ) , by maximizing the log-probability of the correct label, where h [CLS] is the hidden vector of [CLS] , and W o R |Y| d is a set of randomly initialized parameters introduced at the start of fine-tuning.",
"Similarly, for a regression task, we can introduce w o R d and optimize the mean squared error between w o h [CLS] and the gold label.",
"In either case, the number of new parameters can be substantial for example, a simple binary classification task will introduce 2,048 new parameters for a RoBERTa-large modelmaking it challenging to learn from a small amount of annotated data (e.g., 32 examples).",
"An alternative approach to solving this problem is prompt-based fine-tuning , in which L is directly tasked with auto-completing natural language prompts.",
"For instance, we can formulate a binary sentiment classification task using a prompt with input x 1 (e.g., No reason to watch it . ) as: x prompt = [CLS] x 1 It was [MASK] .",
"and let L decide whether it is more appropriate to fill in great",
"(positive)",
"or terrible",
"(negative)",
"for [MASK] .",
"We now formalize this approach for classification and regression",
"(4.1 and 4.2), and discuss the importance of prompt selection",
"(4.3).",
"Let M : Y V be a mapping from the task label space to individual words 5 in the vocabulary",
"5 More generally, we can consider a one-to-many mapping M : Y 2 |Y| in which we map labels to sets of words.",
"However, we did not find significant gains in our experiments.",
"V of L .",
"Then for each x in , let the manipulation x prompt = T",
"( x in )",
"be a masked language modeling",
"(MLM)",
"input which contains one [MASK] token.",
"In this way, we can treat our task as an MLM, and model the probability of predicting class y Y as: p",
"where h [MASK] is the hidden vector of [MASK] and w v denotes the pre-softmax vector corresponding to v V .",
"When supervised examples {",
"( x in , y )",
"} are available, L can be fine-tuned to minimize the cross-entropy loss.",
"It is important to note that this approach re-uses the pre-trained weights w v and does not introduce any new parameters.",
"It also reduces the gap between pre-training and fine-tuning, making it more effective in few-shot scenarios.",
"We assume the same basic setup as in classification, but treat the label space Y as a bounded interval [ v l , v u ] .",
"Inspired by Mettes et al.",
"(2019), we model the problem as an interpolation between two opposing poles, { y l , y u } , with values v l and v u respectively.",
"For instance, we can formulate our previous sentiment analysis task as a regression problem in the range [0 , 1] , where we slide between terrible",
"( v l = 0 )",
"and great",
"( v u = 1 ).",
"In this way, we can express y as a mixture model : y = v l p",
"( y l | x in )",
"+ v u p",
"( y u | x in )",
",",
"(2)",
"M : { y l , y u } V , and model p",
"( y u | x in )",
"the same as Eq.",
"(1).",
"We fine-tune L to minimize the KL-divergence between the inferred p",
"( y u | x in )",
"and the observed mixture weight,",
"( y v l )",
"/",
"( v u v l )",
".",
"4.3 Manual prompts: the good and the bad The key challenge is to construct the template T and label words M",
"( Y )",
"we refer to these two together as a prompt P .",
"Previous works",
"(Schick and Schutze, 2021a,b)",
"hand-craft both the templates and label words, which usually requires domain expertise and trial-and-error.",
"Table 1 summarizes manual templates and label words chosen for each dataset in our experiments.",
"These templates and label words were designed by intuition, and by considering formats used in previous literature.",
"To better understand what constitutes a good template or label word, we conduct a pilot study on SST-2 and SNLI.",
"Table 2 shows that different prompts can lead to substantial differences in final accuracy.",
"Specifically, when a template is fixed, the better the label words match the semantic classes, the better the final accuracy is",
"( great / terrible > good / bad > cat / dog ).",
"In extreme cases where we swap plausible label words",
"(e.g., terrible / great ), we achieve the worst overall performance.",
"6 Furthermore, with the same set of label words, even a small change in the template can make a difference.",
"For example, for SNLI, if we put [MASK] at the end, or swap sentence order, we observe a > 10% drop.",
"The above evidence clearly underlines the 6 It is unclear, however, why RoBERTa thinks that cat is more positive than dog.",
"importance of selecting good templates and label words.",
"Searching for prompts, however, is hard, as the search space can be very largeespecially for the template.",
"Even worse, we only have a few examples to use to guide our search, which can easily overfit.",
"We will address these issues next.",
"We now explore principled ways of automating the search process for label words",
"(5.1)",
"and templates",
"(5.2).",
"Our goals are to reduce the human involvement required to design prompts, and to find more optimal settings than those that we manually choose.",
"Here, we assume a classification task, but the process for regression is analogous.",
"We first study how to construct a label word mapping M that maximizes accuracy on D dev after fine-tuning, given a fixed template T .",
"Naively searching all possible assignments, however, is",
"(1)",
"generally intractable, as the search space is exponential in the number of classes; and",
"(2)",
"prone to overfitting, as we will tend to uncover spurious correlations given only a few annotations.",
"As a simple solution, for each class c Y , we construct a pruned set V c V of the top k vocabulary words based on their conditional likelihood using the initial L .",
"That is, let D c train D train be the subset of all examples of class c .",
"We take V c as Top k v V",
"where PL denotes the output probability distribution of L .",
"To further narrow down the search space, we find the top n assignments over the pruned space that maximize zero-shot accuracy on D train",
"(both n and k are hyper-parameters, see Appendix C.2).",
"Then we fine-tune all top n assignments, and rerank to find the best one using D dev .",
"This approach is similar to the automatic verbalizer search methods in Schick and Schutze",
"(2021a); Schick et al.",
"(2020), except that we use a much simpler search process",
"(brute-force)",
"and also apply re-ranking which we find to be quite helpful.",
"Next, we study how to generate a diverse set of templates {T } automatically from a fixed set of label words M",
"( Y )",
".",
"To address this challenging problem, we propose to use T5",
"(Raffel et al., 2020), Best template Generated templates Training examples for label:negative T5 Training examples for label:positive Decode < S 1 > A [MASK] one.",
"a large pre-trained text-to-text Transformer.",
"T5 is pre-trained to fill in missing spans",
"(replaced by T5 mask tokens, e.g., <X> or <Y> )",
"in its input.",
"For example, given the input Thank you <X> me to your party <Y> week , T5 is trained to generate <X> for inviting <Y> last <Z> , meaning that for inviting is the replacement for <X> and last is the replacement for <Y> .",
"This is well suited for prompt generation: we can simply take input sentences from D train and let the T5 model construct the template T , without having to specify a pre-defined number of tokens for it.",
"< S 1 > <X> M",
"( y )",
"<Y> < S 1 > , < S 1 > < S 1 > <X> M",
"( y )",
"<Y> , < S 1 > , < S 2 > < S 1 > <X> M",
"( y )",
"<Y> < S 2 > .",
"As shown in Figure 2, we rely on the T5 model to fill in the placeholders.",
"When decoding, our goal here is to find an output that can work well for all examples in D train , i.e., the output template T that maximizes",
"(cid:80)",
"( x in ,y )",
"D train log P T5",
"( T | T g",
"( x in , y ))",
", where P T5 denotes the output probability distribution of T5.",
"It can be decomposed according to: |T |",
"|T |",
"We use beam search to decode multiple template candidates.",
"Concretely, we use a wide beam width",
"(e.g., 100)",
"to cheaply obtain a large set of diverse templates.",
"We then fine-tune each generated template on D train and use D dev to either pick the single template with the best performance",
"(Table 3), or 7 We consider putting the label word both before and after the input sentence for single-sentence tasks.",
"However, we find that it is always better to put the label words in the middle",
"(between the two sentences)",
"for sentence-pair tasks.",
"the top k templates to use as an ensemble",
"(Table 4).",
"Though it might appear to be expensive to fine-tune the model on each individual template, this is fast in practice due to the small size of D train , and is also fully automated: making it easy to use, compared to manually tuning prompts for each dataset.",
"In this section, we study whether we can leverage demonstrations when fine-tuning medium-sized LMs, and find better ways to exploit them.",
"GPT-3's naive approach to in-context learning simply involves concatenating the input with up to 32 examples randomly drawn from the training set.",
"This approach is suboptimal as",
"(1)",
"the number of available demonstrations is bounded by the model's maximum input length; 8 and",
"(2)",
"mixing numerous random examples from different classes together creates extremely long contexts which can be hard to leverage, especially for a smaller model.",
"To address these issues, we propose a simpler solution: at each training step, we randomly sample one 9 example",
"(cid:0)",
"x",
"( c )",
"in , y",
"( c )",
"(cid:1)",
"D train from each class, convert it into T",
"(cid:0)",
"x",
"( c )",
"in",
"(cid:1)",
"with [MASK] replaced by M",
"( y",
"( c )",
")",
"we denote this as T",
"(cid:0)",
"x",
"( c )",
"in , y",
"( c )",
"(cid:1)",
"and then concatenate them with x in",
"(Figure",
"1(c)): T",
"Here denotes concatenation of input sequences.",
"During both training and inference we sample multiple demonstration sets for each x in .",
"Note that both x in and demonstration examples are sampled from the same set D train during training.",
"At testing time, we still sample demonstration sets from D train and ensemble predictions across all sets.",
"We observe that controlling the construction of the demonstration examples {",
"( x",
"( c )",
"in , y",
"( c )",
")",
"} is crucial for good final performance.",
"For example, if the set of contrastive demonstrations x",
"( c )",
"in are all dramatically differentfrom each other, or from the query x in then it becomes challenging for the language model to decipher meaningful patterns.",
"As a result, the model may simply ignore 8 GPT-3 uses a context size of 2,048 while most smaller language models",
"(e.g., RoBERTa)",
"have a context size of 512.",
"9 We also explored sampling multiple examples per class, but did not observe any improvements.",
"the context, or even get confused by the additional examples.",
"To address this issue, we devise a simple strategy in which we only sample examples that are semantically close to x in .",
"Specifically, we use a pre-trained SBERT",
"(Reimers and Gurevych, 2019)",
"model to obtain embeddings for all input sentences",
"(for sentence-pair tasks, we use the concatenation of the two sentences).",
"Here we just feed the raw sentences without the templates into SBERT.",
"For each query x in and each label c Y , we sort all training instances with the label x D c train by their similarity score to the query cos( e",
"( x in )",
", e",
"( x ))",
", and only sample from the top r = 50% instances for each class to use as demonstrations.",
"We present our main results, and address several research questions pertaining to our LM-BFF approach.",
"Implementation details are in Appendix C. 7.1 Main results We use a RoBERTa-large model and set K = 16 in our experiments.",
"A comparison of using RoBERTa vs BERT can be found in Appendix D. For automatic prompt search, in our main table we report automatic template search only",
"(which consistently performs the best, see Table 5).",
"To put our results in perspective, we compare to a number of baselines, namely",
"(1)",
"standard fine-tuning in our few-shot setting;",
"(2)",
"standard fine-tuning using the full training set;",
"(3)",
"simply taking the most frequent class",
"(measured on the full training set);",
"(4)",
"prompt-based zero-shot prediction where we take our manual prompts and use L out-of-the-box without using any training examples; and",
"(5)",
"GPT-3 in-context learning, where we use the same prompt-based zero-shot setting, but augment the context with randomly sampled 32 demonstrations (and still use RoBERTa-large, not GPT-3).",
"Single-prompt results.",
"Table 3 shows our main results using a single prompt, either from our manually designed ones (Table 1) , or the best generated ones.",
"First, prompt-based zero-shot prediction achieves much better performance than the majority class, showing the pre-encoded knowledge in RoBERTa.",
"Also, GPT-3 in-context learning does not always improve over zero-shot prediction, likely because smaller language models are not expressive enough to use off-the-shelf like GPT-3.",
"Second, prompt-based fine-tuning can greatly outperform standard fine-tuning, both when using a manual prompt or a generated one.",
"CoLA is one interesting exception, as the input may be a nongrammatical sentence which is out of the distribution of L .",
"Generally, our automatically searched templates can achieve comparable or even higher results than manual ones, especially for tasks in which constructing strong manual templates is less intuitive (e.g., TREC, QNLI and MRPC).",
"Finally, using demonstrations in context leads to consistent gains in a majority of tasks.",
"In summary, our combined solutionfine-tuning with automatically searched templates and sampled demonstration setsachieves a 30% gain on SNLI compared to standard fine-tuning, and 11% gain on average.",
"Ensemble results.",
"An advantage of automatic prompt search is that we can generate as many prompts as we want, train individual models, and create large ensembles.",
"PET (Schick and Schutze, 2021a,b) also ensembles multiple models trained with manual prompts.",
"10 In Table 4, we make a direct comparison of our searched prompts and PET's manual prompts on MNLI and RTE (two 10 They then use unlabeled data and distillation to get a single model, which is outside of our scope.",
"datasets that we evaluate in common).",
"11 As the results show, an ensemble with multiple templates always improves performance.",
"An ensemble of the same number of automatic templates achieves comparable or better performance than the ensemble of PET's manual prompts.",
"Increasing the number of automatic templates brings further gains.",
"Table 5 gives the results of using manual vs automatic prompts.",
"For automatic prompts, we compare template search (Auto T), label word search (Auto L), and a joint variant (Auto T + L) in which we start from manual label words, apply Auto T, and then Auto L. In most cases, Auto T achieves comparable or higher performance than manual ones, and is consistently the best variant.",
"Auto L outperforms manual prompts on TREC and MRPCbut is considerably worse on SNLI.",
"Auto T + L is often better than Auto L, but only sometimes better than Auto T. Table 6 shows examples from Auto T and Auto L (A full list in Appendix E).",
"Auto T templates generally fit the context and label words well, but can contain biased peculiarities (e.g., { Yes/No } , no in SNLI).",
"For Auto L words, things are mixed: while most look intuitively reasonable, there are also some mysterious abnormalities (e.g., Hi for the entailment class in SNLI).",
"11 In the PET NLI templates, the hypothesis is put before the premise, which we actually found to be suboptimal.",
"In our experiments, we swap the two and get better results.",
"Table 7 compares the performance of demonstrations using uniform sampling to selective sampling by SBERT.",
"We acknowledge that SBERT is trained on SNLI and MNLI datasets, thus we also tried a simple sentence encoder using mean pooling of hidden representations from RoBERTa-large.",
"We find that in either case, using selective sampling outperforms uniform sampling, highlighting the importance of sampling similar examples for incorporating demonstrations in context.",
"Figure 3 illustrates how standard fine-tuning and our LM-BFF compare as K increases.",
"For a simple task such as SST-2 (also see MR, CR and MPQA in Table 3), despite using only 32 total examples, LM-BFF has already nearly saturated its performance and is comparable to standard fine-tuning over the entire dataset.",
"On the harder task of SNLI, LM-BFF continues to improve as K increases while still maintaining a performance gap over standard fine-tuning, until the two converge around K = 256 .",
"Reformulating NLP tasks as MLM has exciting implications for few-shot learning, but also has limitations.",
"First, while LM-BFF greatly outperforms standard fine-tuning, Table 3 shows that, overall, the performance still substantially lags behind fine-tuning with thousands of examples, especially for harder tasks.",
"Additionally, just like standard fine-tuning, our results also suffer from high variance.",
"As described in 2, several recent studies have tried to counter instability in few-shot fine-tuning and we expect these methods to also help here.",
"With respect to automatic prompt generation, despite its effectiveness, we still find it practically challenging to expand the search space, or generalize well based on only approximately 32 examples.",
"This is partly due to our lingering reliance on some manual designeither manual templates (for label word search) or manual label words (for template search), which allows us to get our search off the ground, but does also bias it towards areas of the search space that we might have already imagined.",
"Finally, it is important to clarify that LM-BFF favors certain tasks which (1) can be naturally posed as a fill-in-the-blank problem; (2) have relatively short input sequences; and (3) do not contain many output classes.",
"Issues (2) and (3) might be ameliorated with longer-context language models (e.g., Beltagy et al., 2020).",
"For tasks that are not straightforward to formulate in prompting, such as structured prediction, issue (1) is more fundamental.",
"We leave it as an open question for future work.",
"In this paper we presented LM-BFF, a set of simple but effective techniques for fine-tuning language models using only a few examples.",
"Our approach proposes to (1) use prompt-based fine-tuning with automatically searched prompts; and (2) include selected task demonstrations (training examples) as part of the input context.",
"We show that our method outperforms vanilla fine-tuning by up to 30% (and 11 % on average).",
"We concluded by discussing the limitations of our approach, and posed open questions for future study.",
"We thank the members of Princeton, MIT, Ts-inghua NLP groups and the anonymous reviewers for their valuable feedback.",
"TG is supported by a Graduate Fellowship at Princeton University and AF is supported by an NSF Graduate Research Fellowship.",
"This research is also partly supported by a Google Research Scholar Award."
] | [
"abstain",
"method",
"method",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"objective",
"result",
"abstain",
"other",
"method",
"method",
"objective",
"method",
"other",
"method",
"other",
"other",
"objective",
"other",
"objective",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"other",
"other",
"other"
] |
[
"Most recent approaches use the sequence-to-sequence model for paraphrase generation.",
"The existing sequence-to-sequence model tends to memorize the words and the patterns in the training dataset instead of learning the meaning of the words.",
"Therefore, the generated sentences are often grammatically correct but semantically improper.",
"In this work, we introduce a novel model based on the encoder-decoder framework, called Word Embedding Attention Network (WEAN).",
"Our proposed model generates the words by querying distributed word representations (i.e. neural word embeddings), hoping to capturing the meaning of the according words.",
"Following previous work, we evaluate our model on two paraphrase-oriented tasks, namely text simplification and short text abstractive summarization.",
"Experimental results show that our model outperforms the sequence-to-sequence baseline by the BLEU score of 6.3 and 5.5 on two English text simplification datasets, and the ROUGE-2 F1 score of 5.7 on a Chinese summarization dataset.",
"Moreover, our model achieves state-of-the-art performances on these three benchmark datasets.",
"1 1 Introduction Paraphrase is a restatement of the meaning of a text using other words.",
"Many natural language generation tasks are paraphrase-orientated, such as text simplification and short text summarization.",
"Text simplification is to make the text easier to read and understand, especially for poor readers, while short text summarization is to generate a brief sentence to describe the short texts (e.g. posts on the social media).",
"Most recent approaches use sequence-to-sequence model for paraphrase generation (Prakash et al., 2016; Cao et al., 2017).",
"It 1 The code is available at https://github.com/ lancopku/WEAN compresses the source text information into dense vectors with the neural encoder, and the neural decoder generates the target text using the compressed vectors.",
"Although neural network models achieve success in paraphrase generation, there are still two major problems.",
"One of the problem is that the existing sequence-to-sequence model tends to memorize the words and the patterns in the training dataset instead of the meaning of the words.",
"The main reason is that the word generator (i.e. the output layer of the decoder) does not model the semantic information.",
"The word generator, which consists of a linear transformation and a softmax operation, converts the Recurrent Neural Network (RNN) output from a small dimension (e.g. 500) to a much larger dimension (e.g. 50,000 words in the vocabulary), where each dimension represents the score of each word.",
"The latent assump-tion of the word generator is that each word is in-dependent and the score is irrelevant to each other.",
"Therefore, the scores of a word and its synonyms may be of great difference, which means the word generator learns the word itself rather than the relationship between words.",
"The other problem is that the word generator has a huge number of parameters.",
"Suppose we have a sequence-to-sequence model with a hidden size of 500 and a vocabulary size of 50,000.",
"The word generator has up to 25 million parameters, which is even larger than other parts of the encoder-decoder model in total.",
"The huge size of parameters will result in slow convergence, because there are a lot of parameters to be learned.",
"Moreover, under the distributed framework, the more parameters a model has, the more bandwidth and memory it consumes.",
"To tackle both of the problems, we propose a novel model called Word Embedding Attention Network (WEAN).",
"The word generator of WEAN 196 is attention based, instead of the simple linear softmax operation.",
"In our attention based word generator, the RNN output is a query, the candidate words are the values, and the corresponding word representations are the keys.",
"In order to predict the word, the attention mechanism is used to select the value matching the query most, by means of querying the keys.",
"In this way, our model generates the words according to the distributed word representations (i.e. neural word embeddings) in a retrieval style rather than the traditional generative style.",
"Our model is able to capture the semantic meaning of a word by referring to its embedding.",
"Besides, the attention mechanism has a much smaller number of parameters compared with the linear transformation directly from the RNN output space to the vocabulary space.",
"The reduction of the parameters can increase the convergence rate and speed up the training process.",
"Moreover, the word embedding is updated from three sources: the input of the encoder, the input of the decoder, and the query of the output layer.",
"Following previous work (Cao et al., 2017), we evaluate our model on two paraphrase-oriented tasks, namely text simplification and short text abstractive summarization.",
"Experimental results show that our model outperforms the sequence-to-sequence baseline by the BLEU score of 6.3 and 5.5 on two English text simplification datasets, and the ROUGE-2 F1 score of 5.7 on a Chinese summarization dataset.",
"Moreover, our model achieves state-of-the-art performances on all of the benchmark datasets.",
"We propose a novel model based on the encoder-decoder framework, which generates the words by querying distributed word representations with the attention mechanism.",
"In this section, we first present the overview of the model architecture.",
"Then, we explain the details of the word generation, especially the way to query word embeddings.",
"Word Embedding Attention Network is based on the encoder-decoder framework, which consists of two components: a source text encoder, and a target text decoder.",
"Figure 1 is an illustration of our model.",
"Given the source texts, the encoder compresses the source texts into dense representation vectors, and the decoder generates the paraphrased texts.",
"To predict a word, the decoder uses the hidden output to query the word embeddings.",
"The word embeddings assess all the candidate words, and return the word whose embedding matches the query most.",
"The selected word is emitted as the predicted token, and its embedding is then used as the input of the LSTM at the next time step.",
"After the back propagation, the word embedding is updated from three sources: the input of the encoder, the input of the decoder, and the query of the output layer.",
"We show the details of our WEAN in the following subsection.",
"The goal of the source text encoder is to provide a series of dense representation of complex source texts for the decoder.",
"In our model, the source text encoder is a Long Short-term Memory Network (LSTM), which produces the dense representation { h 1 , h 2 , ..., h N } from the source text { x 1 , x 2 , ..., x N } : The goal of the target text decoder is to generate a series of paraphrased words from the dense representation of source texts.",
"Fisrt, the LSTM of the decoder compute the dense representation of generated words s t .",
"Then, the dense representations are fed into an attention layer (Bahdanau et al., 2014) to generate the context vector c t , which captures context information of source texts.",
"Attention vector c t is calculated by the weighted sum of encoder hidden states: c t = NX i =1 ti h i (1) ti = e g ( s t ,h i ) P Nj =1 e g ( s t ,h j ) (2) where g ( s t , h i ) is an attentive score between the decoder hidden state s t and the encoder hidden state h i .",
"In this way, c t and s t respectively represent the context information of source texts and the target texts at the t th time step.",
"For the current sequence-to-sequence model, the word generator computes the distribution of output words y t in a generative style:",
"where W R k V is a trainable parameter matrix, k is hidden size, and V is the number of words in the vocabulary.",
"When the vocabulary is large, the number of parameters will be huge.",
"Our model generates the words in a retrieval style rather than the traditional generative style, by querying the word embeddings.",
"We denote the combination of the source context vector c t and the target context vector s t as the query q t : q t = tanh( W c [ s t ; c t ]) (4) The candidate words w i and their corresponding embeddings e i are paired as the key-value pairs { w i , e i } ( i = 1 , 2 , ..., n ) , where n is the number of candidate words.",
"We give the details of how to determine the set of candidate words in Section 2.4.",
"Our model uses q t to query the key-value pairs { w i , e i } ( i = 1 , 2 , ..., n ) by evaluating the relevance between the query q t and each word vector e i with a score function f ( q t , e i ) .",
"The query process can be regarded as the attentive selection of the word embeddings.",
"We borrow the attention energy functions (Luong et al., 2015) as the relevance score function f ( q t , e i ) : f ( q t , e i ) = q Tt e i dot q Tt W a e i general v T tanh( W q q t + W e e i ) concat (5) where W q and W e are two trainable parameter matrices, and v T is a trainable parameter vector.",
"In implementation, we select the general attention function as the relevance score function, based on the performance on the validation sets.",
"The key-value pair with the highest score { w t , e t } is selected.",
"At the test stage, the decoder generates the key w t as the t th predicted word, and inputs the value e t to the LSTM unit at the t + 1 th time step.",
"At the training stage, the scores are normalized as the word probability distribution: p ( y t ) = softmax ( f ( q t , e i )) (6) 2.4 Selection of Candidate Key-value Pairs As described in Section 2.3, the model generates the words in a retrieval style, which selects a word according to its embedding from a set of candidate key-value pairs.",
"We now give the details of how to obtain the set of candidate key-value pairs.",
"We extract the vocabulary from the source text in the training set, and select the n most frequent words as the candidate words.",
"We reuse the embeddings of the decoder inputs as the values of the candidate words, which means that the decoder input and the predicted output share the same vocabulary and word embeddings.",
"Besides, we do not use any pretrained word embeddings in our model, so that all of the parameters are learned from scratch.",
"Although our generator is a retrieval style, WEAN is as differentiable as the sequence-to-sequence model.",
"The objective of training is to minimize the 198 cross entropy between the predicted word probability distribution and the golden one-hot distribution: L = X i y i log p ( y i ) (7) We use Adam optimization method to train the model, with the default hyper-parameters: the learning rate = 0 .",
"001 , and 1 = 0 .",
"9 , 2 = 0 .",
"999 , (cid:15) = 1 e 8 .",
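The corresponding optimizer setup is a one-liner in PyTorch; a sketch, where `model` is a placeholder standing in for the full WEAN network:

```python
import torch
import torch.nn as nn

model = nn.LSTM(input_size=256, hidden_size=256)  # placeholder for the full model
optimizer = torch.optim.Adam(model.parameters(),
                             lr=0.001, betas=(0.9, 0.999), eps=1e-8)
```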
"Following the previous work (Cao et al., 2017), we test our model on the following two paraphrase orientated tasks: text simplification and short text abstractive summarization.",
"The datasets are both from the alignments between English Wikipedia website 2 and Simple English Wikipedia website.",
"3 The Simple English Wikipedia is built for the children and adults who are learning the English language, and the articles are composed with easy words and short sen-tences.",
"Therefore, Simple English Wikipedia is a natural public simplified text corpus.",
"Parallel Wikipedia Simplification Corpus (PWKP).",
"PWKP (Zhu et al., 2010) is a widely used benchmark for evaluating text simplification systems.",
"It consists of aligned complex text from English WikiPedia (as of Aug. 22nd, 2009) and simple text from Simple Wikipedia (as of Aug. 17th, 2009).",
"The dataset contains 108,016 sentence pairs, with 25.01 words on average per complex sentence and 20.87 words per simple sentence.",
"Following the previous work (Zhang and Lapata, 2017), we remove the duplicate sentence pairs, and split the corpus with 89,042 pairs for training, 205 pairs for validation and 100 pairs for test.",
"English Wikipedia and Simple English Wikipedia (EW-SEW).",
"EW-SEW is a publicly available dataset provided by Hwang et al. (2015).",
"To build the corpus, they first align the complex-simple sentence pairs, score the semantic similarity between the complex sentence and the simple sentence, and classify 2 http://en.wikipedia.org 3 http://simple.wikipedia.org each sentence pair as a good, good partial, partial, or bad match.",
"Following the previous work (Nisioi et al., 2017), we discard the un-classified matches, and use the good matches and partial matches with a scaled threshold greater than 0.45.",
"The corpus contains about 150K good matches and 130K good partial matches.",
"We use this corpus as the training set, and the dataset provided by Xu et al. (Xu et al., 2016) as the validation set and the test set.",
"The validation set consists of 2,000 sentence pairs, and the test set contains 359 sentence pairs.",
"Besides, each complex sentence is paired with 8 reference simplified sentences provided by Amazon Mechanical Turk workers.",
"Following the previous work (Nisioi et al., 2017; Hu et al., 2015), we evaluate our model with different metrics on two tasks.",
"Automatic evaluation.",
"We use the BLEU score (Papineni et al., 2002) as the automatic evaluation metric.",
"BLEU is a widely used metric for machine translation and text simplification, which measures the agreement between the model outputs and the gold references.",
"The references can be either single or multiple.",
"In our experiments, the references are single on PWKP, and multiple on EW-SEW.",
"Human evaluation.",
"Human evaluation is essential to evaluate the quality of the model outputs.",
"Following Nisioi et al. (2017) and Zhang et al. (2017), we ask the human raters to rate the simplified text in three dimensions: Fluency, Adequacy and Simplicity.",
"Fluency assesses whether the outputs are grammatically right and well formed.",
"Adequacy represents the meaning preservation of the simplified text.",
"Both the scores of fluency and adequacy range from 1 to 5 (1 is very bad and 5 is very good).",
"Simplicity shows how simpler the model outputs are than the source text, which ranges from 1 to 5.",
"Our proposed model is based on the encoder-decoder framework.",
"The encoder is implemented on LSTM, and the decoder is based on LSTM with Luong style attention (Luong et al., 2015).",
"We 199 PWKP BLEU PBMT (Wubben et al., 2012) 46.31 Hybrid (Narayan and Gardent, 2014) 53.94 EncDecA (Zhang and Lapata, 2017) 47.93 DRESS (Zhang and Lapata, 2017) 34.53 DRESS-LS (Zhang and Lapata, 2017) 36.32 Seq2seq (our implementation) 48.26 WEAN (our proposal) 54.54 Table 1: Automatic evaluation of our model and other related systems on PWKP datasets.",
"tune our hyper-parameter on the development set.",
"The model has two LSTM layers.",
"The hidden size of LSTM is 256, and the embedding size is 256.",
"We use Adam optimizer (Kingma and Ba, 2014) to learn the parameters, and the batch size is set to be 64.",
"We set the dropout rate (Srivastava et al., 2014) to be 0.4.",
"All of the gradients are clipped when the norm exceeds 5.",
"We compare our model with several neural text simplification systems.",
"Seq2seq is our implementation of the sequence-to-sequence model with attention mechanism, which is the most popular neural model for text generation.",
"NTS and NTS-w2v (Nisioi et al., 2017) are two sequence-to-sequence model with extra mechanism like prediction ranking, and NTS-w2v uses a pretrain word2vec.",
"EncDecA is a model based on the encoder-decoder with attention, implemented by Zhang and Lapata (2017).",
"PBMT-R (Wubben et al., 2012) is a phrase based machine translation model which reranks the outputs.",
"Hybrid (Narayan and Gardent, 2014) is a hybrid approach which combines deep semantics and mono-lingual machine translation.",
"SBMT-SARI (Xu et al., 2016) is a syntax-based machine translation model which is trained on PPDB dataset (Ganitkevitch et al., 2013) and tuned with SARI.",
"We compare WEAN with state-of-the-art models for text simplification.",
"Table 1 and Table 2 summarize the results of the automatic evaluation.",
"On PWKP dataset, we compare WEAN with PBMT, Hybrid, EncDecA, DRESS and DRESS-LS.",
"WEAN achieves a BLEU score of 54.54, outperforming all of the previous systems.",
"On EWSEW dataset, we compare WEAN with PBMT-R, Hybrid, SBMT-SARI, and the neural models described above.",
"We do not find any public release code of PBMT-R and SBMT-SARI.",
"Fortunately, Xu et al. (2016) provides the predictions of PBMT-R and SBMT-SARI on EW-SEW test set, so that we can compare our model with these systems.",
"It shows that the neural models have better performance in BLEU, and WEAN achieves the best BLEU score with 94.45.",
"We perform the human evaluation of WEAN and other related systems, and the results are shown in Table 3.",
"DRESS-LS is based on the reinforcement learning, and it encourages the fluency, simplicity and relevance of the outputs.",
"Therefore, it achieves a high score in our human evaluation.",
"WEAN gains a even better score than DRESS-LS.",
"Besides, WEAN generates more adequate and simpler outputs than the reference on PWKP.",
"The predictions of SBMT-SARI are the most adequate among the compared systems on EW-SEW.",
"In general, WEAN outperforms all of the other systems, considering the balance of fluency, adequate and simplicity.",
"We conduct significance tests based on t-test.",
"The significance tests suggest that WEAN has a very significant improvement over baseline, with p 0 .",
"001 over DRESS-LS in all of the dimension on PWKP, p 0 .",
"05 over DRESS-LS in the dimension of fluency, p 0 .",
"005 over NTS-w2v in the dimension of simplicity and p 0 .",
"005 over DRESS-LS in the dimension of all.",
"Large Scale Chinese Social Media Short Text Summarization Dataset (LCSTS): LCSTS is constructed by Hu et al. (2015).",
"The dataset consists of more than 2,400,000 text-summary pairs, constructed from a famous Chinese social media website called Sina Weibo.",
"4 It is split into three parts, with 2,400,591 pairs in PART I, 10,666 pairs in PART II and 1,106 pairs in PART III.",
"All the text-summary pairs in PART II and PART III are manually annotated with relevant scores ranged from 1 to 5.",
"We only reserve pairs with scores no less than 3, leaving 8,685 pairs in PART II and 725 pairs in PART III.",
"Following the previous work (Hu et al., 2015), we use PART I as training set, PART II as validation set, and PART III as test set.",
"Our evaluation metric is ROUGE score (Lin and Hovy, 2003), which is popular for summarization evaluation.",
"The metrics compare an automatically produced summary against the reference summaries, by computing overlapping lexical units, including unigram, bigram, trigram, and longest common subsequence (LCS).",
"Following previous work (Rush et al., 2015; Hu et al., 2015), we use ROUGE-1 (unigram), ROUGE-2 (bi-gram) and ROUGE-L (LCS) as the evaluation metrics in the reported experimental results.",
"The vocabularies are extracted from the training sets, and the source contents and the summaries share the same vocabularies.",
"We tune the hyper-parameters based on the ROUGE scores on the validation sets.",
"In order to alleviate the risk of word segmentation mistakes, we split the Chinese sentences into characters.",
"We prune the vocabulary size to 4,000, which covers most of the common characters.",
"We set the word embedding size and the hidden size to 512, the number of LSTM layers of the encoder is 2, and the number of LSTM layers of the decoder is 1.",
"The batch size is 64, and we do not use dropout (Srivastava et al., 2014) on this dataset.",
"Following the previous work (Li et al., 2017), we implement a beam search optimization, and set the beam size to 5.",
"We compare our model with the state-of-the-art baselines.",
"RNN-dist (Chen et al., 2016) is a distraction-based neural model, which the attention mechanism focuses on the different parts of the source content.",
"CopyNet (Gu et al., 2016) incorporates a copy mechanism to allow part of the generated summary is copied from the source content.",
"SRB (Ma et al., 2017) is a sequence-to-sequence based neural model with improving the semantic relevance between the input text and the output summary.",
"DRGD (Li et al., 2017) is a deep recurrent generative decoder model, combining the decoder with a variational autoencoder.",
"Seq2seq is our implementation of the sequence-to-sequence model with the attention mechanism.",
"We report the ROUGE F1 score of our model and the baseline models on the test sets.",
"Table 4 summarizes the comparison between our model and the baselines.",
"Our model achieves the score of 37.8 ROUGE-1, 25.6 ROUGE-2, and 35.2 ROUGE-L, outperforming all of the previous models.",
"First, we compare our model with the sequence-to-sequence model.",
"It shows that our model significant outperforms the sequence-to-sequence baseline with a large margin of 5.7 ROUGE-1, 5.7 ROUGE-2, and 6.0 ROUGE-L.",
"Then, we compare our model with other related models.",
"The state-of-the-art model is DRGD (Li et al., 2017), which obtains the score of 37.0 ROUGE-1, 24.2 ROUGE-2, and 34.2 ROUGE-L.",
"Our model has a relative gain of 0.8 ROUGE-1, 1.4 ROUGE-2 and 1.0 ROUGE-L over the state-of-the-art models.",
"Our WEAN reduces a large number of the parameters in the output layer.",
"To analyze the parameter reduction, we compare our WEAN model with the sequence-to-sequence model.",
"Table 5 lists the number of the parameters in the output layers of two models.",
"Both PWKP and EWSEWhave the vocabulary size of 50000 words and the hidden size of 256, resulting 50000 256 = 12 , 800 , 000 parameters.",
"LCSTS has a vocabulary size of 4000 and the hidden size of 512, so the seq2seq has 4000 512 = 2 , 048 , 000 parameters in the output layers.",
"WEAN only has two parameter matrices and one parameter vector at most in Equation 5, without regard to the vocabulary size.",
"It has 256 256 2 + 256 = 131 , 328 parameters on PWKP and EWSEW, and 512 512 2+512 = 524 , 800 parameters on LCSTS.",
"Besides, WEAN does not have any extra parameters in the other part of the model.",
"Figure 2 shows the training curve of WEAN and Seq2seq on the PWKP validation set.",
"WEAN achieve near the optimal score in only 2-3 epochs, while Seq2seq takes more than 15 epochs to achieve the optimal score.",
"Therefore, WEAN has much faster convergence rate, compared with Seq2seq.",
"With the much faster training speed, WEAN does not suffer loss in BLEU, and even improve the BLEU score.",
"Table 6 shows two examples of different text simplification system outputs on EW-SEW.",
"For the first example, NTS, NTS-w2v and PBMT-R miss some essential constituents, so that the sentences are incomplete and not fluent.",
"SBMT-SARI generates a fluent sentence, but the output does not preserve the original meaning.",
"The predicted sentence of WEAN is fluent, simple, and the same as the reference.",
"For the second example, NTS-w2v omits so many words that it lacks a lot of information.",
"PBMT-R generates some irrelevant words, like 'siemens-martin', '-rrb-', and '-shurba', which hurts the fluency and adequacy of the generated sentence.",
"SBMT-SARI is able to generate a fluent sentence, but the meaning is different from the source text, and even more diffi-cult to understand.",
"Compared with the statistic model, WEAN generates a more fluent sentence.",
"Besides, WEAN can capture the semantic meaning of the word by querying the word embeddings, so the generated sentence is semantically correct, and very close to the original meaning.",
"Our work is related to the encoder-decoder framework (Cho et al., 2014) and the attention mechanism (Bahdanau et al., 2014).",
"Encoder-decoder framework, like sequence-to-sequence model, has achieved success in machine translation (Sutskever et al., 2014; Jean et al., 2015; Luong et al., 2015; Lin et al., 2018), text summarization (Rush et al., 2015; Chopra et al., 2016; Nallapati et al., 2016; Wang et al., 2017; Ma and Sun, 2017), and other natural language processing tasks (Liu et al., 2017).",
"There are many other methods to improve neural attention model (Jean et al., 2015; Luong et al., 2015).",
"Zhu et al. (2010) constructs a wikipedia dataset, and proposes a tree-based simplification model.",
"Woodsend and Lapata (2011) introduces a data-driven model based on quasi-synchronous grammar, which captures structural mismatches and complex rewrite operations.",
"Wubben et al. (2012) 203 presents a method for text simplification using phrase based machine translation with re-ranking the outputs.",
"Kauchak (2013) proposes a text simplification corpus, and evaluates language modeling for text simplification on the proposed corpus.",
"Narayan and Gardent (2014) propose a hybrid approach to sentence simplification which combines deep semantics and monolingual machine translation.",
"Hwang et al. (2015) introduces a parallel simplification corpus by evaluating the similarity between the source text and the simplified text based on WordNet.",
"Glavas and Stajner (2015) propose an unsupervised approach to lexical simplification that makes use of word vectors and require only regular corpora.",
"Xu et al. (2016) design automatic metrics for text simplification.",
"Recently, most works focus on the neural sequence-to-sequence model.",
"Nisioi et al. (2017) present a sequence-to-sequence model, and re-ranks the predictions with BLEU and SARI.",
"Zhang and Lapata (2017) propose a deep reinforcement learning model to improve the simplicity, fluency and adequacy of the simplified texts.",
"Cao et al. (2017) introduce a novel sequence-to-sequence model to join copying and restricted generation for text simplification.",
"Rush et al. (2015) first used an attention-based encoder to compress texts and a neural network language decoder to generate summaries.",
"Following this work, recurrent encoder was introduced to text summarization, and gained better performance (Lopyrev, 2015; Chopra et al., 2016).",
"Towards Chinese texts, Hu et al. (2015) built a large corpus of Chinese short text summarization.",
"To deal with unknown word problem, Nallapati et al. (2016) proposed a generator-pointer model so that the decoder is able to generate words in source texts.",
"Gu et al. (2016) also solved this issue by incorporating copying mechanism.",
"We propose a novel model based on the encoder-decoder framework, which generates the words by querying distributed word representations.",
"Experimental results show that our model outperforms the sequence-to-sequence baseline by the BLEU score of 6.3 and 5.5 on two English text simplification datasets, and the ROUGE-2 F1 score of 5.7 on a Chinese summarization dataset.",
"Moreover, our model achieves state-of-the-art performances on these three benchmark datasets.",
"This work was supported in part by National Natural Science Foundation of China (No. 61673028), National High Technology Research and Development Program of China (863 Program, No. 2015AA015404), and the National Thousand Young Talents Program.",
"Xu Sun is the corresponding author of this paper."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"objective",
"result",
"result",
"other",
"other"
] |
[
"We propose a novel transition-based algorithm that straightforwardly parses sentences from left to right by building n attachments, with n being the length of the input sentence.",
"Similarly to the recent stack-pointer parser by Ma et al. (2018), we use the pointer network framework that, given a word, can directly point to a position from the sentence.",
"However, our left-to-right approach is simpler than the original top-down stack-pointer parser (not requiring a stack) and reduces transition sequence length in half, from 2 n 1 actions to n .",
"This results in a quadratic non-projective parser that runs twice as fast as the original while achieving the best accuracy to date on the English PTB dataset (96.04% UAS, 94.43% LAS) among fully-supervised single-model dependency parsers, and improves over the former top-down transition system in the majority of languages tested.",
"Dependency parsing, the task of automatically obtaining the grammatical structure of a sentence expressed as a dependency tree, has been widely studied by natural language processing (NLP) researchers in the last decades.",
"Most of the models providing competitive accuracies fall into two broad families of approaches: graph-based (Mc-Donald et al., 2005a,b) and transition-based (Ya-mada and Matsumoto, 2003; Nivre, 2003) dependency parsers.",
"Given an input sentence, a graph-based parser scores trees by decomposing them into factors, and performs a search for the highest-scoring tree.",
"In the past two years, this kind of dependency parsers have been ahead in terms of accuracy thanks to the graph-based neural architecture developed by Dozat and Manning (2016), which not only achieved state-of-the-art accuracies on the Stanford Dependencies conversion of the English Penn Treebank (hereinafter, PTB-SD), but also obtained the best results in the majority of languages in the CoNLL 2017 Shared Task (Dozat et al., 2017).",
"This tendency recently changed, since a transition-based parser developed by Ma et al. (2018) managed to outperform the best graph-based model in the majority of datasets tested.",
"Transition-based parsers incrementally build a dependency graph for an input sentence by applying a sequence of transitions.",
"This results in more efficient parsers with linear time complexity for parsing projective sentences, or quadratic for handling non-projective structures, when implemented with greedy or beam search.",
"However, their main weakness is the lack of access to global context information when transitions are greedily chosen.",
"This favours error propagation, mainly affecting long dependencies that require a larger number of transitions to be built (McDonald and Nivre, 2011).",
"Many attempts have been made to alleviate the impact of error propagation in transition-based dependency parsing, but the latest and most successful approach was developed by Ma et al. (2018).",
"In particular, they make use of pointer networks (Vinyals et al., 2015) to implement a new neural network architecture called stack-pointer network .",
"The proposed framework provides a global view of the input sentence by capturing information from the whole sentence and all the arcs previously built, crucial for reducing the effect of error propagation; and, thanks to an attention mechanism (Bahdanau et al., 2014; Luong et al., 2015), is able to return a position in that sentence that corresponds to a word related to the word currently on top of the stack.",
"They take advantage of this and propose a novel transition system that follows a top-down depth-first strategy to perform the syntactic analysis.",
"Concretely, it considers the word pointed by the neural network as the child of the word on top of the stack, and builds the corresponding dependency relation between them.",
"This results in a transition-based algorithm that can process unrestricted non-projective sentences in O ( n 2 ) time complexity and requires 2 n -1 actions to successfully parse a sentence with n words.",
"We also take advantage of pointer network capabilities and use the neural network architecture introduced by Ma et al. (2018) to design a nonprojective left-to-right transition-based algorithm, where the position value pointed by the network has the opposite meaning: it denotes the index that corresponds to the head node of the current focus word.",
"This results in a straightforward transition system that can parse a sentence in just n actions, without the need of any additional data structure and by just attaching each word from the sentence to another word (including the root node).",
"Apart from increasing the parsing speed twofold (while keeping the same quadratic time complexity), it achieves the best accuracy to date among fully-supervised single-model dependency parsers on the PTB-SD, and obtains competitive accuracies on twelve different languages in comparison to the original top-down version.",
"Ma et al. (2018) propose a novel neural network architecture whose main backbone is a pointer network (Vinyals et al., 2015).",
"This kind of neural networks are able to learn the conditional probability of a sequence of discrete numbers that correspond to positions in an input sequence (in this case, indexes of words in a sentence) and, by means of attention (Bahdanau et al., 2014; Luong et al., 2015), implement a pointer that selects a position from the input at decoding time.",
"Their approach initially reads the whole sentence, composed of the n words w 1 , . . . , w n , and encodes each w i one by one into an encoder hidden state e i .",
"As encoder, they employ a combination of CNNs and bi-directional LSTMs (Chiu and Nichols, 2016; Ma and Hovy, 2016).",
"For each word, CNNs are used to obtain its character-level representation that is concatenated to the word and PoS embeddings to finally be fed into BiLSTMs that encode word context information.",
"As decoder they present a top-down transition system, where parsing configurations use the classic data structures (Nivre, 2008): a buffer (that contains unattached words) and a stack (that holds partially processed words).",
"The available parser actions are two transitions that we call Shift Attach p and Reduce .",
"Given a configuration with word w i on top of the stack, as the pointer network just returns a position p from a given sentence, they proceed as follows to determine which transition should be applied: If p (cid:54) = i , then the pointed word w p is considered as a child of w i ; so the parser chooses a Shift Attach p transition to move w p from the buffer to the stack and build an arc w i w p .",
"On the other hand, if p = i , then w i is considered to have found all its children, and a Reduce transition is applied to pop the stack.",
"The parsing process starts with a dummy root $ on the stack and, by applying 2 n -1 transitions, a dependency tree is built for the input in a top-down depth-first fashion, where multiple children of a same word are forced during training to be created in an inside-out manner.",
"More in detail, for each parsing configuration c t , the decoder (implemen-ted as a uni-directional LSTM) receives the encoder hidden state e i of the word w i on top of the stack to generate a decoder hidden state d t .",
"After that, d t , together with the sequence s i of encoder hidden states from words still in the buffer plus e i , are used to compute the attention vector a t as follows: v ti = score ( d t , s i ) (1) a t = softmax ( v t ) (2) As attention scoring function ( score () ), they adopt the biaffine attention mechanism described in (Luong et al., 2015; Dozat and Manning, 2016).",
"Finally, the attention vector a t will be used to return the highest-scoring position p and choose the next transition.",
"The parsing process ends when only the root remains on the stack.",
"As extra high-order features, Ma et al. (2018) add grandparent and sibling information, whose encoder hidden states are added to that of the word on top of the stack to generate the corresponding decoder hidden state d t .",
"They prove that these additions improve final accuracy, especially when children are attached in an inside-out fashion.",
"According to the authors, the original stack-pointer network is trained to maximize the likelihood of choosing the correct word for each possible top-down path from the root to a leaf.",
"More in detail, a dependency tree can be represented as a sequence of top-down paths p 1 , . . . , p k , where each path p i corresponds to a sequence of words $ , w i, 1 , w i, 2 , . . . , w i,l i from the root to a leaf.",
"Thus, the conditional probability P ( y | x ) of the dependency tree y for an input sentence x can be factorized according to this top-down structure as: P ( y | x ) = k (cid:89) i =1 P ( p i | p <i , x ) = k (cid:89) i =1 l i (cid:89) j =1 P ( w i,j | w i,<j , p <i , x ) where represents model parameters, p <i stands for previous paths already explored, w i,j denotes the j th word in path p i and w i,<j represents all the previous words on p i .",
"For more thorough details of the stack-pointer network architecture and the top-down transition system, please read the original work by Ma et al. (2018).",
"We take advantage of the neural network architecture designed by Ma et al. (2018) and introduce a simpler left-to-right transition system that requires neither a stack nor a buffer to process the input sentence and where, instead of selecting a child of the word on top of the stack, the network points to the parent of the current focus word.",
"In particular, in our proposed approach, the parsing configuration just corresponds to a focus word pointer i , that is used to point to the word currently being processed.",
"The decoding process starts with i pointing at the first word of the sentence and, at each parsing configuration, only one action is available: the parameterized Attach p transition, that links the focus word w i to the head word w p in position p of the sentence (producing the dependency arc w p w i ) and moves i one position to the right.",
"Note that, in our algorithm, p can equal 0, attaching, in that case, w i to the dummy root node.",
"The parsing process ends when the last word from the sentence is attached.",
"This can be easily represented as a loop that traverses the input sentence from left to right, linking each word to another from the same sentence or to the dummy root.",
"Therefore, we just need n steps to process the n words of a given sentence and build a dependency tree.",
"While our novel transition system intrinsically holds the single-head constraint (since, after attaching the word w i , i points to the next word w i +1 in the sentence), it can produce an output with cycles.",
"1 Therefore, in order to build a well-formed dependency tree during decoding, attachments that generate cycles in the already-built dependency graph must be forbidden.",
"Please note that the need of a cycle-checking extension does not increase the overall quadratic runtime complexity of the original implementation by Ma et al. (2018) since, as in other transition-based parsers such as (Covington, 2001; Gomez-Rodrguez and Nivre, 2010), cycles can be incrementally identi-fied in amortized constant time by keeping track of connected components using path compression and union by rank.",
"Therefore, the left-to-right algorithm requires n steps to produce a parse.",
"In addition, at each step, the attention vector a t needs to be computed and cycles must be checked, both in O ( n ) + O ( n ) = O ( n ) runtime.",
"This results in a O ( n 2 ) time complexity for decoding.",
"2 On the other hand, while in the top-down decoding only available words in the buffer (plus the word on top of the stack) can be pointed to by the network and they are reduced as arcs are created (basically to keep the single-head constraint); our proposed approach is less rigid: all words from the sentence (including the root node and excluding w i ) can be pointed to, as long as they satisfy the acyclicity constraint.",
"This is necessary because two different words might be attached to the same head node and the latter can be located in the sentence either before or after w i .",
"Therefore, the sequence s i , required by the attention score function",
"(Eq.(1)), is composed of the encoder hidden states of all words from the input, excluding e i , and prepending a special vector representation denoting the root node.",
"We also add extra features to represent the current focus word.",
"Instead of using grandparent and sibling information (more beneficial for a top-down approach), we just add the encoder hidden 1 In practice, even with the cycle detection mechanism disabled, the presence of cycles in output parses is very uncommon (for instance, just in 1% of sentences in the PTB-SD dev set) since our system seems to adequately model well-formed tree structures.",
"2 A practically faster version of the left-to-right parser might be implemented by just ignoring the presence of cycles during decoding, and destroying the cycles generated as a post-processing step that simply removes one of the arcs involved.",
"states of the previous and next words in the sentence to generate d t , which seems to be more suitable for a left-to-right decoding.",
"In dependency parsing, a tree for an input sentence of length n can be represented as a set of n directed and binary links l 1 , . . . , l n .",
"Each link l i is characterized by the word w i in position i in the sentence and its head word w h , resulting in a pair ( w i , w h ) .",
"Therefore, to train this novel variant, we factorize the conditional probability P ( y | x ) to a set of head-dependent pairs as follows: P ( y | x ) = n (cid:89) i =1 P ( l i | l <i , x ) = n (cid:89) i =1 P ( w h | w i , l <i , x ) Therefore, the left-to-right parser is trained by maximizing the likelihood of choosing the correct head word w h for the word w i in position i , given the previous predicted links l <i .",
"Finally, following a widely-used approach (also implemented in (Ma et al., 2018)), dependency labels are predicted by a multiclass classifier, which is trained in parallel with the parser by optimizing the sum of their objectives.",
"We use the same implementation as Ma et al. (2018) and conduct experiments on the Stanford Dependencies (de Marneffe and Manning, 2008) conversion (using the Stanford parser v3.3.0) 3 of the English Penn Treebank (Marcus et al., 1993), with standard splits and predicted PoS tags.",
"In addition, we compare our approach to the original top-down parser on the same twelve languages from the Universal Dependency Treebanks 4 (UD) that were used by Ma et al. (2018).",
"5 Following standard practice, we just exclude punctuation for evaluating on PTB-SD and, for each experiment, we report the average Labelled and Unlabelled Attachment Scores (LAS and UAS) over 3 and 5 repetitions for UD and PTB-SD, respectively.",
"Please note that, since they used a former version of the UD datasets, we also reran the top-down algorithm on the latest treebank version (2.2) in order to perform a fair comparison.",
"Finally, we use the same hyper-parameter values, pre-trained word embeddings and beam size (10 for PTB-SD and 5 for UD) as Ma et al. (2018).",
"By outperforming the two current state-of-the-art graph-based (Dozat and Manning, 2016) and transition-based (Ma et al., 2018) models on the PTB-SD, our approach becomes the most accurate fully-supervised dependency parser developed so far, as shown in Table 1.",
"6 In addition, in Table 2 we can see how, under the exactly same conditions, the left-to-right algorithm improves over the original top-down variant in nine out of twelve languages in terms of LAS, obtaining competitive results in the remaining three datasets.",
"Finally, in spite of requiring a cycle-checking procedure, our approach proves to be twice as fast as the top-down alternative in decoding time, 6 It is worth mentioning that all parsers reported in this section make use of pre-trained word embeddings previously learnt from corpora beyond the training dataset.",
"However, it is common practice in the literature that systems that only use standard pre-trained word embeddings are classed as fully-supervised models, even though, strictly, they are not trained exclusively on the official training data.",
"achieving, under the exact same conditions, a 23.08-sentences-per-second speed on the PTB-SD compared to 10.24 of the original system.",
"7 5 Related work There is previous work that proposes to implement dependency parsing by independently selecting the head of each word in a sentence, using neural networks.",
"In particular, Zhang et al. (2017) make use of a BiLSTM-based neural architecture to compute the probability of attaching each word to one of the other input words, in a similar way as pointer networks do.",
"During decoding, a postprocessing step is needed to produce well-formed trees by means of a maximum spanning tree algorithm.",
"Our approach does not need this postprocessing, as cycles are forbidden during parsing instead, and achieves a higher accuracy thanks to the pointer network architecture and the use of information about previous dependencies.",
"Before Ma et al. (2018) presented their top-down parser, Chorowski et al. (2017) had already employed pointer networks (Vinyals et al., 2015) for dependency parsing.",
"Concretely, they developed a pointer-network-based neural architecture with multitask learning able to perform preprocessing, tagging and dependency parsing exclusively by reading tokens from an input sen-7 Please note that the implementation by Ma et al. (2018), also used by our novel approach, was not optimized for speed and, therefore, the reported speeds are just intended for comparing algorithms implemented under the same framework, but not to be considered as the best speed that a pointer-network-based system can potentially achieve.",
"tence, without needing POS tags or pre-trained word embeddings.",
"Like our approach, they also use the capabilities provided by pointer networks to undertake the parsing task as a simple process of attaching each word as dependent of another.",
"They also try to improve the network performance with POS tag prediction as auxiliary task and with different approaches to perform label prediction.",
"They do not exclude cycles, neither by forbidding them at parsing time or by removing them by post-processing, as they report that their system produces parses with a negligible amount of cycles, even with greedy decoding (matching our observation for our own system, in our case with beam-search decoding).",
"Finally, the system developed by Chorowski et al. (2017) is constrained to projective dependencies, while our approach can handle unrestricted non-projective structures.",
"We present a novel left-to-right dependency parser based on pointer networks.",
"We follow the same neural network architecture as the stack-pointer-based approach developed by Ma et al. (2018), but just using a focus word index instead of a buffer and a stack.",
"Apart from doubling their system's speed, our approach proves to be a competitive alternative on a variety of languages and achieves the best accuracy to date on the PTB-SD.",
"The good performance of our algorithm can be explained by the shortening of the transition sequence length.",
"In fact, it has been proved by several studies (Fernandez-Gonzalez and Gomez-Rodrguez, 2012; Qi and Manning, 2017; Fernandez-Gonzalez and Gomez-Rodrguez, 2018) that by reducing the number of applied transitions, the impact of error propagation is alleviated, yielding more accurate parsers.",
"Our system's source code is freely available at https://github.com/danifg/ Left2Right-Pointer-Parser .",
"This work has received funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from MINECO (FFI2014-51978-C2-2-R, TIN2017-85160-C2-1-R) and from Xunta de Galicia (ED431B 2017/01)."
] | [
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other"
] |
[
"In this paper, we propose I nverse A dversarial T raining (IAT) algorithm for training neural dialogue systems to avoid generic responses and model dialogue history better.",
"In contrast to standard adversarial training algorithms, IAT encourages the model to be sensitive to the perturbation in the dialogue history and therefore learning from perturbations.",
"By giving higher rewards for responses whose output probability reduces more significantly when dialogue history is perturbed, the model is encouraged to generate more diverse and consistent responses.",
"By penalizing the model when generating the same response given perturbed dialogue history, the model is forced to better capture dialogue history and generate more informative responses.",
"Experimental results on two benchmark datasets show that our approach can better model dialogue history and generate more diverse and consistent responses.",
"In addition, we point out a problem of the widely used maximum mutual information (MMI) based methods for improving the diversity of dialogue response generation models and demonstrate it empirically.",
"In recent years, neural end-to-end dialogue response generation models (Sordoni et al., 2015; Serban et al., 2016; Bordes et al., 2016) has gained increasing popularity with the recent advancements of neural sequence-to-sequence (seq2seq) learning models (Sutskever et al., 2014; Vaswani et al., 2017).",
"While neural dialogue models can generate seemingly fluent responses, due to the over-simplified maximum likelihood estimation (MLE) training objective and the high frequency of generic responses in training corpora, they tend to produce dull and generic responses such as I Equal contribution. Corresponding author don't know much more often than that humans generally do (Li et al., 2015), which makes dialogue agents less engaging and ineffective.",
"In addition, recent research on whether neural dialogue systems use dialogue history effectively (Sankar et al., 2019) shows that most neural dialogue agents fail to take the dialogue history into account when generating responses.",
"This problem makes neural dialogue systems tend to generate responses irrelevant to the current topic of the conversation and are not consistent with the dialogue history.",
"This problem may also intensify the generic response problem, as dull responses are generally off-topic and irrelevant to the dialogue history.",
"To address the above issues, in this paper, we propose I nverse A dversarial T raining (IAT) algorithm for training neural dialogue systems to avoid generic responses and model dialogue history better, thus generating diverse and informative responses.",
"Conventional adversarial training methods generally generate label-preserving adversarial inputs with carefully designed methods and train the model to generate the same output to enhance the model's robustness.",
"In contrast, our approach perturbs in input dialogue history such that a good dialogue model should not generate the same output if the output is non-generic and relevant to the dialogue history.",
"We name our proposed method as inverse adversarial training because it is related to conventional adversarial training methods which aim to improve the model's adversarial robustness but our proposed objective is motivated in the opposite direction.",
"Note that our work is not directly related to TextGANs as well as their applications on dialogue response generation.",
"Specifically, the proposed inverse adversarial training assigns higher rewards to generated responses or ground-truth responses if their likelihood decreases more when the dialogue history is perturbed, and penalize the model when it generates responses whose likelihood is almost unchanged given either original or perturbed dialogue history as input.",
"This encourages the model to generate more relevant and informative responses and capture dialogue history better.",
"The proposed IAT algorithm can be used in both supervised and self-supervised fashion (with/without reference response), which can be viewed as a form of reward-augmented maximum likelihood (RAML) method (Norouzi et al., 2016) that improves the original MLE objective or a rewarding scheme for RL-based text generation algorithms.",
"The inverse adversarial learning framework is also conceptually related to self-adversarial learning (Zhou et al., 2020) where the the comparison is made between different checkpoints of the same model to provide reward for RL training of the NLG model.",
"In addition, we identify a limitation of the widely-used maximum mutual information (MMI) based methods for improving the diversity of dialogue response generation models.",
"This will be discussed in detail in section 2.1 and empirically demonstrated in section 4.2.",
"We conduct experiments on two dialogue datasets, OpenSubtitiles and DailyDialog, to demonstrate the effectiveness of the proposed approach.",
"Experimental results show IAT helps neural dialogue systems model dialogue history better and generate more diverse and informative responses.",
"Neural dialogue models tend to generate generic or dull responses such as I don't know which are not engaging for the users (Sordoni et al., 2015).",
"This behavior can be ascribed to the high frequency of generic responses in the training corpus and the over-simplified MLE training objective.",
"How to avoid generic responses and to make the dialogue agent more engaging has been a long-standing problem.",
"Previous work attempts to address this problem with different approaches: 1) Li et al. (2015) propose a diversity-promoting objective based on Maximum Mutual Information (MMI).",
"Given source S and target T , their approach first generates N-best lists based on P ( T | S ) and then rerank the list by combining p ( T | S ) and p ( S | T ) ; 2) Zhang et al. (2018b) propose to directly optimize p ( S | T ) together with p ( T | S ) with an Adversarial Information Maximization objective; and 3) adversarial learning (Li et al., 2017a) and dual adversarial learning (Cui et al., 2019) based on the intuition that real responses are of high diversity, thus can be distinguished from generated responses which are often dull and generic.",
"There are also other methods using distributional constraints of the target responses (Baheti et al., 2018; Csaky et al., 2019) or commonsense knowledge (Wu et al., 2020).",
"While shown to be effective in several datasets, these approaches suffer from several drawbacks.",
"For the first two approaches, while the MMI objective may lead to larger mutual information, it often does not actually result in more informative and engaging responses according to our observations.",
"For example, given a dialog context: What have you done with him in the bar last night?",
"The top response re-ranked by the MMI objective is I have done nothing with him in the bar last night., which is non-informative and less natural compared with the response Nothing at all. generated by a standard seq2seq dialogue model.",
"This is also confirmed in the experiment section.",
"We suspect this phenomenon is caused by the term p ( S | T ) in the MMI objective.",
"It encourages generating responses that make the last utterance in the dialogue history have a high likelihood given the generated responses.",
"While a truly informative response may yield a high p ( S | T ) , the model can easily find a shortcut to cheat this objective by simply copying a portion of tokens in the last utterance, which is likely to have high p ( S | T ) as well as p ( T | S ) .",
"The adversarial learning based dialogue model is notoriously hard to train and may suffer from the problem of mode collapse, which decreases the diversity of generated responses.",
"In contrast, our proposed IAT approach is based on the intuition that a diverse, relevant, and consistent response should be sensitive to the perturbation in the dialogue history, which is from a different perspective and may be complementary with the aforementioned approaches.",
"Recently, Sankar et al. (2019) evaluated whether existing neural dialogue systems use dialogue history effectively by perturbing dialogue history and observing the variation of model output.",
"output.",
"They corrupted the dialogue history with both utterance-level and word-level perturbation and see whether and how much the output perplexity decreases.",
"Their experimental results show that end-to-end neural dialogue systems are generally non-sensitive to the perturbation of dialogue history, suggesting that they may perform poorly in modeling dialogue history.",
"Previous work (Serban et al., 2016; Zhao et al., 2017) improves the context modeling ability with modification in model architectures.",
"In contrast, our approach employs a novel training objective to enhance the dialogue history modeling ability, which is orthogonal and may be complementary with them.",
"In this section, we describe the proposed inverse adversarial training algorithm in detail.",
"We first describe how we perturb the dialogue history and then formally introduce the inverse adversarial training algorithm.",
"Following previous study (Sankar et al., 2019), we perturb the dialogue history in both utterance and word level and apply them jointly during training.",
"Utterance-level Perturbations We consider the following operations 1) Shuf that shuffles the sequence of utterances in the dialog history, 2) Rev that reverses the order of utterances in the history (but maintains word order within each utterance) 3) Drop that completely drops certain utterances, 4) Truncate that truncates the dialog history to contain only the k most recent utterances where k n, where n is the length of dialog history, and 5) Repl that randomly replaces each utterance in the dialogue history by another utterance in the dataset with a probability of 30%, which resembles the negative sampling (Mikolov et al., 2013) approach 1 .",
"Word-level perturbations We consider similar operations but at the word level within every utterance 1) word-shuffle that randomly shuffles the words within an utterance 2) reverse that reverses the ordering of words, 3) word-drop that drops 30% of the words uniformly 4) noun-drop that drops all nouns, 5) verb-drop that drops all verbs, 1 The first four kinds of perturbation is originally proposed in (Sankar et al., 2019) and the last is proposed in this paper.",
"and 6) word-repl that replace 30% of words with a random word in the vocabulary uniformly.",
"We explain the role of different perturbations and their potential effects briefly.",
"The Shuf and Rev perturbations change the chronological order of utterances.",
"Inverse adversarial training with these kinds of perturbation may help the model to capture some common-senses about the chronological order of utterances.",
"The Drop and Repl perturbations may help the model to capture some kinds of casual effects.",
"Finally, the Truncate perturbation may help the model capture long-term and multi-turns dialogue history better.",
"In contrast to the adversarial training objective which maximize the likelihood of generating the same output given perturbed input, the inverse adversarial training objective maximizes the reduction of the likelihood of generating the same output when the input is perturbed, which is opposite to the conventional adversarial training.",
"A straightforward approach is to maximize the likelihood of generating ground-truth responses given original dialogue history while minimizing this likelihood when given perturbed dialogue history.",
"However, this approach suffers from several problems: First, as a previous study (Sankar et al., 2019) has shown, neural dialogue models generally capture the perturbation in the dialogue history poorly, which is suggested by the fact that the output embeddings of the encoder are very similar when given original and perturbed input dialogue histories.",
"This results in training the decoder to simultaneously maximize and minimize the likelihood of the same output given very similar input, which is undesirable and makes the training ineffective.",
"The second problem is that this training objective does not capture the variation of likelihood and thus treats relevant and engaging responses equally with dull and generic responses.",
"This is undesirable as we only want to maximize/minimize the likelihood for relevant and engaging responses when conditioning on origi-nal/perturbed dialogue history and dull responses should be avoided in both cases.",
"In this paper, we propose a sequence-level objective which is able to capture the variation of the likelihood of responses given original or perturbed input.",
"This makes it possible to model dialogue history better and avoid generic response problem Figure 1: Illustration of IAT.",
"at the same time.",
"The idea is to evaluate generated sentences based on the variation of the likelihood of responses given original or perturbed dialogue history and use this variation as rewards for training the dialogue model.",
"Given original dialogue history X and perturbed dialogue history X (cid:48) , the reward R ( Y | X, X (cid:48) ) of generating response Y , which is a sequence of n tokens y i , i 1 , 2 , ..., n , is measured by how much Y is more likely to be generated by the dialogue model given X compared with that given X (cid:48) , which is computed by the difference of negative log-likelihood losses (NLL) in two cases, as described below.",
"NLL orig = n (cid:88) i =1 log P ( y i | y <i , X ) (1) NLL adv = n (cid:88) i =1 log P ( y i | y <i , X (cid:48) ) (2) R ( Y | X, X (cid:48) ) = NLL adv NLL orig (3) Intuitively, the reward R would be high when the response Y is engaging and relevant to the dialogue history.",
"A generic response should be assigned with a low or even negative reward as it is irrelevant to the dialogue history.",
"The inverse adversarial training objective is to generate responses to maximize its reward.",
"With likelihood ratio (Sutton et al., 2000), we can formulate the gradient of the objective function for dialogue response generator G as: J ( ) = (cid:88) Y n (cid:88) i =1 log G ( y i | y <i , X ) R ( Y | X, X (cid:48) ) (4) The above training objective encourages the dialogue model to generate non-generic responses and model dialogue history better by giving higher rewards when generating good responses based on original dialogue history.",
"Similarly, we would also want to penalize the dialogue model when it generates the same response given perturbed dialogue history to explicitly force the dialogue system to effectively model the dialogue history.",
"We propose to model this penalty with a max-margin reward scheme.",
"Given margin M , the penalty P ( Y | X, X (cid:48) ) of generating Y is computed by P ( Y | X, X (cid:48) ) = min(0 , NLL adv NLL orig M ) (5) The insight behind equation 5 is that when the variation of likelihood of generating Y given X and X (cid:48) is large enough (i.e. NLL orig NLL adv M > 0 ), the model should be considered successfully captured the perturbation in the dialogue history and should not be penalized.",
"In contrast, when the variation is not large enough, we penalize the dialogue agent for generating Y giving X (cid:48) because a small variation of likelihood implicates: (1) the dialogue agent models dialogue history poorly and (2) the generated responses Y may be irrelevant to the dialogue history X and thus be generic and non-informative.",
"(6) The penalty and reward are combined by directly summing up the gradient in Eq (4) and Eq (6).",
"The proposed inverse adversarial training algorithm can be applied in both supervised fashion where responses Y are ground-truth responses in the dataset and self-supervised fashion where Y is generated by the dialogue model itself.",
"The only difference between the self-supervised and supervised version is whether the reference responses are generated (self-supervised) or ground-truth responses (supervised).",
"The supervised inverse adversarial training can be viewed as a reward function algorithm for RAML (Norouzi et al., 2016) training that assigns higher rewards for good training examples that help our model to generate relevant responses and learn to model dialogue history better.",
"The self-supervised inverse adversarial training, in contrast, allows the model to explore freely and train the model with policy gradient (Sutton et al., 2000), a reinforcement learning approach.",
"To validate the effectiveness of the proposed inverse adversarial training algorithm, we conduct experiments in order to answer the following two research questions:",
"(1) Do inverse adversarial training help neural dialogue systems model dialogue history better?",
"(2) Do inverse adversarial training help neural dialogue models generate more diverse, engaging, and informative dialogue responses?",
"Datasets We employ two datasets in our experiments.",
"The first dataset is the OpenSubtitles corpus (Lison and Tiedemann, 2016) which is a large, open-domain dataset containing scripts of movie characters.",
"Following previous work, we consider each turn in the dataset as the target response and the two previous sentences as the dialogue history.",
"We remove the pairs whose response is shorter than 5 words and randomly sample 1,800K, 500K, and 12K dialogue turns for training, validation, and testing, respectively.",
"We employ the DailyDialog dataset (Li et al., 2017b) as the second dataset which consists of dialogues that resemble daily conversations across multiple topics.",
"It comprises of 13k dialogues, which is much smaller compared with the OpenSubtitles dataset.",
"However, it has an average of 7.9 turns per dialog, which is more suitable for evaluating whether the proposed approach is able to improve the model's ability of modeling long-term dialogue history.",
"Compared Models We build dialogue systems with seq2seq (Sutskever et al., 2014) models.",
"Following previous work (Li et al., 2017a, 2015), we employ LSTM-based seq2seq model for the OpenSubtitles dataset.",
"For the DailyDialog dataset, we employ the transformer (Vaswani et al., 2017) model which yields superior results in preliminary experiments while shown to perform poorly in modeling dialogue history (Sankar et al., 2019).",
"Specifically, following previous work (Xu et al., 2018), we set the hidden size to 256, embedding size to 128, vocabulary size to 50K, and batch size to 64 for the proposed models and the baselines.",
"We use the Adam optimizer with the initial learning rate 0.1 for model training.",
"We compare the dialogue model trained with the proposed inverse adversarial learning algorithm with the following baseline methods (all compared models are using the same backbone ar-chitecture): Seq2Seq : The vanilla seq2seq dialogue model trained with MLE objective.",
"Seq2Seq + MMI : The dialogue model using mutual information method (Li et al., 2015), which substracts the score of the target sequence log p ( T | S ) by its language model score log p ( T ) (MMI-anti) or by a backward generation score log p ( S | T ) (MMI-bidi) for decoding.",
"Seq2Seq + Adversarial Learning : A dialogue model trained with adversarial learning objective (Li et al., 2017a).",
"The model is pretrained with MLE objective and then fine-tuned with adversarial learning.",
"CVAE : A dialogue response generation model using conditional VAE (Zhao et al., 2017) to improve the discourse-level diversity of generated responses.",
"Our models are pretrained with the MLE objective until the validation perplexity stops decreasing.",
"We then apply the inverse adversarial training algorithm for continual training.",
"During training, reference responses are either generated responses or ground-truth responses in self-supervised and supervised inverse-adversarial training respectively.",
"We combine both supervised and self-supervised inverse adversarial training by alternatively switching between these two objectives for each training iteration.",
"Evaluation Metrics We employ different automated evaluation metrics to respectively answer the three research questions introduced at the beginning of this section.",
"To evaluate how well dialogue systems are able to model dialogue history, we adopt the approach proposed by Sankar et al. (2019), which measures the increases in perplexity when the model is fed with perturbed dialogue history instead of original dialogue history.",
"We report the result in both utterance-level and word-level perturbation.",
"To evaluate if inverse adversarial learning can effectively reduce the generic response problem, following Li et al. (2015), we evaluate the diversity of generated responses by calculating the number of distinct unigrams, bigrams, and trigrams in generated responses.",
"The value is scaled by the total number of generated tokens to avoid favoring long sentences, which are shown as distinct-1, distinct-2, and distinct-3 in Table",
"2. Lastly, we compare the percentage of stop-words 2 of the responses generated by each model (smaller values that are closer to the distribution of human conversations are preferred).",
"We also report the token-level overlap between the generated response and the last utterance in the dialog history to demonstrate the shortcutproblem of MMI-based methods decribed in Section 2.1.",
"As our approach is training in an opposite direction compared to conventional adversarial training employed to enhance the robustness of trained models, we also conduct experiments to evaluate the robustness of the dialogue response generation models with respect to non label-changing adversarial dialogue history.",
"Similar to the method of evaluating the dialogue his-2 Stopword List from https://www.ranks.",
"tory modeling ability, we measure the perplexity changes when the model is given a different but meaning-preserving dialogue history, which is constructed by performing word substitution with a BERT-based lexical substitution method (Zhou et al., 2019) and paraphrase generation (Kumar et al., 2020) as word-level and utterance-level perturbation respectively on the original dialogue history, as the input.",
"In addition, as demonstrated by Liu et al. (2016); Zhou and Xu (2020), automated metrics are notoriously poor for evaluating dialogue systems.",
"We thus conduct a human evaluation to better evaluate the effectiveness of the proposed algorithm.",
"For human evaluation, we invite 20 human annotators which are all graduate students with good English proficiency to evaluate the quality of the model.",
"Following Zhang et al. (2018a), we ask human annotators to interact with compared models for 50 utterances with each compared dialogue system and evaluate the fluency, consistency, and diversity of the model (scored between 15).",
"Fluency measures how likely the generated text is produced by human.",
"Consistency measures how likely the generated text is related to the input dialogue history, which corresponds to the first research question.",
"Diversity measures how much the generated text provides specific information, rather than dull and repeated information, which corresponds to the second research question.",
"Results on dialogue history modeling We first present the results on dialogue history modeling ability.",
"The results are shown in Table",
"1. We can see that the dialogue model trained with the proposed inverse adversarial training algorithm per-Method DailyDialog OpenSubtitles Dist-1 Dist-2 Dist-3 overlap stop-word Dist-1 Dist-2 Dist-3 overlap stop-word Seq2Seq base model 2.32 6.28 9.43 15.6 67.4 1.72 5.37 7.64 22.5 77.8 + MMI-anti 4.15 11.27 19.61 26.7 62.4 3.45 11.35 18.12 30.1 74.2 + MMI-bidi 3.52 9.29 17.43 31.5 63.1 3.52 12.11 18.56 37.8 74.7 + AL 2.25 6.01 9.39 16.1 66.8 2.97 5.44 7.46 23.5 76.4 + DS 3.19 7.84 11.61 18.4 61.5 3.05 6.30 11.59 21.3 71.2 + CVAE 3.59 9.41 12.93 17.7 61.1 3.35 10.13 17.02 22.5 71.4 + IAT 3.72 9.81 14.93 15.4 60.9 3.29 10.16 17.30 20.8 70.9 Table 2: Results of the diversity of generated responses of compared models.",
"forms significantly better than the compared baselines as the perplexity dramatically increases when the input dialogue history is perturbed.",
"This is not surprising as our approach is the first learning objective which explicitly forces the dialogue system to better model dialogue history.",
"In contrast, the MMI criterion and the adversarial learning objective do not significantly influence the dialogue history modeling ability of dialogue systems.",
"The dialogue model based on CVAE models dialogue history better than other baselines while still under-performs our approach.",
"Reults on diversity The results of the diversity of responses generated by compared models are shown in Table",
"2. We can see that both the Maximum Mutual Information objective and the proposed inverse adversarial learning succeed in improving the diversity of generated responses.",
"In contrast, the adversarial learning objective hardly improves the diversity, which may be due to the instability of adversarial learning on text generation.",
"While the MMI objective yields slightly larger improvements on distinct n-gram based metrics, their Method Fluency Consistency Diversity Seq2Seq base model 2.83 2.69 3.05 + MMI-anti 2.73 2.78 3.10 + MMI-bidi 2.80 2.82 3.02 + AL 2.77 2.69 2.91 + DS 2.85 2.88 3.12 + CVAE 2.93 2.91 3.19 + IAT 3.02 3.05 3.34 Table 4: Human evaluation results of compared model on the DailyDialog dataset.",
"approach is used only for re-ranking during inference, which is orthogonal and may be complementary to the proposed approach.",
"In addition, as described in section 2.1, the MMI objective may favor non-engaging responses that simply repeats the last utterance in the dialogue history.",
"This is empirically demonstrated by their high overlap with the last utterance in the dialog history, as measured by the overlap metric.",
"In contrast, our approach does not suffer from this problem and also generate fewer stop-words compared to the MMI-based methods.",
"In addition, our approach also outperforms the strong baselines including that using distributional constraint and CVAE, demonstrating its effectiveness in improving the diversity of generated responses.",
"Results on adversarial robustness We also conduct experiments to test the robustness of the dialogue model trained with the proposed inverse adversarial training objective.",
"The results are shown in Table",
"3. We see that the increase in the perplexity of ground-truth responses under our model is roughly the same with the baseline transformer model and the other compared mod-Source how long will it take us to drive to London ?",
"els.",
"This suggests that our proposed IAT objective does not harm the adversarial robustness.",
"Human evaluation We conduct a human evaluation of compared models on the DailyDialog dataset.",
"The results are shown in Table",
"4. We can see that the proposed inverse adversarial training objective substantially improves the consistency of the dialogue model over all compared baselines, which confirms its ability to train dialogue agents to model dialogue history better.",
"As for the diversity of generated responses, we find that human annotators do not prefer the responses selected by the MMI objective over that generated by the baseline model with a large margin.",
"We find that this is mainly because the MMI objective prefers repeating tokens which appear in the last utterance and human annotators find it non-informativeness.",
"In contrast, our approach yields even larger improvements in the diversity of the generated responses.",
"We do not find the adversarial learning method improves the diversity of dialogue models, which may be due to the problem of mode collapse in adversarial learning.",
"The over-all fluency of compared models is roughly the same, which may be because they are all trained or pretrained with MLE objective.",
"To better compare and analyze the inverse adversarial training objective, we conduct a qualitative",
"analysis of dialogue responses generated by different compared models.",
"The samples are presented in Table",
"5. We can see that the vanilla transformer-based dialogue response generation model tends to generate irrelevant and generic responses.",
"Applying the MMI objective for re-ranking successfully avoids those generic responses.",
"However, it leads to another kind of non-informative response that repeats the majority of tokens in the latest utterance, which is also quite unnatural.",
"In contrast, dialogue models trained with the proposed inverse adversarial training objective tend to generate more diverse responses which are also more relevant to the dialogue history.",
"To better understand the relative importance of different components in the proposed inverse adversarial",
"adversarial training objective, we conduct an ablation study with human evaluation to compare different model variants against the full model.",
"The results are shown in Table",
"6. We can find that both supervised-only and self-supervised-only variant of the proposed inverse adversarial training algorithm can improve the consistency and the diversity of dialogue models.",
"However, self-supervised inverse adversarial training seems to sacrifice the fluency of generated responses for better diversity and consistency as the model trained without the self-supervised objective are considered to be more fluent by human annotators.",
"The usefulness of the reward and the penalty objectives is also demonstrated by human evaluation.",
"Concretely, we find that the reward described in",
"Eq.(3) contributes more to the diversity of generated responses.",
"This may be because it assigns high rewards for relevant and specific responses and negative rewards for generic responses.",
"In contrast, the penalty in",
"Eq.(5) helps the dialogue system model dialogue history better and leads to more consistent responses by punishing the dialogue model when generating the same responses given perturbed dialogue history.",
"As for different perturbation approaches, we find that both utterance-level and token-level contributes to the performance improvements.",
"Also, we find that utterance-level perturbation may be more effective for improving the consistency of generated responses.",
"We suspect this may be because the ability of the dialogue model to distinguish utterance-level perturbation is more important for better dialogue history modeling.",
"In this work, we introduce inverse adversarial training (IAT) algorithm that is able to simultaneously reduce the dull response problem and help neural dialogue systems model dialogue history better.",
"IAT measures the relevance and consistency of responses by the difference of their likelihood conditioning on either original and perturbed dialogue history.",
"In this way, it is able to prevent the dialogue system from preferring generic responses, even they are often of high frequency in the training corpora.",
"Our method also encourages the dialogue agent to model dialogue history better by penalizing the model when generating the same responses given perturbed dialogue history.",
"Experimental results on two benchmark datasets show that the proposed inverse adversarial training algorithm helps dialogue models capture dialogue history better and generate more diverse and consistent responses.",
"We also identify a limitation of the widely-used MMI based methods for improving the diversity of dialogue response generation models and empirically demonstrate the existence of this problem through our experimetns.",
"This work does not involve collection and release of data, nor inference of information or judgments about individuals.",
"However, dialogue systems may have a social impact and we believe that making dialogue agent able to generate more meaningful and consistent responses are beneficial.",
"We also agree that general control on the bias or unfairness of neural dialogue agents is important.",
"We believe this can be done from both the perspective of data collection and training algorithms.",
"We believe our proposed training algorithm will likely not contribute to any ethical concern of chat robots.",
"We thank the anonymous reviewers for their valuable comments."
] | [
"objective",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other"
] |
[
"Neural discrete reasoning (NDR) has shown remarkable progress in combining deep models with discrete reasoning.",
"However, we find that existing NDR solution suffers from large performance drop on hypothetical questions, e.g., what the annualized rate of return would be if the revenue in 2020 was doubled .",
"The key to hypothetical question answering (HQA) is counterfactual thinking, which is a natural ability of human reasoning but difficult for deep models.",
"In this work, we devise a Learning to Imagine (L2I) module, which can be seamlessly incorporated into NDR models to perform the imagination of unseen counterfactual.",
"In particular, we formulate counterfactual thinking into two steps: 1) identifying the fact to intervene, and 2) deriving the counterfactual from the fact and assumption, which are designed as neural networks.",
"Based on TAT-QA, we construct a very challenging HQA dataset with 8,283 hypothetical questions.",
"We apply the proposed L2I to TAGOP, the state-of-the-art solution on TAT-QA, validating the rationality and effectiveness of our approach.",
"Neural discrete reasoning (Dua et al., 2019) is an emerging technique for machine reading comprehension (Rajpurkar et al., 2016) which aims at answering numerical questions from textual (Dua et al., 2019) or hybrid (Zhu et al., 2021) context 1 .",
"NDR combines deep neural network with discrete and symbolic reasoning ( e.g., addition, sorting, or counting) (Dua et al., 2019) and enables the comprehension of complex contexts and compositional questions, which is critical for many practical applications such as automatic diagnosis (Wei et al., 2018) and robo-advisor (Fisch et al., 2019).",
"Existing state-of-the-art NDR models implement the nuCorresponding author.",
"1 where hybrid includes textual and tabular data in this work merical reasoning process as neural network modules (Ran et al., 2019; Herzig et al., 2020; Zhu et al., 2021), e.g., a graph neural network for sorting (Ran et al., 2019; Chen et al., 2020a).",
"In this work, we extend NDR to hypothetical question answering (HQA), where the question consists of an assumption beyond the context (Fig-ure 1).",
"The ability of HQA will undoubtedly enhance the practical use of NDR due to the universality of hypothetical questions.",
"However, current NDR models face severe generalization failure on hypothetical questions.",
"An empirical evidence on such vulnerability is that the state-of-the-art model (Zhu et al., 2021) encounters a sharp performance drop (F1 score drops from 68.6% to 3.8%) on the TAT-QA dataset when changing the questions to be hypothetical by adding a related assumption (see details in Section 2, Table 3).",
"We postulate that the failure is due to unable of imagining the counterfactual context according to the assumption (Figure 1).",
"To pursue such reasoning ability, we resort to the concept of counterfactual thinking (Pearl, 2019) from the theory of causality, which is the ability to imagine and reason over unseen cases based on the seen facts and counterfactual assumptions.",
"In this light, we consider modeling counterfactual thinking as neural network modules that can be seamlessly incorporated into existing NDR models.",
"One straightforward solution is to model counterfactual thinking as a generation procedure with the fact and assumption as inputs by using a generation model such as GPT (Brown et al., 2020).",
"However, such uncontrollable model (Zou et al., 2021) can hardly generate high-quality context for two reasons: 1) the context is more complex than plain text, which can include a table (Figure 1); and 2) NDR requires a precise context with the correct numbers (Figure 1, $132,935 for the finished goods in 2019 ).",
"Therefore, we resort to an alternative approach: constructing the counterfactual 57 What was the change in finished goods from 2018 to 2019?",
"by intervening on the factual context.",
"As shown in Figure 1, the assumption changes one entry in the table, e.g., $133,682 to $132,935 .",
"This is coherent with the causal inference theory (Pearl, 2009) where the target variable is intervened according to the hypothetical condition to infer a counterfactual.",
"We propose Learning to Imagine, where the counterfactual thinking is implemented with two intervening steps: 1) identifying the facts to intervene, and 2) deriving the result of intervention.",
"To pursue accurate context, we derive the intervention with a set of discrete operators such as SWAP and ADD for imagination.",
"To evaluate the counterfactual thinking ability, we recruit volunteers with domain expertise to construct an HQA dataset based on TAT-QA (Zhu et al., 2021) by posting an assumption for each original question, named TAT-HQA.",
"We apply L2I to TAGOP (Zhu et al., 2021), and obtain a promising solution for HQA.",
"In summary, the main contributions are as follows: We highlight the importance of counterfactual thinking in NDR and formulate counterfactual thinking as an intervening procedure to achieve precise imagination.",
"We devise the L2I module, which is designed as neural network operations and can be seamlessly incorporated into the NDR model for answering hypothetical questions.",
"We construct a challenging HQA dataset and conduct extensive experiments on the dataset, where the performance validates the rationality and effectiveness of the proposed L2I.",
"In the general setting of machine reading comprehension, the task is to answer a question according to the facts in a given context.",
"Formally, it is to learn a function y = f ( q , c ) , where y , q , and c are the word list representing the answer, the question, and the context 2 respectively.",
"This work studies a new and more challenging task that focuses on hypothetical question.",
"As shown in Figure 1, a hypothetical question includes an assumption, e.g., if the amount in 2019 was $132,935 thousand instead .",
"The target of HQA is to learn y = f ( q , c , a ) where a denotes the assumption.",
"The existence of an assumption calls for the imagination of a counterfactual context before inferring the answer, pushing the NDR model to grasp both semantic understanding and counterfactual thinking.",
"To facilitate the evaluation of HQA and diagnose counterfactual thinking, we construct an HQA dataset based on TAT-QA (Zhu et al., 2021), which is a QA dataset with a mix of tabular and textual context extracted from financial reports.",
"Inspired by previous work on constructing counterfactual samples (Kaushik et al., 2019), we recruit college students with finance-related majors to imagine an intervention based on the factual question and context from TAT-QA which involves numerical thinking, e.g., a change of number.",
"Then they phrase the intervention into an assumption, forming a what if type of question, and calculate the answer (see an example in Figure 1).",
"To ensure the diversity of the phrasing, annotators are free to generate various phrasing of the assumption, and there is no restriction on the position of the assumption.",
"Usually, the assumption appears either before of after the factual question.",
"Each hypothetical question is related to one factual question from TAT-QA, but each factual question in TAT-QA is not guaranteed to have one hypothetical question.",
"We follow the quality control approaches of annotator training and two-round validation in TAT-QA to guarantee the quality of the hypothetical questions.",
"Following TAT-QA, the hypothetical questions are also labeled with four answer types: arithmetic , span , count , and multi-span , three types of answer sources: table, text and table-text, and a derivation on how the answer is derived from the context.",
"In total, we obtain 8,283 hypothetical questions, naming it as TAT-HQA.",
"The statistics of TAT-HQA are shown in Table",
"1. We follow the split of training, testing and validation set of TAT-QA as shown in Table",
"2. We conduct a pilot study on the generalization ability of existing NDR models on hypothetical questions.",
"In particular, we evaluate TAGOP (Zhu et al., 2021), which is the state-of-the-art model on TAT-QA (see detailed settings in Section 4.1) by training on TAT-QA and testing on TAT-HQA.",
"In Table 3, the huge performance drop shows that even the state-of-the-art NDR model lacks counterfactual thinking ability.",
"We aim to empower NDR models with counterfactual thinking ability.",
"Firstly, we decide to choose the approach of explicitly modeling discrete operations, since existing NDR solutions have demonstrated its superiority (Dua et al., 2019; Ran et al., 2019; Herzig et al., 2020; Zhu et al., 2021).",
"We devise a Learning to Imagine module to model counterfactual thinking (Section 3.1), and then incorporate the L2I module (Section 3.2) into existing NRD methods, followed by a discussion about potential extensions (Section 3.3).",
"Functionally speaking, the L2I module aims to construct a counterfactual context based on the factual",
"context and the assumption.",
"We formulate it as: c (cid:48) = g ( c , a ) , where the counterfactual context c (cid:48) is the status of the context c after the assumption a is executed.",
"Resorting to the language of causality, it can be expressed as the do -operation that intervenes a variable to execute the assumption and the action to derive the outcome of the intervention 3 (Pearl, 2009).",
"The key to achieving counterfactual thinking in NDR lies in: 1) parsing the assumption to identify the target fact to intervene; and 2) deriving the assumed value to construct the counterfactual context.",
"Taking the hypothetical question in Figure 1 as an example, an ideal L2I should recognize the target variable ( finished goods in 2019 ), identify the corresponding fact ( $133,682 ), and replace the fact with the assumed value ( $132,935 ).",
"Two-step Formulation.",
"To this end, we propose a two-step formulation of counterfactual thinking for HQA to perform the identification and derivation.",
"Formally, Step 1: i = r ( c , a , q ) (1) Step 2: c (cid:48) i = d ( c i , c , a ) , c (cid:48) j = (cid:40) c (cid:48) j , j = i, c j , otherwise .",
"Step 1: Identifying the target fact.",
"r ( ) denotes the tagging function which scans the factual context c to recognize the fact related to the assumption a and the question q .",
"i is the word position of the identified fact c i .",
"Step 2: Deriving intervention result.",
"d ( ) denotes the deriving function that parses the assumption a to infer the discrete operation and the premise to derive the assumed value c (cid:48) i .",
"As to the assumption in Figure 1, the derivation requires a SWAP operation and a premise $132,935 .",
"This step then calls for an editing operation to construct the counterfactual context c (cid:48) .",
"Module Design.",
"Based on the two-step formulation, we then design the L2I module as neural network operations.",
"We have two considerations for the module design: 1) the module should recognize the semantic connection between the assumption and the context, and 2) the module should uniformly support various discrete operations to 3 Note that we adopt the do -expression (Pearl, 2009) of counterfactual.",
"enable accurate derivation.",
"To this end, we devise four key building blocks for the L2I module: Encoder .",
"It projects the raw content into latent representation.",
"Inspired by the recent research on NDR, we employ a pre-trained language model (PLM), i.e., RoBERTa (Liu et al., 2019), as the encoder to learn an overall representation of the context, question, and assumption; H = PLM ([ CLS , c , SEP , { q , a } , SEP ]) (2) where L and M are the length of the tokenized inputs.",
"CLS and SEP denote the beginning and the separation token of the input.",
"{ q , a } represents that the relevant position of a to p can vary.",
"We do not assume q to always precede a due to the various location of a in the annotation.",
"Matching block .",
"It distills the semantic connection between the factual question, the factual context and the hypothetical assumption (Figure 1, amount in 2019 and $132,935 ).",
"After applying the token-level self-attention of PLM, we aim to further distill the sequence-level semantic connection between the factual part (the question and the context) and the hypothetical part (the assumption).",
"We obtain the factual and assumption representations by masking H according to the position of the question, the context and the assumption, which splits H into 2 nonoverlapping parts.",
"Inspired by the success of cross-attention (Kim et al., 2018) in associating different sources, e.g., image-image (Hou et al., 2019) and image-text (Lu et al., 2019), we adopt cross-attention between the factual representation and the assumption representation, followed by self-attention respectively.",
"Formally, the calculation of the k -th layer is, H f = mask ( H , pos ( { c, q } )) H a = mask ( H , pos ( a )) H kf = MHA (cid:16) H k 1 f , H k 1 a , H k 1 a (cid:17) H ka = MHA (cid:16) H k 1 a , H k 1 f , H k 1 f (cid:17) H kf = MHA (cid:16) H kf , H kf , H kf (cid:17) H ka = MHA (cid:16) H ka , H ka , H ka (cid:17) where MHA ( ) denotes the multi-head attention (Vaswani et al., 2017) with a triple of query, key, and value as the input.",
"The residual connection and batch normalization are applied as the default choice.",
"mask ( ) denotes the masking operation, and pos ( x ) is a binary vector with the same length of H denoting the positions of x in the input of PLM.",
"Tagging head .",
"It models the identification of target fact as a token-wise tagging.",
"Formally, t i = (cid:40) 1 , ( j ) , argmax ( p j ) = 1 h Kj (cid:55) c i , 0 , otherwise .",
"p j = softmax ( MLP (cid:16) h Kj (cid:17) ) (3) where t i is a binary tag for the fact c i .",
"c i will be a target as at least one of its tokens is tagged.",
"We use h Kj (cid:55) c i to represent the mapping between token and fact, which is true if token j belongs to fact c i .",
"For each token, we employ a 2-way classifier MLP (cid:16) h Kj (cid:17) to predict its probability of being tagged as p j where argmax ( p j ) = 1 means positive (see Appendix A for more de-tails).",
"Deriving head .",
"It derives the intervention result for the target fact.",
"To calculate the intervention result, we select a set of commonly used discrete operators such as SWAP , ADD , and MINUS ( cf. Appendix B).",
"Then, we model the derivation as making a choice across the operators and tagging the premise for executing the operator.",
"In particular, we adopt a tagging head to identify the premise and a multi-way classifier for choosing operators, which is formulated as: o = softmax ( MLP ( h CLS ) ).",
"o RO is a distribution over the operators where O denotes the number of operators.",
"h CLS corresponds to the CLS token in H .",
"Most recent NDR models (Ran et al., 2019; Andor et al., 2019; Chen et al., 2020a; Herzig et al., 2020; Zhu et al., 2021) consist of two main modules: 1) a PLM to encode the context and the question into latent representations, and 2) a reasoning module that chooses the discrete operator and identifies the operands according to the latent representations.",
"As shown in Figure 2, we can seamlessly incorporate the proposed L2I into such NDR model as an intermediate module, which performs imagination before discrete reasoning.",
"In particular, we simply let the reasoning module conduct operand look-up within the counterfactual context constructed by L2I.",
"Besides, we let L2I reuse the PLM in the NDR model to reduce the model complexity and training time.",
"Model training.",
"Existing NDR methods typically follow the supervised learning paradigm to optimize the model parameters (Dua et al., 2019).",
"Suppose we have a set of labeled questions D = { < y , ( q , c , a ) > } , the training objective can be abstracted as min (cid:80) DQA ( y , f ( q , c , a )) where denotes model parameters.",
"Note that QA ( ) measures the discrepancy between the ground-truth and the predicted answers which can have different formats.",
"For instance, it can be a combination of the cross-entropy (CE) loss over the operand look-up and the CE loss over the choice of discrete operation (Herzig et al., 2020; Yin et al., 2020; Zhu et al., 2021).",
"When applying L2I to an existing NDR method, we keep its question-answering objective unchanged.",
"To optimize the L2I module, we incorporate supervision on the classifiers in the tagging head and deriving head.",
"Formally, min (cid:88) D (cid:16) QA (cid:0) y , f ( q , c , a ) (cid:1) + 1 L (cid:88) j<L CE (cid:0) p j , MLP ( h Kj ) (cid:1) + CE (cid:0) o , MLP ( h CLS ) (cid:1)(cid:17) , (4) where p j { 0 , 1 } denotes the label of the target fact (token j in context) or the premise (token j in assumption); and o RO is the label of the deriving operator (see Appendix C for the details of label construction).",
"Readers might have raised the following two concerns for L2I: 1) the operators defined are limited, and 2) the operators are tailored to one step of derivation on one target fact.",
"Actually, it is a common approach for current state-of-the-art NDR models to apply a set of defined operators (Ran et al., 2019; Chen et al., 2020a; Zhu et al., 2021).",
"For the first concern, by doing more fine-grained classification on the numerical reasoning process in the dataset, we can derive new operators and simply plug them into L2I.",
"Note that the annotation of numerical intervention of TAT-HQA does not follow the defined operators in Appendix C, but the operators are summarized from the data.",
"Our defined operators can cover over 90% of the training data.",
"For the second concern, we discuss two potential solutions by our L2I framework, and we leave the implementation as future work.",
"Multi-fact intervention.",
"The assumption a can include intervening multiple facts, e.g., if the Finished goods in 2018 and 2019 were both doubled .",
"Apparently, if the target facts are independent, we can easily handle such an assumption by executing L2I in multiple iterations.",
"In other cases, L2I needs to recognize the relationship among the target facts.",
"If such relationship is available, L2I should be able to handle such cases as the corresponding multivariable operator is added to the deriving head.",
"Multi-iteration derivation.",
"In causal inference, a rigorous derivation of an intervention considers the successors of the target variable, e.g., finished goods in 2019 affects total inventories in 2019 .",
"Currently, we omit the following iterations in Step 2 of L2I ( cf. Eq 1).",
"This is because not all successors are necessary for answering the question.",
"For instance, answering the question in Figure 1 does not require the post-intervention value of total inventories in 2019 .",
"In conventional causal inference, such successors will also be omitted according to the local surgery principle (Pearl, 2009).",
"Moreover, we believe that the following iterations can be achieved by the current L2I module in an iterative manner.",
"Assume that NDR model equipped with L2I can answer the hypothetical questions requiring one-iteration derivation ( i.e., c i c (cid:48) i ).",
"We can thus derive the value of successors ( e.g., c (cid:48) i c (cid:48) j ) by forming a simple hypothetical question: What c j would be if c i is c (cid:48) i ? and answering it with the NDR model.",
"We conduct experiments on TAT-HQA dataset to answer the following questions: RQ1: How does L2I perform on HQA?",
"RQ2: What factors influ-61 Table 4: Performance of compared methods on the TAT-HQA dataset.",
"ence the effectiveness of L2I?",
"Following Dua et al. (2019) and Zhu et al. (2021), we evaluate the performance with two commonly used metrics: Exact Match (EM) and numerically-focused F 1 score, where higher value (in [0, 100]) means better performance.",
"We tune the hyper-parameters on the validation set, and report the average test performance of five different runs.",
"Compared methods.",
"To validate the effectiveness of our proposed L2I module, we apply it to TAGOP, obtaining an NDR model for HQA, named TAGOP-L2I.",
"In addition to the vanilla TAGOP, we compare our method against representative methods of traditional QA, numerical QA, tabular QA, and hybrid QA.",
"Besides, we want to select baselines that are effective for learning counterfactual samples.",
"The baselines are: BERT-RC (Devlin et al., 2019), a traditional QA method that selects answer spans from the context.",
"NumNet+ V2 (Ran et al., 2019), a numerical QA method with numerically-aware graph neural network.",
"TAPAS-WTQ (Herzig et al., 2020), a tabular QA method that focuses on parsing and understanding tables, pre-trained over tables collected from Wikipedia before training on TAT-HQA.",
"HyBrider (Chen et al., 2020c), a hybrid QA method that considers the connection between the table and text.",
"TAGOP , a hybrid QA method that performs discrete reasoning over both the tabular and textual contexts.",
"It is the state-of-the-art method on TAT-QA dataset.",
"TAGOP-CLO , incorporating the Contrastive Learning Objective (CLO) into the training objective of TAGOP, which is shown to be effective in learning the relationship between factual and counterfactual samples (Liang et al., 2020).",
"Parameter settings.",
"We implement TAGOP-L2I based on TAGOP 4 .",
"We set the number of cross-attention layers to 3, and fine tune from TAGOP trained on TAT-QA with a learning rate of 5e-5, batch size of 32, and gradient accumulation step of 4.",
"All compared methods are initialized with 4 https://github.com/NExTplusplus/ TAT-QA .",
"the model trained on TAT-QA and then fine-tuned on TAT-HQA.",
"For TAGOP-CLO, we conduct max pooling for H and adopt cosine similarity as the distance metric.",
"We select the corresponding factual question as the positive sample and a randomly selected factual question as the negative sample.",
"The weight for the contrastive loss is 0.1.",
"Overall performance.",
"Table 4 shows the performance of the compared methods on the TAT-HQA dataset.",
"We can observe that: 1) TAGOP-L2I achieves the best performance among all the compared methods.",
"In particular, it outperforms the best baselines by 19.8% and 19.7% on EM and F 1 , respectively.",
"Such significant performance gain validates the effectiveness of the L2I module and reveal the rationality of modeling counterfactual thinking as a neural network module.",
"2) TAGOP-CLO outperforms TAGOP by 10.5% and 10.4% on EM and F 1 .",
"The only difference between these two methods is that TAGOP-CLO incorporates an extra CLO.",
"The improvement indicates that learning the relationship between the factual and counterfactual samples with CLO provides some clue for counterfactual imagination, yet it is still worse than directly learning to imagine with neural network modules.",
"3) As to the remaining methods, their performance has a clear gap between TAGOP, which is consistent with the result on the TAT-QA dataset (Zhu et al., 2021).",
"This is because both datasets have textual and tabular texts, where the ability of TAGOP to perform discrete reasoning across hybrid contexts brings significant advantages.",
"4) The performance achieved is still low w.r.t. the two metrics ( e.g., 54.4 100), showing a large space for future exploration on the challenging TAT-HQA dataset.",
"Detailed performance.",
"To further investigate the effectiveness of the proposed L2I module, we perform a detailed comparison between TAGOP-L2I and TAGOP w.r.t. the discrete operation required in answering the question or counterfactual thinking.",
"We group the questions according to 1) the answer type and 2) the operator to derive the intervention.",
"Table 5 shows the group-wise 62 Table 5: Detailed performance of TAGOP-L2I and TAGOP w.r.t. answer type and deriving operator type.",
"performance.",
"As to answer type (the left half), we have the following observations: 1) TAGOP-L2I outperforms TAGOP on all groups, showing the superior ability of learning to imagine to all types of questions.",
"2) Particularly, on the arithmetic group, which is also the largest group ( cf. Table 1), TAGOP-L2I largely outperforms TAGOP.",
"For this group, the key difference between TAGOP-L2I and TAGOP is whether the derivation of intervention and calculation of the answer are achieved by separate modules.",
"The superior performance of TAGOP-L2I validates the rationality of modeling counterfactual thinking as a separate module.",
"It should be noted that the separation also facilitates the generalization to new operations since the modules can be separately updated.",
"3) The performance of TAGOP on arithmetic has a large gap with other types, showing that arithmetic questions are more difficult to conduct imagination and reasoning even though arithmetic makes up the majority of TAT-HQA data.",
"As to TAGOP-L2I, the gap between arithmetic question and other types of question largely reduces, validating the effectiveness of learning intervention with discrete operators and neural network modules.",
"As to operator types (the right half), we observe that: 1) TAGOP-L2I achieves imagination on the majority of operator types with better performance than TAGOP, yet TAGOP can only achieve imagination on a few operator types.",
"The better performance of TAGOP-L2I is attributed to modeling the deriving operations as specific operators.",
"We thus believe that TAGOP-L2I can generalize well to more deriving operations by simply incorporating the operators, as long as the corresponding training questions are not rare.",
"This result thus reflects the advantage of the unified operator framework adopted by the L2I module, which is consistent with previous work (Andor et al., 2019).",
"2) Across the groups, TAGOP achieves relatively good performance on the SWAP group, which replaces the target fact with a number in the assumption.",
"It corresponds to the simplest imagination since the assumed value ( i.e., c (cid:48) i ) is explicitly mentioned in the assumption.",
"Therefore, the result shows that the NDR model can achieve simple counterfactual thinking by learning to answer hypothetical questions.",
"However, such indirect guidance on imagination fails on the groups requiring more complex imagination, e.g., requiring add or minus.",
"3) TAGOP-L2I achieves the worst performance on SWAP MIN NUM , which is merely comparable to TAGOP.",
"We suspect the reason is that the operation of SWAP MIN NUM is very close to SWAP , which may confuse the deriving head when making classification over the operators.",
"To address this issue, it is worth considering the operator relation in the deriving head in the future.",
"Study on L2I module design.",
"We then explore the influence of network architecture on the effectiveness of the L2I module from three perspectives: 1) module depth; 2) configuration of the matching block; and 3) the setting of PLM.",
"Figure",
"3(a) shows the validation result of TAGOP-L2I as increasing the matching block from 1 to 4 layers.",
"We can observe that: 1) Stacking more layers does not always bring performance gain.",
"2) In particular, three layers of matching block achieve the best performance on TAGOP-L2I.",
"The result indicates that three layers should be sufficient to capture the semantic connection across the context, question and assumption.",
"This is reasonable since the average length of both assumption and question are only around 10 words ( cf. Table 2).",
"As to the architecture of the matching block, we evaluate three variants from the default choice ps, self-a which enables parameter sharing across layers ( i.e., ps ) and applies both cross-MHA on the factual and assumption representations and self-MHA for each of them ( i.e., self-a ).",
"The three variants are: 1) p-s, w/o self-a , which removes self-MHA; 2) w/o p-s, self-a , which disables parameter sharing; and 3) w/o p-s, w/o self-a , which adopts both changes.",
"Figure",
"3(b) shows the performance of the four versions of TAGOP-L2I with 63 Figure 3: Performance of TAGOP-L2I under difference module configurations.",
"three layers of the matching block.",
"From the figure, we can observe that: 1) The default choice largely outperforms the variants, validating the rationality of our module design.",
"2) Disabling parameter sharing hinders the counterfactual thinking, which indicates that keeping the same parameters through the process of matching factual and assumption representations is beneficial for extracting the semantic correlation.",
"3) Removing self-MHA also leads to sharp performance drop, which justifies the contribution of self-MHA in the L2I module.",
"It is thus essential to also separately process the semantic information of the factual and the assumption representations in the matching block.",
"We also conduct experiments on fixing the parameter of PLM during training on TAT-HQA as initialized by TAT-QA.",
"The performance drops to EM 48.5 and F 1 49.0.",
"Fixing the parameter of PLM largely impedes the performance of TAGOP-L2I on TAT-HQA, showing that encoding factual and hypothetical questions requires different mechanisms.",
"To further investigate the difference in answering factual and hypothetical questions, we test TAGOP-L2I on TAT-QA.",
"The result in Figure 4 shows that training on TAT-HQA causes a performance drop in counting , span and multi-span groups of TAT-QA, and performs similar on the in arithmetic group.",
"We conjecture the performance drop in the first three groups is because the question-answering label in TAT-HQA under the same c and q is different from TAT-QA.",
"However, for arithmetic questions, the question-answering label for one pair of c and q remains the same between TAT-HQA and TAT-QA, and the intervention is achieved explicitly by Figure 5: Group-wise performance of TAGOP-L2I and TAGOP-L2I-T w.r.t. operator type.",
"deriving operators and tagging head.",
"Study on L2I training objective.",
"We then investigate the influence of imagination-oriented training objectives on the effectiveness of L2I.",
"In particular, we evaluate a variant TAGOP-L2I-T trained only with the question-answering objective ( i.e., QA ( ) ).",
"That is, TAGOP-L2I-T learns to implicitly imagine the final answer.",
"Figure 5 shows the group-wise performance of TAGOP-L2I and TAGOP-L2I-T w.r.t. the type of operator for deriving the intervention.",
"We can observe the followings.",
"1) On most groups, TAGOP-L2I largely outperforms TAGOP-L2I-T, demonstrating the rationality of learning to imagine explicitly.",
"2) On SWAP group TAGOP-L2I-T achieves comparable result to TAGOP-L2I.",
"As SWAP is the simplest deriving operator, the result shows that the implicit guidance can achieve simple imagination, yet is still less effective than the explicit manner.",
"3) TAGOP-L2I-T achieves better performance on SWAP MIN NUM group.",
"As SWAP MIN NUM is a rare operator ( cf. Table 6) and involves the most complex imagination process ( cf. Appendix B), we conjecture that learning complex operators is more difficult than implicitly learning.",
"This may shed light on the rules of deriving new operators that simple operators with ample training data is preferred over complex operators with less training data.",
"Counterfactual thinking.",
"Existing research incorporates counterfactual thinking into deep models from two main perspectives: counterfactual training and counterfactual inference .",
"Counterfactual sample has become an emerging data augmentation technique in computer vision (Chen et al., 2020b) and natural language processing (Kaushik et al., 2019) to enhance model robustness.",
"For instance, the technique is applied in visual QA (Chen et al., 2020b; Agrawal et al., 2018; Agarwal et al., 2020; Gokhale et al., 2020), vision-language navigation (Fu et al., 2020; Par-vaneh et al., 2020), table entailment (Eisenschlos et al., 2020), sentiment analysis (Kaushik et al., 2019; Yang et al., 2020), natural language inference (Kaushik et al., 2019), named entity recognition (Zeng et al., 2020), and dialogue system (Zhu et al., 2020).",
"Along this line, a series of studies explore how to maximize the effect of counterfactual samples by combining with different learning paradigms, such as adversarial training (Zhu et al., 2020; Fu et al., 2020; Teney et al., 2020), contrastive learning (Liang et al., 2020), causal graph (Gokhale et al., 2020), posterior regularization (Ramakrishnan et al., 2018), and designing new learning paradigms (Gokhale et al., 2020).",
"A few studies along this line also generate counterfactual samples with neural networks (Sauer and Geiger, 2021; Yue et al., 2021).",
"They are inherently different from our work due to their reliance on causal graph and the causal expression of the hypothetical condition for improving robustness.",
"Moreover, they supervise the generation with other related tasks such as image classification.",
"In contrast, we formulate imagination as an explicit learning objective, i.e., learning to imagine.",
"Additionally, in commonsense reasoning, counterfactual samples are also utilized through hyperbole generation (Tian et al., 2021), story generation (Qin et al., 2019) and commonsense QA(Huang et al., 2019), which is also a related yet different strand of research.",
"Another line of research performs counterfactual inference over the predictions of deep model to incorporate counterfactual thinking (Yue et al., 2021; Wang et al., 2021; Niu et al., 2021; Tang et al., 2020).",
"However, they perform counterfactual inference according to causal graph which is not available in NDR tasks.",
"Neural discrete reasoning.",
"Recent research on NDR focuses on enhancing the discrete reasoning ability of deep models in two main directions: reasoning with more discrete operations (Dua et al., 2019; Ran et al., 2019; Chen et al., 2020a) and reasoning over more complex context .",
"For instance, NumNet (Ran et al., 2019) and QDGAT (Chen et al., 2020a) leverage graph neural network to enhance comparison oriented operations.",
"GenBERT (Geva et al., 2020) uses pre-trained language models to generate the numerical answer, which breaks the limitation of fixed operators.",
"NMN (Gupta et al., 2019) and FinQA (Chen et al., 2021) model the discrete reasoning process as executing programs.",
"As to extending the context, several studies try to enable the NDR model to operate on context with semi-structured tabular data and hybrid data (Chen et al., 2020c; Herzig et al., 2020; Chen et al., 2021).",
"Our paper studies the hybrid data, yet extends the scope of NDR to hypothetical questions.",
"Moreover, beyond the ability of discrete operations, the main idea is to endow NDR models with the ability to think counterfactually.",
"In this work, we pointed out a key issue of existing NDR models: lacking counterfactual thinking.",
"We proposed an L2I module, which can imagine the counterfactual according to a textual assumption.",
"By applying the proposed module in the NDR model, we enable the model to answer hypothetical questions.",
"We constructed a HQA dataset and conducted extensive experiments on the dataset, which validates the effectiveness of our method.",
"This work opens up a new research direction about modeling counterfactual thinking through neural network.",
"In the future, we will further extend the L2I from the following perspectives: 1) handling of multiple interventions; 2) rigorous derivation of intervention with consideration of successors; 3) incorporation of the relations across the deriving operators; and 4) construction of complex operators by dynamically combining basic operators.",
"Moreover, we will explore the translation between assumptions in natural language and causal expression to further connect the L2I framework with conventional causal theory, and facilitate automatic causal inference with neural network.",
"This work is supported by Sea-NExT Joint Lab, Singapore MOE AcRF T2, and Natural Science Foundation of China (Grant No. U21B2026)."
] | [
"abstain",
"result",
"abstain",
"method",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"result",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"objective",
"objective",
"method",
"objective",
"objective",
"objective",
"other"
] |
[
"End-to-end models for speech translation (ST) more tightly couple speech recognition (ASR) and machine translation (MT) than a traditional cascade of separate ASR and MT models, with simpler model architectures and the potential for reduced error propagation.",
"Their performance is often assumed to be superior, though in many conditions this is not yet the case.",
"We compare cascaded and end-to-end models across high, medium, and low-resource conditions, and show that cascades remain stronger baselines.",
"Further, we introduce two methods to incorporate phone features into ST models.",
"We show that these features improve both architectures, closing the gap between end-to-end models and cascades, and outperforming previous academic work by up to 9 BLEU on our low-resource setting.",
"End-to-end models have become the common approach for speech translation (ST), but the performance gap between these models and a cascade of separately trained speech recognition (ASR) and machine translation (MT) remains, particularly in low-resource conditions.",
"Models for low-resource ASR leverage phone 1 information, but this information is not typically leveraged by current sequence-to-sequence ASR or speech translation models.",
"We propose two methods to incorporate phone features into current neural speech translation models.",
"We explore the existing performance gap between end-to-end and cascaded models, and show that incorporating phone features not only closes this gap, but greatly improves the performance and training efficiency of both model architectures, particularly in lower-resource conditions.Thesequences of speech features used as input for ST are 10 times longer than the equivalent sequence of characters in e.g. a text-based MT model.",
"This impacts memory usage, the number of model parameters, and 1 The term phone' refers to segments corresponding to a collection of fine-grained phonetic units, but which may separate allophonic variation: see Jurafsky and Martin (2000).",
"training time.",
"Multiple consecutive feature vectors can belong to the same phone, but the exact number depends on the phone and local context.",
"Further, these speech features are continuously valued rather than discrete, such that a given phone will have many different instantiations across a corpus.",
"Neural models learn to associate ranges of similarly valued feature vectors in a data-driven way, impacting performance in lower-resource conditions.",
"Using phoneme-level information provides explicit links about local and global similarities between speech features, allowing models to learn the task at hand more efficiently and yielding greater robustness to lower-resource conditions.",
"We propose two simple heuristics to integrate phoneme-level information into neural speech translation models: (1) as a more robust intermediate representation in a cascade; and (2) as a concatenated embedding factor.",
"We use the common Fisher SpanishEnglish dataset to compare with previous work, and simulate high-, mid-, and low-resource conditions to compare model performance across different data conditions.",
"We compare to recent work using phone segmentation for end-to-end speech translation (Salesky et al., 2019), and show that our methods outperform this model by up to 20 BLEU on our lowest-resource condition.",
"2 Further, our models outperform all previous academic work on this dataset, achieving similar performance trained on 20 hours as a baseline end-to-end model trained on the full 160 hour dataset.",
"Finally, we test model robustness by varying the quality of our phone features, which may indicate which models will better generalize across differently-resourced conditions.",
"3 2 Models with Phone Supervision We add higher-level phone features to low-level speech features to improve our models' robustness across data conditions and training efficiency.",
"We propose two methods to incorporate phone information into cascaded and end-to-end models, depicted in Figure 1.",
"Our phone cascade uses phone labels as the machine translation input, in place of the output transcription from a speech recognition model.",
"Our phone end-to-end model uses 2 4-reference BLEU scores are used for this dataset.",
"phone labels to augment source speech feature vectors in end-to-end models.",
"We call these end-to-end or di-rect' because they utilize a single model with access to the source speech features, though they additionally use phone features generated by an external model.",
"We additionally compare to a recent end-to-end model proposed by Salesky et al. (2019).",
"Model 1: Phone Cascade.",
"In a cascade, the intermediate representation between ASR and MT is the final output of a speech recognition model, e.g. characters, subwords, or words.",
"Using separate models for ASR and MT means that errors made in ASR are likely to propagate through MT. Common errors include substitution of phonetically similar words, or misspellings due to irregularities in a language's orthography, the latter of which may be addressed by using phone labels in place of ASR output.",
"By not committing to orthographic targets, we believe this model will propagate fewer errors to downstream MT. Model 2: Phone End-to-End.",
"Our final model uses phone-factored embeddings, where trainable embeddings for phone features are concatenated to typical speech feature vector input.",
"Because phone durations are variable and typically span more than one filterbank feature (or frame), adjacent filterbank features may have the predicted phone label; in the example shown in Figure 1, /R/ spans three frames or filterbank features.",
"We note that this method maintains the same source sequence length as the original speech feature sequence.",
"This method associates similar feature vectors at the corpus level, because all filterbank features with the same phone alignment (e.g. /OH/) will have the same trainable phone embedding concatenated.",
"In MT and NER, concatenating trainable embeddings for linguistic features to words, such as morphemes and phones, has improved models' ability to generalize (Sennrich and Haddow, 2016; Chaudhary et al., 2018).",
"While these works appended finer-grained information to associate words with similar lower-level structure, we use phone embeddings to associate higher-level structure to similar but unique speech feature vectors globally across a corpus.",
"Model 3: Phone Segmentation.",
"We compare to the method from Salesky et al. (2019) as a strong end-to-end baseline.",
"Here, phone boundaries are used to segment and compress speech feature vector sequences.",
"Within each utterance, the feature vectors of consecutive speech frames with the same phone label are averaged to produce one feature vector for translation from a variable number of frames.",
"This significantly reduces source sequence lengths (by 80%), reducing the number of model parameters and memory.",
"Rather than having a variable number of feature vectors per phone-like unit, each has one representation, more similar in granularity to character-based MT. The averaged feature vectors remain continuously-valued, and are locally summarized: a given phone across the corpus will still have different representations in each instance.",
"We use the Fisher Spanish-English corpus, 4 which consists of parallel speech, transcripts, and translations, enabling comparisons between cascaded and direct models on the same data and allowing us to generate phone supervision using matched data.",
"The dataset contains 160 hours of Spanish telephone speech, split into 138K utterances, which were translated via crowdsourcing by Post et al. (2013).",
"We use the standard dev and test sets, each with 4k utterances.",
"Because we are particularly interested in how our methods will affect training across differently-resourced conditions, we compare results using randomly selected 40 hour and 20 hour subsets of the data.",
"4 joshua.incubator.apache.org/data/fisher-callhome-corpus 4 Generating Phone Supervision To generate phoneme-level labels for sequences of speech features, we generate frame-level alignments using a trained speech recognizer.",
"Specifically, we extract 40-dimensional Mel filterbank features with per-speaker mean and variance normalization using Kaldi (Povey et al., 2011).",
"We train an HMM/GMM system on the full Fisher Spanish dataset with the Kaldi recipe (Povey et al., 2011), using the Spanish CALLHOME Lexicon (LDC96L16), and compute per-frame phone alignments with the triphone model (tri3a) with LDA+MLLT features.",
"This yields 50 phone labels, including silence ( < sil > ), noise, and laughter.",
"Producing phone alignments uses supervision from a transcript, which inherently does not exist at inference time.",
"While phones can be extracted from Kaldi lattices at inference time, we found that our HMM/GMM model was not our best performing ASR model on this dataset by greater than 10 WER.",
"To leverage our better-performing neural ASR models for phone generation, we create essentially a 2-pass' alignment procedure: first, generating a transcript, and second, using this transcript to force align phones.",
"Table 1 shows the mapping between phone quality and the ASR models used for phone feature generation.",
"This procedure enables us to both improve phone Alignment Quality WER ASR Supervision Gold Gold transcript High 23.2 Salesky et al. (2019) Med 30.4 Seq2Seq ASR Low 35.5 Kaldi HMM/GMM Table 1: Mapping between phone quality and the ASR models used for alignment generation, with the models' WER on Fisher Spanish test.",
"alignment quality and also match training and inference procedures for phone generation for our translation models.",
"In Section 8, we compare the impact of phone alignment quality on our translation models utilizing phone features, and show higher quality phone features can improve downstream results by > 10 BLEU.",
"Producing phone features in this way uses the same data (source speech and transcripts) as the ASR task in a cascade, and auxiliary ASR tasks from multi-task end-to-end models, but as we show, to far greater effect.",
"Further, auxiliary tasks as used in previous work rely on three-way parallel data, while it is possible to generate effective phoneme-level supervision using a recognizer trained on other corpora or languages (Salesky et al., 2019), though we do not do this here.",
"As in previous academic work on this corpus (Bansal et al., 2018; Sperber et al., 2019; Salesky et al., 2019), we use a sequence-to-sequence architecture inspired",
"by Weiss et al. (2017) modified to train within lower resources; specifically, each model converges within 5 days on one GPU.",
"We build encoder-decoder models with attention in xnmt (Neubig et al., 2018) with 512 hidden units.",
"Our pyramidal encoder uses 3-layer BiLSTMs with linear network-in-network (NiN) projections and batch normalization between layers (Sperber et al., 2019; Zhang et al., 2017).",
"The NiN projections are used to downsample by a factor of 2 between layers, resulting in the same total 4 downsampling in time as the additional convolutional layers from Weiss et al. (2017); Bansal et al. (2019): They give us the benefit of added depth with fewer additional parameters.",
"We use single layer MLP attention (Bahdanau et al., 2015) with 128 units and 1 decoder layer as opposed to 3 or 4 in previous work we did not see consistent benefits from additional depth.",
"In line with previous work on this dataset, all experiments preprocess target text by lowercasing and removing punctuation aside from apostrophes.",
"We use 40-dimensional Mel filterbank features as previous work did not see significant difference with higher-dimensional features (Salesky et al., 2019).",
"We use 1k BPE units for translation text, shown in Salesky et al. (2019) to have both better performance and training efficiency than characters (Weiss et al., 2017; Sperber et al., 2019) or words (Bansal et al., 2018).",
"For both text and phones, we use 64-dimensional embeddings.",
"For the MT component in cascaded speech translation models, we compared using the pyramidal speech architecture above (3 encoder, 1 decoder layers) to the traditional BiLSTM text model (2 layers each for encoder and decoder).",
"Using the pyramidal architecture resulted in the same performance as the BiLSTM model when translating BPE transcriptions from ASR, but gave us consistent improvements of up to 1.5 BLEU when instead translating phone sequences; we posit this is because phone sequences are longer than BPE equivalents.",
"Accordingly, we use the same model architecture for all our ASR, MT, and ST models.",
"We use layer dropout with p = 0 .",
"2 and target embedding dropout with p = 0 .",
"1 (Gal and Ghahramani, 2016).",
"We apply label smoothing with p = 0 .",
"1 (Szegedy et al., 2016) and fix the target embedding norm to 1 (Nguyen and Chiang, 2018).",
"For inference, we use beam of size 15 and length normalization with exponent 1.5.",
"We set the batch size dynamically depending on the input sequence length with average batch size was 36.",
"We use Adam (Kingma and Ba, 2015) with initial learning rate 0.0003, decayed by 0.5 when validation BLEU did not improve for 10 epochs initially and subsequently 5 epochs.",
"We do not use L2 weight decay or Gaussian noise, and use a single model replica.",
"We use input feeding (Luong et al., 2015), and exclude utterances longer than 1500 frames in training for memory.",
"The large body of research on the Fisher Spanish-English dataset, including both cascaded and end-to-end models, makes it a good benchmark to compare these architectures.",
"Not all previous work has compared across multiple resource settings or compared to cascaded models, which we address in this section.",
"We summarize best previous results on this dataset on high, medium, and low-resource conditions in Table",
"2. Best Results.",
"The cascade of traditional HMM/DNN ASR and Joshua MT models from Kumar et al. (2014) set a competitive baseline on the full dataset (40.4 test BLEU) which no subsequent academic models have been able to match until this work; subsequent exploration of end-to-end models has produced notable relative improvements but the best end-to-end academic number (Salesky et al., 2019) remains 1.6 BLEU behind this traditional cascade.",
"Industry models from Weiss et al. (2017) achieved exceptional performance with very deep end-to-end models on the full dataset (47.3 test BLEU), exceeding a cascade for the first time.",
"They additionally show results with an updated cascade using neural models, improving over Kumar et al. (2014).",
"Their results have been previously unmet by the rest of the community.",
"This is likely in part due to the computational resources required to fully explore training schedules and hyperparameters with models of their depth.",
"While their ASR models took 4 days to converge, their ST models took an-other 2 weeks, compared to the lighter-weight models of recent academic work which converged in < 5 days (Sperber et al., 2019; Salesky et al., 2019; Bansal et al., 2019).",
"This dataset is challenging: improving ASR WER from 35 (Post et al.) to 23 (Kumar et al.) only resulted in 4 BLEU ST improvement: see Components in Table",
"2. We believe this to be in part because the multi-reference scoring masks some model differences, and the conversational phenomena (like disfluencies) are challenging.",
"Lower-Resource.",
"While deep end-to-end models have become competitive at higher-resource conditions, previous work on this dataset has showed they are not as data-efficient as cascades under lower-resource conditions.",
"While some works have tested multiple resource conditions, only Sperber et al. (2019) compared against cascades across multiple conditions.",
"Their end-to-end baseline outperformed their cascades on the full dataset, but not under lower-resource conditions, while their end-to-end but multi-stage attention-passing model is more data-efficient than previous models and shows the best previous results under lower-resource condition.",
"Sperber et al. do not report results without auxiliary ASR, MT, and autoencoding tasks, which they state add up to 2 BLEU.",
"Additional Data.",
"Stoian et al. (2020); Bansal et al. (2019); Sperber et al. (2019) investigate speech translation performance using additional corpora through transfer learning from ASR and auxiliary MT tasks.",
"The ability to leverage non-parallel corpora was previously a strength of cascades and had not been explored with end-to-end models.",
"We do not use additional data here, but show these numbers as context for our results with phone supervision, and refer readers to Sperber et al. for discussion of cascaded and end-to-end models' capacity to make use of more data.",
"Parameter Tuning.",
"We find cascaded model performance can be impacted significantly by model settings such as beam size and choice of ASR target preprocessing.",
"While Weiss et al. (2017); Sperber et al. (2019) use character targets for ASR, we use BPE, which gave us an average increase of 2 BLEU.",
"Further, we note that search space in decoding has significant impact on cascaded model performance.",
"In cascaded models, errors produced by ASR can be unrecoverable, as the MT component has access only to ASR output.",
"While Sperber et al. (2019) use a beam of size 1 for the ASR component of their cascade to compare with their two-stage end-to-HIGH (160hr) MID (40hr) LOW (20hr) Components Model Source dev test dev test dev test ASR MT Cascaded Weiss et al. (2017) 45.1 45.5 23.2 57.9 Kumar et al. (2014) 40.4 25.3 62.9 Sperber et al. (2019) 32.5 16.8 6.6 40.9 58.1 End-to-End Weiss et al. (2017) 46.5 47.3 Salesky et al. (2019) 37.6 38.8 21.0 19.8 11.1 10.0 Sperber et al. (2019) 36.7 31.9 22.8 Stoian et al. (2020) 34.1 34.6 10.3 10.2 + Add'l Data Sperber et al. (2019) 38.8 Stoian et al. (2020) 37.9 37.8 20.1 20.2 Table 2: End-to-end vs cascaded speech translation model performance in BLEU on Fisher Spanish-English data from the literature.",
"end models, we find that using equal beam sizes of 15 for both ASR and MT improves cascaded performance with the same model by 4-8 BLEU; combining these two parameter changes makes the same cascaded model a much more competitive baseline (compare lines 3 in both Table 2 and Table 3).",
"In contrast, widening beam size to yield an equivalent search space for end-to-end models has diminishing returns after a certain point; we did not see further benefits with a larger beam ( > 15 ).",
"Our Baselines.",
"We report best numbers from previous work in Table 2 for comparison (which may use multi-task training), but use single-task models in our work.",
"We report our baseline results in Table",
"3. On the full dataset, our baseline cascade improves slightly over Kumar et al. (2014) with 41.0 compared to 40.4 on test, a mark most recent work has not matched primarily due to model choices noted above, with component ASR performance of WER 30.4 and 58.6 BLEU for MT. Our end-to-end baseline is comparable to the baselines in Salesky et al. (2019); Sperber et al. (2019); Stoian et al. (2020).",
"This suggests we have competitive baselines for both end-to-end and cascaded models.",
"We compare our two ways to leverage phone features to our cascaded and end-to-end baselines across three resource conditions.",
"Table 3 shows our results; following previous work, all BLEU scores are multi-reference.",
"Average single reference scores may be found in Appendix A. All models using phone supervision outperform the end-to-end baseline on all three resource conditions, while our proposed models also exceed the cascaded baseline and previous work at lower-resource conditions.",
"Phone features.",
"Salesky et al. (2019) performs most similarly to the end-to-end baseline, but nonetheless represents an average relative improvement of 13% across the three data sizes with a significant reduction in training time.",
"Our phone featured models use not just the phone segmentation, but also the phone labels, and perform significantly better.",
"Our phone end-to-end model not only shows less of a decrease in performance across Figure 2: Performance of all models relative to Baseline Cascade ' ( = 0 ) across our 3 resource conditions.",
"resource conditions than Salesky et al. (2019), but further improves by 4 BLEU over the baseline cascade on our two lower-resource conditions.",
"This suggests augmenting embeddings with discrete phone features is more effective than improved downsampling.",
"The phone cascade performs still better, with marked improvements across all conditions over all other models (see Figure 2).",
"On the full dataset, using phones as the source for MT in a cascade performs 2 BLEU better than using BPE, while at 40 and 20 hours this increases to up to 10 BLEU.",
"We analyze the robustness of phone models further in Section 8.",
"Hybrid cascade.",
"We additionally use a hybrid cas-cade' model to compare using phone features to improving ASR.",
"Our hybrid cascade uses an ASR model with phone-informed downsampling and BPE targets (Salesky et al., 2019).",
"This improves the WER of our ASR model to 28.1 on dev and 23.2 on test, matching Weiss et al. (2017)'s state-of-the-art on test (23.2) and approaching it on dev (25.7).",
"Our hybrid cascade performs more similarly to Weiss et",
"al.'s cascade on the full dataset, with 45.0 to their 45.5 on test, and is our best-performing ST model on the full dataset.",
"However, at lower-resource conditions, it does not perform as favor-HIGH (160hr) MID (40hr) LOW (20hr) Model dev test dev test dev test B a s e li n e Baseline End-to-End 32.4 33.7 19.5 17.4 9.8 9.8 Salesky et al. (2019) 37.6 38.8 +5.2 21.0 19.8 +2.0 11.1 10.0 +0.8 Baseline Cascade 39.7 41.0 +7.3 29.8 27.1 +10.0 22.6 20.2 +11.6 P r o po s e d Phone End-to-End 40.5 42.1 +8.3 34.5 33.0 +15.3 26.7 26.2 +16.7 Phone Cascade 41.6 43.3 +9.4 37.2 37.4 +18.9 32.2 31.5 +22.1 Hybrid Cascade 42.9 45.0 +10.9 33.3 31.2 +13.8 23.2 21.5 +12.6 Table 3: Results in BLEU comparing our proposed phone featured models to baselines.",
"ably compared to phone featured models as shown in Figure 2, both the phone cascade and phone end-to-end models outperform the hybrid cascade at lower-resource conditions, by up to 10 BLEU at 20 hours.",
"This suggests improving ASR may enable cascades to perform better at high-resource conditions, but under lower-resource conditions it is not as effective as utilizing phone features.",
"Training time.",
"In addition to performance improvements, our models with phone features are typically more efficient with respect to training time, shown in Table 4.",
"The fixed time to produce phone labels, which must be performed before translation, becomes a greater proportion of overall training time at lower-resource settings.",
"In particular, the phone end-to-end model offers similar training time reduction over the baseline to Salesky et al. (2019), where downsampling reduces sequence lengths by up to 60%, with unreduced sequence lengths through earlier convergence; this model offers a better trade-off between time and performance.",
"Previous work used the parallel speech transcripts in this dataset for auxiliary tasks with gains of up to 2 BLEU; we show using the same data to generate phone supervision is far more effective.",
"We note that our phone models further outperform previous work trained with additional corpora.",
"The attention-passing model of Sperber et al. (2019) trained on additional parallel Spanish-English text yields 38.8 on test on the full dataset, which Salesky et al. (2019) matches on the full dataset and our proposed models exceed, with the phone cascade yielding a similar result (37.4) trained on only 40 hours.",
"Pre-training with 300 hours of English ASR data and fine-tuning on 20 hours of Spanish-English data, Stoian et al. (2020); Bansal et al. (2019) improve their end-to-end models from 10 BLEU to 20.2.",
"All three of our proposed models exceed this mark trained on 20 hours of Fisher.",
"In this section, we analyze the robustness of each of our models by varying the quality of our phone features, and further explore the strengths and limitations of each model.",
"Phone cascades use a representation for translation which may be more robust to non-phonetic aspects of orthography.",
"However, as a cascaded model, this still requires hard decisions between ASR and MT, and so we may expect lower phone quality to lead to unrecoverable errors.",
"Figure 3 compares the impact of phone quality on the performance of phone cascades trained on our high, medium, and low-resource conditions.",
"We use alignments produced with gold transcripts as an upper bound on performance.",
"We note that with gold alignments, translation performance is similar to text-based translation (see Section 6).",
"We see that phone quality does have a significant impact on performance, with the MT model trained on low phone quality yielding similar translation performance using the full 160 hour dataset to the MT model with the highest quality phones trained on only 20 hours.",
"However, we also see significantly more data-efficiency with this model, with less reduction in performance between 160 hr 40 hr 20 hr training conditions than previous models.",
"Redundancy.",
"For the phone cascade models compared in Figure 3, we collapse adjacent consecutive phones with the same label, i.e. when three consecutive frames have been aligned to the same phone label B B B' we have reduced the sequence to a single phone B' for translation.",
"We additionally compared translating non-uniqued phone sequences (e.g. the same sequence length as the number of frames) as a more controlled proxy for our model's handling of longer frame-based feature vector sequences compared to Salesky et al. (2019)'s downsampled feature vector sequences.",
"The redundant phones caused consistent decreases in BLEU, with much greater impact in lower-resource conditions.",
"Translating the full sequence of redundant frame-level phone labels, for the full 160hr dataset, all models performed on average 0.6 BLEU worse; for 40hr, 1.8 BLEU worse; and with 20 hours, 4.1 BLEU worse a 13% decrease in performance solely from non-uniqued sequences .",
"Phones correspond to a variable-length number of speech frames depending on context, speaker, and other semantic information.",
"When translating speech feature vectors, speech features within a phone are similar but uniquely valued; using instead phone labels in a phone cascade, the labels are identical though still redundant.",
"These results suggest our LSTM-based models are better able to handle redundancy and variable phone length at higher resource conditions with sufficient examples, but are less able to handle redundancy with less training data.",
"Our phone end-to-end model concatenates trainable embeddings for phone labels to frame-level filterbank features, associating similar feature vectors globally across the corpus, as opposed to locally within an utterance as with the phone-averaged embeddings (Section 8.3).",
"Figure 4 compares the results of these factored models using phone features of differing qualities, with gold' alignments as an upper bound.",
"The phone end-to-end models compared do not reach the same upper performance as the phone cascades: comparing gold phone labels, the phone end-to-end model performs slightly worse at 160hr with more degradation in performance at 40hr and 20hr.",
"While this comparison is even more pronounced for low' phone quality than gold,' the phone end-to-end model has more similar performance between gold' and high' phone quality than the cascade.",
"This model's input contains both the phone features used in the phone cascade and speech features of the baseline end-to-end model, but unlike the phone cas-Figure 4: Phone End-to-End Robustness : trainable embeddings for phone labels are concatenated to frame-level filterbank features.",
"Comparing performance across three data conditions and phone label qualities.",
"cade or Salesky et al. (2019) the input sequence has not been reduced in length.",
"That the end-to-end phone model achieves top performance and converges much faster than end-to-end baseline is unsurprising, as access to both speech feature vectors and phone labels mitigates the effects of long noisy input sequences.",
"The significant performance improvements over Salesky et al. (2019), however, are more interesting, as these models make use of the similar information in different ways the use of discrete embeddings seems to aid the phone end-to-end model, though the sequence length is not reduced.",
"The model's performance degradation compared to the phone cascade in lower-resource conditions is likely due in part to these sequence lengths, as shown by our additional experiments with input redundancy for the cascade.",
"The greater reduction in performance here using lower quality phones suggests the noise of the labels and concatenated filterbank features compound, further detracting from performance.",
"Perhaps further investigation into the relative weights placed on the two embedding factors over the training process could close this additional gap.",
"We also compare to the models from Salesky et al. (2019) as a strong end-to-end baseline.",
"That work introduced downsampling informed by phone segmentation unlike our other models, the value of the phone label is not used, but rather, phone alignments are used only to determine the boundary between adjacent phones for variable-length downsampling.",
"Their model provides considerable training and decoding time improvements due to the reduced source sequence length, and shows consistent improvements over the baseline end-to-end model using the original filterbank feature sequences which increase with the amount of training data.",
"However, their model has lower overall performance and with much smaller performance improvements over our baselines in lower-resource conditions than the phone featured models we propose here.",
"We hypothesize that the primary reason for their BLEU improvements is the reduction in local redundancy between similar frames, as discovered in the previous section.",
"We refer readers to their paper for further analysis.",
"We show two examples of phone sequences produced with each overall model quality in Figure 5, uniqued within consecutive frame sequences with the same label for space constraints.",
"Individual phones are typically 5-20 frames.",
"We see the primary difference in produced phones between different models is the label values, rather than the boundaries.",
"While we do see some cases where the boundaries shift, they chiefly vary by only 1-3 frames.",
"It is not the case that there are significantly more or fewer phone segments aligned per utterance by quality, though there are outlying utterances (Example 2 Low').",
"Relating our observed trends to the differences between our phone cascades and phone end-to-end models, we note that differences in frame-level phone boundaries would not affect our phone cascaded models, where the speech features are discarded, while they would affect our phone end-to-end models, where the phone labels are concatenated to speech feature vectors and associate them across the corpus.",
"While errors in phone labels may be seen as unrecoverable' in a cascade, for the end-to-end model, they add noise to distribution of filterbank feature associated with each phone label embedding, which appears to have a more negative impact on performance than the hard decisions in cascades.",
"Though the concatenated filterbank features may allow our end-to-end models to recover from discrete label errors, our results testing various phone qualities suggest this may only be the case under higher-resource settings with sufficient examples.",
"Speech translation was initially performed by cascading separately trained ASR and MT models, allowing each model to be trained on larger data sources without parallel speech, transcriptions, and translations, but potentially yielding unrecoverable errors between models.",
"Linking models through lattices with both phrase-based (Kumar et al., 2014) and neural MT (Sperber et al., 2017) reduced many such errors.",
"Using one model to directly translate speech was later enabled by attentional encoder-decoder models.",
"Direct end-to-end speech translation was first explored as a way to reduce both error propagation, and also the need for high quality intermediate transcriptions (e.g. for unwritten languages).",
"The first such models were investigated in Berard et al. (2016); Duong et al. (2016), but these used, respectively, a small synthetic corpus and evaluated on speech-to-text alignments rather than translation.",
"Subsequently Weiss et al. (2017) extended these neural attentional models to deep, multitask models with excellent results on Fisher Spanish English, exceeding a cascade for the first time.",
"However, efforts from the community have not yet replicated their success (Stoian et al., 2020; Sperber et al., 2019; Salesky et al., 2019).",
"End-to-end models have performed inconsistently compared to cascades on other corpora: Berard et al. (2018) perform well on high-resource audiobooks but do not exceed a cascade; Anastasopoulos and Chiang (2018) found triangle' models performed better than cascades for 2 of 3 very low-resource language pairs; and in the most recent IWSLT evaluation campaigns, cascades have remained the highest-performing systems (Niehues et al., 2018, 2019).",
"Similarly-motivated work exists in speech translation.",
"In addition to Salesky et al. (2019); Sperber et al. (2019) addressed above, preliminary cascades using phone-like units have been explored for low-resource speech translation, motivated by translation of unwritten languages where a traditional cascade would not be possible.",
"To this end, Bansal et al. (2018) utilized unsupervised term discovery, and Wilkinson et al. (2016) synthesized speech; but these approaches were only evaluated in terms of precision and recall and were not tested on both higher-resource' and natural speech data conditions.",
"We show that phone features significantly improve the performance and data efficiency of neural speech translation models.",
"We study the existing performance gap between cascaded and end-to-end models, and introduce two methods to use phoneme-level features in both architectures.",
"Our improvements hold across high, medium, and low-resource conditions.",
"Our greatest improvements are seen in our lowest-resource settings (20 hours) , where our end-to-end model outperforms a strong baseline cascade by 5 BLEU, and our cascade outperforms prior work by 9 BLEU.",
"Generating phone features uses the same data as auxiliary speech recognition tasks from prior work; our experiments suggest these features are a more effective use of this data, with our models matching the performance from previous works' performance without additional training data.",
"We hope that these model comparisons and results inform development of more robust end-to-end models, and provide a stronger benchmark for performance on low-resource settings.",
"The authors thank Andrew Runge, Carlos Aguirre, Carol Edwards, Eleanor Chodroff, Florian's cluster, Huda Khayrallah, Matthew Wiesner, Nikolai Vogler, Rachel Wicks, Ryan Cotterell, and the anonymous reviewers for helpful feedback and resources."
] | [
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"result",
"abstain",
"result",
"objective",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"result",
"method",
"abstain",
"result",
"method",
"result",
"other"
] |
[
"While there is a large amount of research in the field of Lexical Semantic Change Detection, only few approaches go beyond a standard benchmark evaluation of existing models.",
"In this paper, we propose a shift of focus from change detection to change discovery, i.e., discovering novel word senses over time from the full corpus vocabulary.",
"By heavily fine-tuning a type-based and a token-based approach on recently published German data, we demonstrate that both models can successfully be applied to discover new words undergoing meaning change.",
"Furthermore, we provide an almost fully automated framework for both evaluation and discovery.",
"There has been considerable progress in Lexical Semantic Change Detection (LSCD) in recent years (Kutuzov et al., 2018; Tahmasebi et al., 2018; Hengchen et al., 2021), with milestones such as the first approaches using neural language models (Kim et al., 2014; Kulkarni et al., 2015), the introduction of Orthogonal Procrustes alignment (Kulkarni et al., 2015; Hamilton et al., 2016), detecting sources of noise (Dubossarsky et al., 2017, 2019), the formulation of continuous models (Fr-ermann and Lapata, 2016; Rosenfeld and Erk, 2018; Tsakalidis and Liakata, 2020), the first uses of contextualized embeddings (Hu et al., 2019; Giulianelli et al., 2020), the development of solid annotation and evaluation frameworks (Schlechtweg et al., 2018, 2019; Shoemark et al., 2019) and shared tasks (Basile et al., 2020; Schlechtweg et al., 2020).",
"However, only a very limited amount of work applies the methods to discover novel instances of semantic change and to evaluate the usefulness of such discovered senses for external fields.",
"That is, the majority of research focuses on the introduction of novel LSCD models, and on analyzing and evaluating existing models.",
"Up to now, these preferences for development and analysis vs. application represented a well-motivated choice, because the quality of state-of-the-art models had not been established yet, and because no tuning and testing data were available.",
"But with recent advances in evaluation (Basile et al., 2020; Schlechtweg et al., 2020; Kutuzov and Pivovarova, 2021), the field now owns standard corpora and tuning data for different languages.",
"Furthermore, we have gained experience regarding the interaction of model parameters and modelling task (such as binary vs. graded semantic change).",
"This enables the field to more confidently apply models to discover previously unknown semantic changes.",
"Such discoveries may be useful in a range of fields (Hengchen et al., 2019; Jatowt et al., 2021), among which historical semantics and lexicography represent obvious choices (Ljubesic, 2020).",
"In this paper, we tune the most successful models from SemEval-2020 Task 1 (Schlechtweg et al., 2020) on the German task data set in order to obtain high-quality discovery predictions for novel semantic changes.",
"We validate the model predictions in a standardized human annotation procedure and visualize the annotations in an intuitive way supporting further analysis of the semantic structure relating word usages.",
"In this way, we automatically detect previously described semantic changes and at the same time discover novel instances of semantic change which had not been indexed in standard historical dictionaries before.",
"Our approach is largely automated, by relying on unsupervized language models and a publicly available annotation system requiring only a small set of judgments from annotators.",
"We further evaluate the usability of the approach from a lexicographer's viewpoint and show how intuitive visualizations of human-annotated data can benefit dictionary makers.",
"State-of-the-art semantic change detection models are Vector Space Models (VSMs) (Schlechtweg et al., 2020).",
"These can be divided into type-based (static) (Turney and Pantel, 2010) and token-based (contextualized) (Schutze, 1998) approaches.",
"For our study, we use both a static and a contextualized model.",
"As mentioned above, previous work mostly focuses on creating data sets or developing, evaluating and analyzing models.",
"A common approach for evaluation is to annotate target words selected from dictionaries in specific corpora (Tahmasebi and Risse, 2017; Schlechtweg et al., 2018; Perrone et al., 2019; Basile et al., 2020; Rodina and Kutuzov, 2020; Schlechtweg et al., 2020).",
"Contrary to this, our goal is to find undiscovered' changing words and validate the predictions of our models by human annotators.",
"Few studies focus on this task.",
"Kim et al. (2014), Hamilton et al. (2016), Basile et al. (2016), Basile and Mcgillivray (2018), Takamura et al. (2017) and Tsakalidis et al. (2019) evaluate their approaches by validating the top ranked words through author intuitions or known historical data.",
"The only approaches applying a systematic annotation process are Gulordava and Baroni (2011) and Cook et al. (2013).",
"Gulordava and Baroni ask human annotators to rate 100 randomly sampled words on a 4-point scale from 0 (no change) to 3 (changed significantly), however without relating this to a data set.",
"Cook et al. work closely with a professional lexicographer to inspect 20 lemmas predicted by their models plus 10 randomly selected ones.",
"Gulordava and Baroni and Cook et al. evaluate their predictions on the (macro) lemma level.",
"We, however, annotate our predictions on the (micro) usage level, enabling us to better control the criteria for annotation and their inter-subjectivity.",
"In this way, we are also able to build clusters of usages with the same sense and to visualise the annotated data in an intuitive way.",
"The annotation process is designed to not only improve the quality of the annotations, but also lessen the burden on the annotators.",
"We additionally seek the opinion of a professional lexicographer to assess the usefulness of the predictions outside the field of LSCD.",
"In contrast to previous work, we obtain model predictions by fine-tuning static and contextualized embeddings on high-quality data sets (Schlechtweg et al., 2020) that were not available before.",
"We provide a highly automated general framework for evaluating models and predicting changing words on all kinds of corpora.",
"We use the German data set provided by the SemEval-2020 shared task (Schlechtweg et al., 2020, 2021).",
"The data set contains a diachronic corpus pair for two time periods to be compared, a set of carefully selected target words as well as binary and graded gold data for semantic change evaluation and fine-tuning purposes.",
"Corpora The DTA corpus (Deutsches Textarchiv, 2017) and a combination of the BZ (Berliner Zeitung, 2018) and ND (Neues Deutschland, 2018) corpora are used.",
"DTA contains texts from different genres spanning the 16th20th centuries.",
"BZ and ND are newspaper corpora jointly spanning 19451993.",
"Schlechtweg et al. (2020) extract two time specific corpora C 1 (DTA, 18001899) and C 2 (BZ+ND 19461990) and provide raw and lemmatized versions.",
"Target Words A list of 48 target words, consisting of 32 nouns, 14 verbs and 2 adjectives is provided.",
"These are controlled for word frequency to minimize model biases that may lead to artifi-cially high performance (Dubossarsky et al., 2017; Schlechtweg and Schulte im Walde, 2020).",
"Type-based models generate a single vector for each word from a pre-defined vocabulary.",
"In contrast, token-based models generate one vector for each usage of a word.",
"While the former do not take into account that most words have multiple senses, the latter are able to capture this particular aspect and are thus presumably more suited for the task of LSCD (Martinc et al., 2020).",
"Even though contextualized approaches have indeed significantly outperformed static approaches in several NLP tasks over the past years (Ethayarajh, 2019), the field of LSCD is still dominated by type-based models (Schlechtweg et al., 2020).",
"Kutuzov and Giulianelli (2020) yet show that the performance of token-based models (especially ELMo) can be increased by fine-tuning on the target corpora.",
"Laicher et al. (2020, 2021) drastically improve the performance of BERT by reducing the influence of target word morphology.",
"In this paper, we compare both families of approaches for change discovery.",
"4.1 Type-based approach Most type-based approaches in LSCD combine three sub-systems:",
"(i) creating semantic word representations,",
"(ii) aligning them across corpora, and",
"(iii) measuring differences between the aligned representations (Schlechtweg et al., 2019).",
"Motivated by its wide usage and high performance among participants in SemEval-2020 (Schlechtweg et al., 2020) and DIACR-Ita (Basile et al., 2020), we use the Skip-gram with Negative Sampling model (SGNS, Mikolov et al., 2013a,b) to create static word embeddings.",
"SGNS is a shallow neural language model trained on pairs of word co-occurrences extracted from a corpus with a symmetric window.",
"The optimized parameters can be interpreted as a semantic vector space that contains the word vectors for all words in the vocabulary.",
"In our case, we obtain two separately trained vector spaces, one for each subcorpus ( C 1 and C 2 ).",
"Following standard practice, both spaces are length-normalized, mean-centered (Artetxe et al., 2016; Schlechtweg et al., 2019) and then aligned by applying Orthogonal Procrustes (OP), because columns from different vector spaces may not correspond to the same coordinate axes (Hamilton et al., 2016).",
"The change between two time-specific embeddings is measured by calculating their Cosine Distance (CD) (Salton and McGill, 1983).",
"The strength of SGNS+OP+CD has been shown in two recent shared tasks with this sub-system combination ranking among the best submissions (Arefyev and Zhikov, 2020; Kaiser et al., 2020b; Pomsl and Lyapin, 2020; Prazak et al., 2020).",
"Bidirectional Encoder Representations from Transformers (BERT, Devlin et al., 2019) is a transformer-based neural language model designed to find contextualized representations for text by analyzing left and right contexts.",
"The base version processes text in 12 different layers.",
"In each layer, a contextualized token vector representation is created for every word.",
"A layer, or a combination of multiple layers (we use the average), then serves as a representation for a token.",
"For every target word we extract usages (i.e., sentences in which the word appears) by randomly sub-sampling up to 100 sentences from both subcorpora C 1 and C 2 .",
"1 These are then fed into BERT to create contex-1 We sub-sample as some words appear in 10,000 or more sentences.",
"tualized embeddings, resulting in two sets of up to 100 contextualized vectors for both time periods.",
"To measure the change between these sets we use two different approaches:",
"(i) We calculate the Average Pairwise Distance (APD).",
"The idea is to randomly pick a number of vectors from both sets and measure their mutual distances (Schlechtweg et al., 2018; Kutuzov and Giulianelli, 2020).",
"The change score corresponds to the mean average distance of all comparisons.",
"(ii) We average both vector sets and measure the Cosine Distance (COS) between the two resulting mean vectors (Kutuzov and Giulianelli, 2020).",
"SemEval-2020 Task 1 consists of two subtasks:",
"(i) binary classification: for a set of target words, decide whether (or not) the words lost or gained sense(s) between C 1 and C 2 , and",
"(ii) graded ranking: rank a set of target words according to their degree of LSC between C 1 and C 2 .",
"These require to detect semantic change in a small pre-selected set of target words.",
"Instead, we are interested in the discovery of changing words from the full vocabulary of the corpus.",
"We define the task of lexical semantic change discovery as follows.",
"Given a diachronic corpus pair C 1 and C 2 , decide for the intersection of their vocabularies which words lost or gained sense(s) between C 1 and C 2 .",
"This task can also be seen as a special case of Se-mEval's Subtask 1 where the target words equal the intersection of the corpus vocabularies.",
"Note, however, that discovery introduces additional diffi-culties for models, e.g. because a large number of predictions is required and the target words are not preselected, balanced or cleaned.",
"Yet, discovery is an important task, with applications such as lexicography where dictionary makers aim to cover the full vocabulary of a language.",
"We start the discovery process by generating optimized graded value predictions using high-performing parameter configurations following previous work and fine-tuning.",
"Afterwards, we infer binary scores with a thresholding technique (see below).",
"We then tune the threshold to find the best-performing typeand token-based approach (cid:120) 4: Identical 3: Closely Related 2: Distantly Related 1: Unrelated Table 1: DURel relatedness scale (Schlechtweg et al., 2018).",
"for binary classification.",
"These are used to generate two sets of predictions.",
"2 Evaluation metrics We evaluate the graded rankings in Subtask 2 by computing Spearman's rank-order correlation coefficient .",
"For the binary classification subtask we compute precision, recall and F 0 .",
"5 .",
"The latter puts a stronger focus on precision than recall because our human evaluation cannot be automated, so we decided to weigh quality (precision) higher than quantity (recall).",
"Parameter tuning Solving Subtask 2 is straightforward, since both the type-based and token-based approaches output distances between representations for C 1 and C 2 for every target word.",
"Like many approaches in SemEval-2020 Task 1 and DIACR-Ita we use thresholding to binarise these values.",
"The idea is to define a threshold parameter, where all ranked words with a distance greater or equal to this threshold are labeled as changing words.",
"For cases where no tuning data is available, Kaiser et al. (2020b) propose to choose the threshold according to the population of CDs of all words in the corpus.",
"Kaiser et al. set the threshold to + , where is the mean and is the standard deviation of the population.",
"We slightly modify this approach by changing the threshold to + t .",
"In this way, we introduce an additional parameter t , which we tune on the SemEval-2020 test data.",
"We test different values ranging from 2 to 2 in steps of 0 .",
"1 .",
"Population Since SGNS generates type-based vectors for every word in the vocabulary, measuring the distances for the full vocabulary comes with low additional computational effort.",
"Unfortunately, this is much more difficult for BERT.",
"Creating up to 100 vectors for every word in the vocabulary drastically increases the computational burden.",
"We choose a population of 500 words for our work allowing us 2 Find the code used for each step of the prediction process at https://github.com/seinan9/ LSCDiscovery .",
"to test multiple parameter configurations.",
"3 We sample words from different frequency areas to have predictions not only for low-frequency words.",
"For this, we first compute the frequency range (highest frequency lowest frequency) of the vocabulary.",
"This range is then split into 5 areas of equal frequency width.",
"Random samples from these areas are taken based on how many words they contain.",
"For example: if the lowest frequency area contains 50% of all words from the vocabulary, then 0 .",
"5 500 = 250 random samples are taken from this area.",
"The SemEval-2020 target words are excluded from this sampling process.",
"The resulting population is used to create predictions for both models.",
"Filtering The predictions contain proper names, foreign language and lemmatization errors, which we aim to filter out, as such cases are usually not considered as semantic changes.",
"We only allow nouns, verbs and adjectives to pass.",
"Words where over 10% of the usages are either non-German or contain more than 25% punctuation are filtered out as well.",
"The model predictions are validated by human annotation.",
"For this, we apply the SemEval-2020 Task 1 procedure, as described in Schlechtweg et al. (2020).",
"Annotators are asked to judge the semantic relatedness of pairs of word usages, such as the two usages of Aufkommen in (1) and (2), on the scale in Table",
"1. (1) Es ist richtig, dass mit dem Aufkommen der Manufaktur im Unterschied zum Handwerk sich Spuren der Kinderexploitation zeigen.",
"It is true that with the emergence of the manufactory, in contrast to the handicraft, traces of child labor are showing.' (2) Sie wissen, da wir fur das Vieh mehr Futter aus eigenem Aufkommen brauchen.",
"They know that we need more feed from our own production for the cattle.' The annotated data of a word is represented in a Word Usage Graph (WUG), where vertices represent word usages, and weights on edges represent 3 In a practical setting where predictions have to be generated only once, a much larger number may be chosen.",
"Also, possibilities to scale up BERT performance can be applied (Montariol et al., 2021).",
"the (median) semantic relatedness judgment of a pair of usages such as (1) and (2).",
"The final WUGs are clustered with a variation of correlation clustering (Bansal et al., 2004; Schlechtweg et al., 2020) (see Figure 1, left) and split into two subgraphs representing nodes from subcorpora C 1 and C 2 , respectively (middle and right).",
"Clusters are then interpreted as word senses and changes in clusters over time as lexical semantic change.",
"In contrast to Schlechtweg et al. we use the openly available DURel interface for annotation and visualization.",
"4 This also implies a change in sampling procedure, as the system currently implements only random sampling of use pairs (without SemEval-style optimization).",
"For each target word we sample | U 1 | = | U 2 | = 25 usages (sentences) per subcorpus ( C 1 , C 2 ) and upload these to the DURel system, which presents use pairs to annotators in randomized order.",
"We recruit eight German native speakers with university level education as annotators.",
"Five have a background in linguistics, two in German studies, and one has an additional professional background in lexicography.",
"Similar to Schlechtweg et al., we ensure the robustness of the obtained clusterings by continuing the annotation of a target word until all multi-clusters (clusters with more than one usage) in its WUG are connected by at least one judgment.",
"We fi-nally label a target word as changed (binary) if it gained or lost a cluster over time.",
"For instance, Aufkommen in Figure 1 is labeled as change as it gains the orange cluster from C 1 to C 2 .",
"Following Schlechtweg et al. (2020) we use k and n as lower frequency thresholds to avoid that small random fluctuations in sense frequencies caused by sampling variability or annotation error be misclas-4 https://www.ims.uni-stuttgart.de/ data/durel-tool .",
"sified as change.",
"As proposed in Schlechtweg and Schulte im Walde (submitted) for comparability across sample sizes we set k = 1 0 .",
"01 | U i | 3 and n = 3 0 .",
"1 | U i | 5 , where | U i | is the number of usages from the respective time period (after removing incomprehensible usages from the graphs).",
"This results in k = 1 and n = 3 for all target words.",
"Find an overview over the final set of WUGs in Table",
"2. We reach a comparably high inter-annotator agreement (Krippendorf's = . 58 ).",
"5 7 Results We now describe the results of the tuning and discovery procedures.",
"SGNS is commonly used (Schlechtweg et al., 2020) and also highly optimized (Kaiser et al., 2020a,b, 2021), so it is difficult to further increase the performance.",
"We thus rely on the work of Kaiser et al. (2020a) and test their parameter configurations on the German SemEval-2020 data set.",
"6 We obtain three slightly different parameter configurations (see Table 3 for more details), yielding competitive = .",
"690 , = .",
"710 and = .",
"710 , respectively.",
"In order to improve the performance of BERT, we test different layer combinations, pre-processings and semantic change measures.",
"Following Laicher et al. (2020, 2021), we are able to drastically increase the performance of BERT 5 We provide WUGs as Python NetworkX graphs, descriptive statistics, inferred clusterings, change values and interactive visualizations for all target words and the respective code at https://www.ims.uni-stuttgart.de/ data/wugs .",
"6 All configurations use w = 10 , d = 300 , e = 5 and a minimum frequency count of 39 .",
"on the German SemEval-2020 data.",
"In a preprocessing step, we replace the target word in every usage by its lemma.",
"In combination with layer 12+1, both APD and COS perform competitively well on Subtask 2 ( = . 690 and = . 738 ).",
"After applying thresholding as described in Section 5 we obtain F 0 .",
"5 -scores for a large range of thresholds.",
"SGNS achieves peak F 0 .",
"5 -scores of .",
"692 , .",
"738 and .",
"685 , respectively (see Table 3).",
"Interestingly, the optimal threshold is at t = 1 .",
"0 in all three cases.",
"This corresponds to the threshold used in Kaiser et al. (2020b).",
"While the peak F 0 .",
"5 of BERT+APD is marginally worse ( . 598 at t = 0 . 2 ), BERT+COS is able to outperform the best SGNS configuration with a peak of .",
"741 at t = 0 .",
"1 .",
"In order to obtain an estimate on the sampling variability that is caused by sampling only up to 100 usages per word for BERT+APD and BERT+COS (see Section 4.2), we repeat the whole procedure 9 times and estimate mean and standard deviation of performance on the tuning data.",
"In the beginning of every run the usages are randomly sampled from the corpora.",
"We observe a mean of .",
"657 for BERT+APD and .",
"743 for BERT+COS with a standard deviation of .",
"015 and .",
"012 , respectively, as well as a mean F 0 .",
"5 of .",
"576 for BERT+APD and .",
"684 for BERT+COS with a standard deviation of .",
"013 and .",
"038 , respectively.",
"This shows that the variability caused by sub-sampling word usages is negligible.",
"We use the top-performing configurations (see Table 3) to generate two sets of large-scale predictions.",
"While we use the lemmatized corpora for SGNS, in BERT's case we choose the raw corpora with lemmatized target words instead.",
"The latter choice is motivated by the previously described performance increases.",
"After the filtering as described in Section 6, we obtain 27 and 75 words labeled as changing, respectively.",
"We further sample 30 targets from the second set of predictions to obtain a feasible number for annotation.",
"We call the first set SGNS targets and the second one BERT targets, with an overlap of 7 targets.",
"Additionally, we randomly sample 30 words from the population (with an overlap of 5 with the SGNS and BERT targets) in order to have an indication of what the change distribution underlying the corpora is.",
"We call these baseline (BL) targets.",
"This baseline will help us to put the results of the predictions in context and to find out whether the predictions of the two models can be explained by pure randomness.",
"Following the annotation process, binary gold data is generated for all three target sets, in order to validate the quality of the predictions.",
"The evaluation of the predictions is presented in Table",
"3. We achieve a F 0 .",
"5 -score of .",
"714 for SGNS and .",
"620 for BERT.",
"Out of the 27 words predicted by the SGNS model, 18 (67 %) were actually labeled as changing words by the human annotators.",
"In comparison, only 17 out of the 30 (57 %) BERT predictions were annotated as such.",
"The performance of SGNS for prediction (SGNS targets) is even higher than on the tuning data (SemEval targets).",
"In contrast, BERT's performance for prediction drops strongly in comparison to the performance on the tuning data ( . 741 vs. . 620 ).",
"This reproduces previous results and con-firms that (off-the-shelf) BERT generalises poorly for LSCD and does not transfer well between data sets (Laicher et al., 2020).",
"If we compare these results to the baseline, we can see that both models perform much better than the random baseline (F 0 . 5 of . 349 ).",
"Only 10 out of the 30 (30 %) randomly sampled words are annotated as changing.",
"This indicates, that the performance of SGNS and BERT is likely not a cause of randomness.",
"Both models considerably increase the chance of finding changing words compared to a random model.",
"Figure 2 shows the detailed F 0 .",
"5 developments parameters t tuning predictions F 0 .",
"across different thresholds on the SemEval targets and the predicted words.",
"Increasing the threshold on the predicted words improves the F 0 .",
"5 for both the type-based and token-based approach.",
"A new high-score of .",
"783 at t = 1 .",
"3 is achievable for SGNS.",
"While BERT's performance also increases to a peak of .",
"714 at t = 1 .",
"0 , it is still lower than in the tuning phase.",
"For further insights into sources of errors, we take a close look at the false positives, their WUGs and the underlying usages.",
"Most of the wrong predictions can be grouped into one out of two error sources (cf. Kutuzov, 2020, pp. 175182).",
"Context change The first category includes words where the context in the usages shifts between time periods, while the meaning stays the same.",
"The WUG of Angriffswaffe (offensive weapon') (see Figure 5 in Appendix A) shows a single cluster for both C 1 and C 2 .",
"In the first time period Angriffswaffe is used to refer to a hand weapon (such as sword', spear').",
"In the second period, however, the context changes to nuclear weaponry.",
"We can see a clear contextual shift, while the meaning did not change.",
"In this case both models are tricked by the change of context.",
"Further false positives in this category are the SGNS targets Achtung (ostracism') and aussterben (to die out') and the COS targets K onigreich (kingdom') and Waffen-ruhe (ceasefire').",
"Context variety Words that can be used in a large variety of contexts form the second group of false positives.",
"SGNS falsely predicts neunjahrig as a changing word.",
"We take a closer look at its WUG (see Figure 6 in Appendix A).",
"We observe that there is only one and the same cluster in both time periods, and the meaning of the target does not change, even though a large variety of contexts exists in both C 1 and C 2 .",
"For example: which bears oats at nine years fertilization', courageously, a nine-year-old Spaniard did something' and after nine years of work'.",
"Both models are misguided by this large context variety.",
"Examples include the SGNS targets neunjahrig (9-year-old') and vorjahrig (of the previous year') and the COS targets bemerken (to notice') and durchdenken (to think through').",
"We now evaluate the usefulness of the proposed semantic change discovery procedure including the annotation system and WUG visualization from a lexicographer's viewpoint.",
"The advantage of our approach lies in providing lexicographers and dictionary makers the choice to take a look into predictions they consider promising with respect to their research objective (disambiguation of word senses, detection of novel senses, detection of archaisms, describing senses in regard to specific discourses etc.) and the type of dictionary.",
"Visualized predictions for target words may be analyzed in regard to single senses, clusters of senses, the semantic proximity of sense clusters and a stylized representation of frequency.",
"Random sampling of usages also offers the opportunity to judge underrepresented senses in a sample that might be infrequent in a corpus or during a specific period of time (although currently a high number of overall annotations would be required in order to do so).",
"Most importantly, the use of a variable number of human annotators has the potential to ensure a more objective analysis of large amounts of corpus Figure 2: F 0 .",
"data.",
"In order to evaluate the potential of the approach for assisting lexicographers with extending dictionaries, we analyze statistical measures and predictions of the models provided for the two sets of predictions (SGNS, BERT) and compare them to existing dictionary contents.",
"We consider overall inter-annotator agreement ( > = . 5 ) and annotated binary change label to select 21 target words for lexicographical analysis.",
"In this way, we exclude unclear cases and non-changing words.",
"The target words are analyzed by inspecting cluster visualizations of WUGs (such as in Figure 1) and comparing them to entries in general and specialized dictionaries in order to determine: whether a candidate novel sense is already included in one of the reference dictionaries, whether a candidate novel sense is included in one of the two reference dictionaries that are consulted for C 1 (covering the period between 18001899) and C 2 (covering the period between 19461990), indicating the rise of a novel sense, the archaization of older senses or a change in frequency.",
"Three dictionaries are consulted throughout the analysis:",
"(i) the Dictionary of the German language (DWB) by Jacob und Wilhelm Grimm (digitized version of the 1st print published between 1854 1961),",
"(ii) the Dictionary of Contemporary German (WGD), published between 19641977, now curated and digitized by the DWDS and",
"(iii) the Duden online dictionary of German language (DU-DEN), reflecting usage of Contemporary German up until today.",
"7 Additionally, lemma entries in the Wiktionary online dictionary (Wiktionary) are consulted to verify genuinely novel senses described in Section 8.1.",
"In the case of 17 target words, all senses identified by the system are included in at least one of the three dictionaries consulted for the analysis.",
"In the four remaining cases, at least one novel sense of a word is neither paraphrased nor given as an example of semantically related senses in the dictionaries: einbinden Reference to the integration or embedding of details on a topic, event, person in respect to a chronological order within written text or visual presentation (e.g. for an exhibition on an author) is judged as a novel sense in close semantic proximity to the old sense to bind sth. into sth.', e.g. flowers into a bundle of flowers.",
"einbinden is also used in technical contexts, meaning to (physically) implement parts of a construction or machine into their intended slots'.",
"niederschlagen In cases where the verb niederschlagen co-occurs with the verb particle auf and the noun Fl ugel , the verb refers to a bird's action of repeatedly moving its wings up and down in order to fly.",
"regelrecht Used as an adverb, regelrecht may refer to something being the usual outcome that ought 7 Only the fully-digitized version of the DWB's first print was consulted for this evaluation, since a revised version has not been completed yet and is only available for lemmas starting with letters af.",
"to be expected due to scientific principles, with an emphasis on the actual result of an action (such as the dyeing of fiber of a piece of clothing following the bleaching process), whereas senses included in dictionaries for general language emphasize either the intended accordance with a rule or something usually happening (the latter being colloquial use).",
"Zehner (see Figure 3 in Appendix A) The meaning a winning sequence of numbers in the national lottery', predicted to have risen as a novel sense between C 1 and C 2 , is not included in any of the reference dictionaries.",
"In most of these cases, senses identified as novel reflect metaphoric use, indicating that definitions in existing dictionary entries may need to be broadened, or example sentences would have to be added.",
"Some of the senses described in this section might be included in specialized dictionaries, e.g. technical usage of einbinden .",
"For 12 target words, semantic change predicted by the models (innovative, reductive or a salient change of frequency of a sense) correlates with the addition or non-inclusion of senses in dictionary entries consulted for the respective period of time (DWB for C 1 , WGD for C 2 ).",
"It should be noted though, that lemma lists of the two dictionaries might be lacking lemmas in the headword list, and lemma entries might be lacking paraphrases or examples of senses of the lemma, simply because corpus-based lexicography was not available at the time of their first print and revisions of the dictionaries are currently work in progress.",
"Additionally, we consult a dictionary for Early New High German (FHD) in order to check whether discovered novel senses existed at an earlier stage and may be discovered due to low frequency or sampling error.",
"In two cases, discovered novel senses that are not included in the DWB (for C 1 ) are found to be included in the FHD.",
"Interestingly, one sense paraphrased for Ausru-fung (a loud wording, a shout') is included in neither of the two dictionaries consulted to judge senses from C 1 and C 2 , but in the FHD (earlier) and DUDEN (as of now).",
"These findings suggest that it might be reasonable to use more than two reference corpora.",
"This would also alleviate the corpus bias stemming from idiosyncratic data sampling procedures.",
"We used two state-of-the-art approaches to LSC detection in combination with a recently published high-quality data set to automatically discover semantic changes in a German diachronic corpus pair.",
"While both approaches were able to discover various semantic changes with above-random probability, some of them previously undescribed in etymological dictionaries, the type-based approach showed a clearly better performance.",
"We validated model predictions by an optimized human annotation process yielding high inter-annotator agreement and providing convenient ways of visualization.",
"In addition, we evaluated the full discovery process from a lexicographer's point of view and conclude that we obtained high-quality predictions, useful visualizations and previously unreported changes.",
"On the other hand, we discovered some issues with respect to the reliability of predictions for semantic change and number and composition of reference corpora that are going to be dealt with in the future.",
"The results of the analyses endorse that our approach might aid lexicographers with extending and altering existing dictionary entries.",
"We thank the three reviewers for their insightful feedback and Pedro Gonzalez Bascoy for setting up the DURel annotation tool.",
"Dominik Schlechtweg was supported by the Konrad Adenauer Foundation and the CRETA center funded by the German Ministry for Education and Research (BMBF) during the conduct of this study."
] | [
"abstain",
"objective",
"objective",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"other",
"other",
"method",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"objective",
"other",
"other"
] |
[
"Multilingual pre-trained models are able to zero-shot transfer knowledge from rich-resource to low-resource languages in machine reading comprehension (MRC).",
"However, inherent linguistic discrepancies in different languages could make answer spans predicted by zero-shot transfer violate syntactic constraints of the target language.",
"In this paper, we propose a novel multilingual MRC framework equipped with a Siamese Semantic Disentanglement Model (S 2 DM) to disassociate semantics from syntax in representations learned by multilingual pre-trained models.",
"To explicitly transfer only semantic knowledge to the target language, we propose two groups of losses tailored for semantic and syntactic encoding and disentanglement.",
"Experimental results on three multilingual MRC datasets (i.e., XQuAD, MLQA, and TyDi QA) demonstrate the effectiveness of our proposed approach over models based on mBERT and XLM-100.",
"Multilingual pre-trained language models (PLMs) (Devlin et al., 2019; Conneau and Lample, 2019; Conneau et al., 2020) have been widely explored in cross-lingual understanding tasks.",
"However, zero-shot transfer method based on multilingual PLMs does not work well for low-resource language MRC.",
"Such multilingual MRC models could roughly detect answer spans but may fail to predict the precise boundaries of answers (Yuan et al., 2020).",
"In order to address this issue, existing methods mainly resort to external resources.",
"Based on the finding that 70% of answer spans are language-specific phrases (e.g., named entities, noun phrases) in MLQA (Lewis et al., 2020), Yuan et al. (2020) propose an additional language-specific knowledge * These authors contributed equally to this work and should be considered co-first authors.",
"[Passage] [XQuAD] A growing cause of concern are that attacks on teachers in Welsh schools which reached an all-time high between 2005 and 2010.[...] [Question] When were attacks on teachers the highest?",
"[Answer ground truth] between 2005 and 2010 [the answer is a prepositional phrase containing a noun phrase] ROOTVPNP PP DT JJ IN NP CD an high between2005 [Passage] [BiPaR] [...] As Trinket watched, the gleam of reflected light from the stiletto began to jump and",
"waver.[...] [Question]",
"What's shaking?",
"[Answer ground truth] the gleam of reflected light [Answer model prediction] [the answer violating syntactic constituent boundaries]",
"(a)",
"(b) ROOT IPNP VP NN ADVP AD VP VV IP VP VV NP CP DEC IP VP VV CD 2010 VBD JJ all-time reached CC and Figure 1: Relations between answer spans and syntactic constituents.",
"phrase masking (LAKM) task to enhance boundary detection for low-resource languages.",
"Liang et al. (2021) present a boundary calibration model stacked over a base sequence labeling module, introducing a phrase boundary recovery task to pretrain the calibration module on large-scale multilingual datasets synthesized from Wikipedia documents.",
"These two methods rely on external resources, which are not always easily available.",
"As illustrated in Figure",
"1(b), the transfer model may violate syntactic constraints for answer spans in the target language (e.g., the predicted answer \" \" crossing the boundaries of two sub-trees).",
"An intuitive assumption is that the majority of answer spans respect syntactic constituency boundaries (i.e., syntactic constraint, illustrated by the case in Figure",
"1(a)).",
"On four multilingual MRC datasets, we use Stanford CoreNLP 1 to collect syn-1 https://stanfordnlp.github.io/ CoreNLP/ 991 XQuAD MLQA TyDi QA-GoldP BiPaR English 89.08% 90.11% 89.12% 90.99% Chinese 88.05% 87.57% -95.73% Table 1: The percentages of answer spans that respect syntactic constituent boundaries in four multilingual MRC datasets in both English and Chinese.",
"tax parse trees and calculate the percentages of ground-truth answers that respect syntactic constituent boundaries.",
"As shown in Table 1, over 87% of answer spans respect the syntactic constraint.",
"On the bilingual parallel MRC corpus BiPaR (Jing et al., 2019), we have compared two MRC models: a monolingual MRC model trained on the Chinese data of BiPaR vs. an mBERT-based MRC model trained on the English data of BiPaR and adapted to Chinese via zero-shot transfer.",
"For questions where the monolingual model correctly predicts the answer and respect syntactic constraint, 23.15% of them are incorrectly predicted by the transfer model, and the predicted answers violate the syntactic constraint, illustrated by the case in Figure",
"1(b).",
"This suggests that the source language syntax may have a negative impact on the answer boundary detection in the target language during zero-shot transfer, due to the linguistic discrepancies between the two languages.",
"However, linguistic discrepancies are diverse and it is difficult to learn them.",
"We hence propose to decouple semantics from syntax in pre-trained models for multilingual MRC, transforming the learning of linguistic discrepancies into universal semantic information.",
"Specifically, we propose a Siamese Semantic Disentanglement Model (S 2 DM) that utilises two latent variables to learn semantic and syntactic vectors in multilingual pre-trained representations.",
"As shown in Figure",
"2(a), stacking a linear output layer for MRC over the disentangled semantic representation layer, we can fine-tune the multilingual PLMs on the rich-resource source language and transfer only disentangled semantic knowledge into the target language MRC.",
"Our model aims to reduce the negative impact of the source language syntax on answer boundary detection in the target language.",
"To disassociate semantic and syntactic information in PLMs well, we introduce objective functions of learning cross-lingual reconstruction and semantic discrimination together with losses of incorporating word order information and syntax structure information (Part-of-Speech tags and syntax parse trees).",
"We use a publicly available multilingual sentence-level parallel corpus with syntactic labels to train S 2 DM.",
"We propose a multilingual MRC framework that explicitly transfers semantic knowledge of the source language to the target language to reduce the negative impact of source syntax on answer span detection in the target language MRC.",
"We propose a siamese semantic disentanglement model that can effectively separate semantic from syntactic information of multilingual PLMs with semantics/syntax-oriented losses.",
"Experimental results on three multilingual MRC datasets ( XQuAD, MLQA, and TyDi QA) demonstrate that our model can achieve significant improvements of 3.13 and 2.53 EM points over two strong baselines, respectively.",
"Cross-lingual/Multilingual Machine Reading Comprehension Recent advances in multilingual MRC evaluation datasets (Artetxe et al., 2020; Lewis et al., 2020; Clark et al., 2020) trigger research interests in multilingual and cross-lingual MRC (Hsu et al., 2019; Cui et al., 2019; Yuan et al., 2020; Liu et al., 2020; Huang et al., 2021; Wu et al., 2021).",
"Hsu et al. (2019) investigate cross-lingual transfer capability of multilingual BERT (mBERT) on MRC tasks and find that zero-shot learning based on PLM is feasible, even between distant languages, such as English and Chinese.",
"Various approaches have been proposed on top of multilingual MRC based on PLMs.",
"Cui et al. (2019) propose a method that combines multilingual BERT and back-translation for cross-lingual MRC.",
"In order to effectively leverage translation data and reduce the impact of noise in translations, Liu et al. (2020) propose a cross-lingual training approach based on knowledge distillation for multilingual MRC.",
"Yuan et al. (2020) present two auxiliary tasks: mixMRC and LAKM to introduce additional phrase boundary supervision into the fine-tuning stage.",
"Liang et al. (2021) propose a pre-trained boundary calibration module based on the output 992 Multilingual PLM Multilingual PLM ...",
"Different from the above studies, we mainly consider the impact of syntactic divergences between the source and target language in zero-shot cross-lingual transfer based on multilingual PLMs, and attempt to disassociate semantics from syntax and only transfer semantics to the target language.",
"Disentangled Representation Learning Recently, there has been a growing amount of work on learning disentangled latent representations in NLP tasks (Zhang et al., 2019; Hu et al., 2017; Yin et al., 2018).",
"In this aspect, the most related work to our syntax-semantics decoupling method is the vMF-Gaussian Variational Autoencoder (VGVAE) model proposed by Chen et al. (2019).",
"It is a generative model using two latent variables to represent semantics and syntax of the sentence, developed for monolingual setting and trained with paraphrases.",
"It uses paraphrase reconstruction loss and a discriminative paraphrase loss to learn semantic representations and word order information for syntactic representations.",
"We adapt this model to multilingual syntax-semantics disentanglement.",
"We use bilingual sentence pairs to train our model with a cross-lingual reconstruction loss and semantic discrimination loss.",
"To better disentangle semantics from complex and diverse syntax in multilingual PLMs, we introduce two additional syntax-related losses for incorporating POS tags and syntax trees.",
"Figure 2 shows the architecture of our multilingual MRC framework with the proposed siamese semantic disentanglement model.",
"Our multilingual MRC framework consists of three essential components: the multilingual PLM layer, the siamese semantic disentanglement module, and the linear output layer.",
"The output representations from the multilingual PLM are fed into S 2 DM to disassociate semantic and syntactic information.",
"Only the disentangled semantic representations are input to the linear output layer for predicting answer spans in passages.",
"In order to facilitate the zero-shot cross-lingual transfer of only semantic knowledge from the rich-resource source language to the low-resource target language, we take a two-stage training strategy.",
"First, we pre-train S 2 DM with parallel data (see Section 3.2) while the parameters of the multilingual PLM are frozen.",
"Once S 2 DM is trained, only the output of source language MLP network is fed into the linear output layer for MRC.",
"In the second step, we freeze the parameters of the S 2 DM and fine-tune the entire multilingual MRC framework on MRC data of the source language.",
"In S 2 DM, we assume that a sentence x is generated by a semantic and syntactic variable, i.e., y and z , independently.",
"We follow VGVAE Chen et al. 993 (2019) to use the von Mises-Fisher (vMF) distribution for the semantic variable and the Gaussian distribution for the syntactic variable.",
"Formally, the joint probability of the sentence and its two latent variables can be factorized as: p ( x, y, z ) = p ( y ) p ( z ) p ( x | y, z ) (1) where p ( x | y, z ) is a generative model consisting of bag-of-words decoder.",
"The variational inference process of VGVAE uses a factorized approximated posterior q ( y | x ) q ( z | x ) = q ( y, z | x ) with the objective function that maximizes a lower bound of the marginal log-likelihood: LV GV AE = LRL + KL( q ( z | x ) || p ( z )) + KL( q ( y | x ) || p ( y )) , (2) LRL = E y q ( y | x ) z q ( z | x ) (cid:2) log p ( x | y, z ) (cid:3) (3) where q ( y | x ) is subject to vMF ( ( x ) , ( x )) while q ( z | x ) follows N ( ( x ) , diag ( ( x ))) .",
"The prior p ( y ) and p ( z ) follows the uniform distribution vMF ( , 0) and a standard Gaussian distribution respectively.",
"Eq.(3) is the reconstruction loss (RL) of the generator.",
"In our model, we adopt a multilayer perceptron (MLP) network to learn the mean ( ) and variance ( ) of two distributions.",
"As pre-trained representations are contextually-encoded token vectors, latent variable vectors obtained by sampling from the distributions need to be averaged so as to output sentence-level semantic and syntactic vector.",
"Since S 2 DM uses a Siamese network for both the source and target language, the disentanglement between semantics and syntax is conducted for the two languages simultaneously with two parameter-shared subnetworks, as shown in Figure",
"2(b).",
"We attempt to extract rich semantic information from multilingual representations which is universal for multiple languages and contains less syntactic information.",
"Except for the conventional reconstruction loss, we propose two additional losses on parallel data to encourage the latent variable y to capture semantic information: a Cross-lingual Reconstruction Loss (CRL) and Semantic Discrimination Loss (SDL) .",
"The former estimates the cross-entropy loss when we use the semantic representation y t of the target language to reconstruct the source input and use the source semantic representation y s for target reconstruction.",
"The latter is used to force the learned source semantic representation y s to be as close as possible to the target semantic representation y t since the semantic meanings of the parallel source and target sentence is equivalent to each other.",
"The two losses are estimated as follows: LCRL = E yt q ( y | xt ) zs q ( z | xs ) (cid:2) log p ( x s | y t , z s ) (cid:3) + E ys q ( y | xs ) zt q ( z | xt ) (cid:2) log p ( x t | y s , z t ) (cid:3) , (4) LSDL = max (cid:8) 0 , sim( y s , y t ) + sim( y s , n t ) (cid:9) + max (cid:8) 0 , sim( y s , y t ) + sim( n s , y t ) (cid:9) (5) where sim( , ) is a cosine similarity score function.",
"The margin is a hyperparameter to control the gap between parallel sentence pair ( y s , y t ) and two nonparallel sentence pairs ( y s , n t ) and ( n s , y t ) .",
"n s is the semantic vector of a negative sample, which has the highest cosine similarity to y s .",
"Specially, as partial sentences in our corpus are parallel in more than two languages, we limit the data range of negative sampling to only 2-way parallel pairs.",
"n t are obtained in the similar way to n s .",
"In order to guide S 2 DM to disassociate syntactic information into the syntactic latent variable z , we also define three losses tailored for capturing different types of syntactic information.",
"First, we employ Word Position Loss (WPL) , defined as follows: LWPL = E z q ( z | x ) (cid:2) (cid:88) i log softmax( f ( h i )) i (cid:3) , (6) where softmax( ) i indicates the probability of the i th word at position i , and f ( ) is a three-layer feedforward neural network with input h i = [ e i ; z ] that is the concatenation of the syntactic variable z and the embedding vector e i of the multilingual PLM for the i th token in the input sentence.",
"In addition, we define a Part-of-Speech and syntax tree loss to encourage S 2 DM to isolate deeper syntactic information from pre-trained representations.",
"POS tagging is a sequence labeling task, which can be regarded as a multi-class classification problem for each token in a sentence.",
"Hence, we define Part-of-Speech (POS) Loss as a cross-entropy style loss as follows: LPOS = (cid:88) i (cid:2) m (cid:88) j =1 log softmax( g ( h i )) j = class (cid:3) (7) where g ( ) is a linear layer, softmax ( ) j = class estimates the probability of gold POS tag class , m is the number of different POS tags.",
"that PLMs can encode syntactic structures of sentences (Hewitt and Manning (2019); Chi et al. (2020)).",
"Inspired by Hewitt and Manning (2019), we formulate syntactic parsing from pre-trained word representations as two independent tasks: depth prediction of a word and distance prediction of two words in the parse tree.",
"Given a matrix B R k m as a linear transformation, the losses of these two subtasks are defined as: L depth = (cid:88) i ( w i Bh i 22 ) , (8) L distance = (cid:88) i,j (cid:12)(cid:12) d T ( w i , w j ) d B ( h i , h j ) (cid:12)(cid:12) (9) where w i is the parse depth of a word defined as the number of edges from the root of the parse tree to w i , and Bh i 2 is the tree depth L2 norm of the vector space under the linear transformation.",
"As for d B ( h i , h j ) , it can be defined as the squared L 2 distance after transformation by B: d B ( h i , h j ) = ( B ( h i h j )) T ( B ( h i h j )) (10) To induce parse trees, we minimize the summation of the above two losses L depth and L distance , and LSTL is defined as: LSTL = L depth + L distance (11) According to the different syntactic tasks, we train two S 2 DM variants: S 2 DM_POS and S 2 DM_SP (SP for syntactic parsing), where their training objectives are defined as follows: L 1 = LV GV AE + LCRL + LSDL + LWPL + LPOS , L 2 = LV GV AE + LCRL + LSDL + LWPL + LSTL 3.3 Generalization Analysis In this section, we analyze the generalization of our decoupling-based multilingual MRC model.",
"d T ( w i , w j ) is the number of edges in the path between the i th and j th word in the parse tree T .",
"By two reconstruction losses",
"Eq.(3) and",
"Eq.(4), we will prove that the syntactic and semantic vectors obtained by S 2 DM are language-agnostic.",
"Since the mathematic structures of",
"Eq.(3) and",
"Eq.(4) are the same, we take one part of",
"Eq.(4) for analysis.",
"Due to z s and y t are independent of each other, p ( x s , z s | y t ) = p ( x s , z s ) .",
"We obtain: E yt q ( y | xt ) zs q ( z | xs ) (cid:2) log p ( x s | y t , z s ) (cid:3) = E y t q ( y | x t ) (cid:0) (cid:88) z s q ( z | x s ) p ( z s )log p ( z s ) p ( x s , z s | y t ) (cid:1) = KL( p ( z s ) || p ( x s , z s )) Similarly, E ys q ( y | xs ) zt q ( z | xt ) [ log p ( x t | y s , z t )] = KL( p ( z t ) || p ( x t , z t )) LRL = KL( p ( y s ) || p ( x s , y s )) + KL( p ( y t ) || p ( x t , y t )) Minimizing KL( q ( z | x ) || p ( z )) and KL( q ( y | x ) || p ( y )) will eventually fit both p ( x s , z s ) and p ( x t , z t ) into the same distribution.",
"In the same way, both p ( x s , y s ) and p ( x t , y t ) also fit to the same distribution, no matter what the target language is.",
"This is consistent with our motivation to use the siamese network.",
"Furthermore, the semantic discrimination loss in",
"Eq.(5) guarantees that the semantic vectors of the source language and the target language are similar to each other.",
"Minimizing",
"Eq.(5) can be equivalent to: (cid:26) sim( y s , y t ) > sim( y s , n t ) + sim( y s , y t ) > sim( n s , y t ) + which is to maximize sim( y s , y t ) to encourages the target semantic vector to approach parallel source semantic vector.",
"In summary, S 2 DM can obtain language-agnostic semantic and syntactic vectors.",
"Therefore, our multilingual MRC model is suitable even for low-resource languages without training data for the decoupling model.",
"To verify the effectiveness of our multilingual MRC model, we conducted experiments on three multilingual question answering benchmarks:",
"XQuAD (Artetxe et al., 2020) consists of 11 datasets of different languages translated from the SQuAD v1.1 (Rajpurkar et al., 2016) development set, including Spanish (es), German (de), Greek (el), Russian (ru), Turkish (tr), Arabic (ar), Vietnamese (vi), Thai (th), Chinese (zh), Hindi (hi), and Romanian (ro).",
"MLQA (Lewis et al., 2020) consists of over 5K extractive MRC instances in 7 languages: English (en), Arabic (ar), German (de), Spanish (es), Hindi (hi), Vietnamese (vi) and Chinese (zh).",
"MLQA is also highly parallel, with MRC instances parallel across 4 different languages on average.",
"TyDi QA-GoldP is the gold passage task in TyDi QA (Clark et al., 2020) covering 9 typologically diverse languages: Arabic (ar), Bengali (bg), English (en), Finnish (fi), Indonesian (id), Korean (ko), Russian (ru), Swahili (sw), Telugu (te).",
"It 995 is a more challenging MRC benchmark as questions have been written without seeing the answers, leading to 3 and 2 times less lexical overlap than XQuAD and MLQA, respectively (Hu et al., 2020).",
"We used the following two multilingual PLMs to build our MRC model to conduct experiments:",
"mBERT is the multilingual version of BERT Devlin et al. (2019), with 177M parameters, is pre-trained on the Wikipedia of 104 languages to optimize the masked language modeling objective.",
"XLM-100 uses a pre-training objective similar to that of mBERT but with a larger number of parameters (578M) and a larger shared vocabulary than mBERT, and is trained on the same Wikipedia data covering 100 languages as mBERT.",
"Furthermore, we compared with a strong baseline that uses external knowledge to enhance cross-lingual MRC: LAKM is a pre-trained task proposed in (Yuan et al., 2020) by introducing external sources for phrase-level masked language modeling task.",
"The external corpus contain 363.5k passages and 534k knowledge phrases in four languages: English (en), French (fr), German (de), and Spanish (es).",
"For S 2 DM, we collected approximately 26k labelled parallel sentence pairs from the Universal Dependencies (UD 2.7) Corpus (Zeman et al., 2020) as the training set.",
"The training set covers 20 languages and overlap with 13 languages of three MRC datasets.",
"We used Universal POS tags and HEAD tags in UD 2.7 for the POS tagging and syntactic parsing task.",
"We chose data from the Chinese semantic textual similarity (STS) task (Tang et al., 2016) as the development set.",
"For hyper-parameters in S 2 DM, the learning rate was set to 5e-5, the margin was 0.4, and the latent variable dimensions was 200.",
"For our multilingual MRC models and two baseline models, we fine-tuned them on the SQuAD v1.1 (Rajpurkar et al., 2016) and evaluated them on the test data of the three multilingual MRC datasets.",
"For models based on mBERT, we fine-tuned them for 3 epochs with a training batch size of 32 and a learning rate of 2e-5.",
"We fine-tuned models based on XLM-100 for 2 epochs with a training batch size of 16 and a learning rate of 3e-5.",
"The overall experimental results are shown in Table 2. All our tests were conducted under the conditions of zero-shot transfer.",
"Our models (S 2 DM_POS, S 2 DM_SP combined with XLM-100 or mBERT) significantly outperform both XLM-100 and mBERT baselines on three datasets.",
"S 2 DM_SP achieves the best performance, indicating that the learning of deeper syntax information is compelling.",
"Especially, compared with baselines on the TyDi QA-Gold dataset, S 2 DM_SP based on XLM-100 and mBERT gains 4.1%, 4.2% EM improvements on average across 9 languages, respectively.",
"The results of 12 languages in XQuAD and MLQA are shown in Table 3. For cross-lingual transfer performance, our models are better than the two baselines in terms of either EM or F1 on all 11 low-resource target languages.",
"On the MLQA dataset, LAKM uses a larger extra corpus to train a better backbone language model, while our method with less external data can still achieve similar performance in German (de) and Spanish (es).",
"The TyDi QA-GoldP dataset is more challenging than XQuAD and MLQA.",
"The results of TyDi QA-GoldP are shown in Table 4, and our models are superior to the baselines in terms of either EM or F1 for all 8 low-resource target languages.",
"Significantly, XLM+S 2 DM_SP outperforms the XLM-100 baselines by 8.4%, 9.5% in EM for Finnish (fi), Russian(ru), respectively.",
"The language families of these two languages are different from that of English.",
"The evaluation results on these three datasets verify the effectiveness of our proposed method.",
"In Section 3.3, we theoretically analyze the generalization of our model.",
"The results on the three datasets show the effectiveness on five languages not included in the training target languages for S 2 DM.",
"The five languages are Romanian (ro), Vietnamese (vi) in XQuAD and Bengali (bg), Swahili (sw), Telugu (te) in TyDi QA-GoldP, which are resource-scarce and have differ-996 XQuAD(EM/F1) en ar de el es hi ro ru th tr vi zh avg XLM-100 66.5/86.5 35.6/72.4 53.8/80.9 37.9/66.3 54.6/81.0 39.9/64.9 56.6/79.6 54.0/ 79.5 10.3/27.0 42.0/72.4 49.5/75.4 42.7/65.4 45.3/70.9 XLM-100 XLM+S 2 DM_POS 67.5/87.4 40.2 / 74.9 54.2/80.8 41.9/71.3 55.4/82.1 40.0/66.2 56.4/79.6 54.0/79.3 13.8 / 38.9 41.9/70.8 50.6/75.8 42.9/65.1 46.6/72.7 XLM+S 2 DM_SP 68.3 / 88.0 39.8/ 74.9 55.8 / 81.7 44.1 / 72.4 56.8 / 82.5 40.5 / 66.5 59.0 / 81.7 54.2 / 79.5 13.3/38.3 44.5 / 72.9 51.3 / 76.1 44.5 / 67.6 47.7 / 73.5 mBERT 72.6/83.6 44.3/ 60.6 54.0/69.6 46.0/61.1 57.3/74.9 38.3/53.3 58.3/72.5 54.0/69.6 30.9/39.9 33.8/50.9 46.1/65.9 46.3/57.4 48.5/63.3 mBERT mBERT+S 2 DM_POS 73.4 /83.2 44.9 /59.9 55.6 / 71.9 44.8/59.7 57.4 / 75.0 41.3/55.7 58.1/72.4 55.3 / 71.2 32.7 / 40.7 34.0/50.8 48.2/67.4 47.1/56.9 49.4/63.7 mBERT+S 2 DM_SP 73.2/ 84.0 43.3/60.0 55.2/70.7 46.6 / 61.8 57.1/74.1 42.7 / 56.5 59.5 / 73.4 54.6/70.3 30.4/38.9 36.3 / 51.4 49.8 / 69.7 48.9 / 58.5 49.8 / 64.1 MLQA(EM/F1) XLM-100 59.1/81.8 27.0/62.8 43.5/71.3 -42.7/73.8 29.3/56.4 --37.4/65.0 30.1/53.7 38.5/66.4 XLM-100 XLM+S 2 DM_POS 61.1 /82.8 30.5/65.7 43.9/71.2 -43.1/73.5 31.5/58.0 --39.7/66.7 30.5/53.1 40.1/67.3 XLM+S 2 DM_SP 61.1 / 83.0 31.2 / 67.1 45.9 / 72.9 -43.6 / 74.1 34.1 / 61.2 --41.4 / 68.5 32.3 / 55.6 41.4 / 68.9 mBERT 67.0/79.3 31.5/49.5 43.8/58.3 -45.8/64.1 29.4/45.2 --37.5/57.3 34.5/56.1 41.2/58.5 LAKM 66.8/ 80.0 -45.5 / 60.5 -48.0 / 65.9 ----mBERT mBERT+S 2 DM_POS 66.3/79.5 32.4 /50.2 45.1/59.7 -46.8/65.1 30.8/46.0 --39.5/59.4 38.4 /59.1 42.8/59.9 mBERT+S 2 DM_SP 67.5 /79.8 32.1/ 50.5 45.3/59.9 -47.2/65.0 32.0 / 46.9 --41.1 / 60.6 38.0/ 59.3 43.3 / 60.3 Table 3: EM and F1 score of 12 languages on the XQuAD and MLQA dataset.",
"ent language families from English.",
"Significantly, mBERT+S 2 DM_SP outperforms the mBERT baseline by 13.6% in EM for Swahili (sw).",
"We further conducted an ablation study based on the mBERT and VGVAE model with different combinations of losses (introduced in the Sec-tion.3.2).",
"The results are shown in Figure 3. Our mBERT+S 2 DM_SP MRC model achieves the strongest performance among all variants, surpassing the model w/ all losses.",
"According to the results shown in Figure 3, we can summarize that each loss is essential and suitable to our model.",
"The results without POS and STL loss (e.g., w/ CRL+SDL+WPL) on the MLQA dataset validate the effectiveness of our losses (POS or STL loss) tailored for capturing syntactic information.",
"The performance of models that only contain two losses in CRL, SDL, and WPL drops significantly compared with the w/ CRL+SDL+WPL model.",
"The results of models that only contain one of the losses in CRL, SDL drop slightly, but the EM of the model with only WPL is better than w/ CRL+WPL and w/ SDL+WPL, which further demonstrates the importance of the syntax-oriented loss.",
"All ablation models do not exceed our best model, illustrating the importance of all proposed losses.",
"In order to separate semantic information from PLMs, an alternative way is to train a single network based on the VGVAE model as shown in Figure 4. Compared with S 2 DM, the single-network model does not use the CRL and SDL loss and only requires labeled monolingual data.",
"Corresponding to S 2 DM, there are also two single-network variants: S 2 DM_single_POS and S 2 DM_single_SP.",
"Since there is no explicit semantics learning across the source and target language, we conjecture that the single-network S 2 DM will affect the quality of learned semantic vectors and the degree of semantics-syntax decoupling.",
"As shown in Table 5, the performance of the single-network S 2 DM is worse than the siamese-network model.",
"Our method mainly aims to reduce the potential negative impact of syntactic differences of languages in the zero-shot transfer process by explicitly isolating semantics from syntax in representations from multilingual pre-trained models.",
"Therefore, we hope to obtain multilingual semantic representations with rich semantic information to guide the machine to read and understand texts.",
"In order to examine (1) whether semantic vectors y in S 2 DM encode rich semantic information, and (2) whether semantics is sufficiently separated from syntax, and (3) whether semantic disentanglement can improve predicted answer spans in matching syntactic structures of the target language, we conducted additional experiments and analyses.",
"Here we used three datasets of cross-lingual semantic textual similarity (STS) in SemEval-2017 2 to evaluate the quality of semantic vectors learned by S 2 DM.",
"The three datasets are for Arabic to English (ar-en), Spanish to English (es-en), and Turkish to English (tr-en) cross-lingual STS.",
"We report the results of our models in Figure 5 based on mBERT.",
"We also evaluated learned syntactic vectors in cross-lingual STS, hoping that the performance gap between semantic vectors (i.e., y in S 2 DM) and syntactic vectors (i.e., z in S 2 DM) is as large as possible.",
"As shown in Figure 5, disentangled semantic representations significantly improve Pearson correlation over the baseline in ar-en, es-en, and tr-en by 11.46%, 3.40%, 4.98%, respectively.",
"Additionally, disentangled syntactic representations are negatively correlated to STS in most cases.",
"These results suggest that disentangled semantic vectors indeed learn rich universal semantic information.",
"We visualize hidden representations of the last layer of mBERT and semantic representations of mBERT+S 2 DM_POS and mBERT+S 2 DM_SP in Figure 6, in which the parallel sentences are from 2 https://alt.qcri.org/semeval2017/ task1/ 998 a 15-way parallel corpus (Conneau et al., 2018).",
"It is clear to see that disentangled semantic representations learned by S 2 DM make parallel sentences in 15 languages (semantically equivalent to each other) closer to one another in space, blending language boundaries clearly seen from mBERT representations (Figure",
"6(a)).",
"Combined with the negative/positive results of syntactic/semantic vectors in the cross-lingual STS task in SemEval-2017, the visualization demonstrates that S 2 DM can efficiently disassociate semantics from syntax.",
"Finally, we evaluated the degree of consistency to syntactic constituents of predicted answer spans.",
"As described in Section 1, 23.15% of the non-transfer predicted correct answers violate syntactic constraint of the target language during the raw zero-shot cross-lingual transfer on BiPaR.",
"By contrast, S 2 DM_POS and S 2 DM_SP drop this percentage to 12.98% and 6.60%, respectively.",
"Moreover, on the entire test set of BiPaR (Jing et al., 2019) in Chinese, 93.27% answers predicted by S 2 DM_SP exactly span syntactic constituents, which is 8.14% higher than the mBERT model.",
"In this paper, we have presented a novel multilingual MRC model for zero-shot cross-lingual transfer, which can disentangle semantic from syntactic representations and explicitly transfer semantic information from rich-resource language to low-resource languages, reducing the influence of syntactic differences between languages on the answer span prediction of the target language.",
"To disassociate semantics from syntax in multilingual pre-trained representations, we propose the siamese semantic disentanglement model that semantics/syntax-oriented losses to guide latent variables to learn corresponding information.",
"For low-resource languages without training data for the decoupling model, our theoretical analysis and experiments verify the generalization of our multilingual MRC model.",
"Further in-depth analyses suggest that the proposed S 2 DM can efficiently disentangle semantics from syntax and significantly improve syntactic consistency of answer predictions on the target language after zero-shot cross-lingual transfer.",
"The present research was supported by the National Natural Science Foundation of China (NSFC)",
"(61972455), the Joint Project of AISHU.com, Bayescom, Zhejiang Lab (No. 2022KH0AB01) and the Natural Science Foundation of Tianjin (No. 19JCZDJC31400).",
"Xiaowang Zhang is supported by the program of Peiyang Young Scholars in Tianjin University (2019XRX-0032)."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"other",
"other",
"other"
] |
[
"Distributional semantic models have become a mainstay in NLP, providing useful features for downstream tasks.",
"However, assessing long-term progress requires explicit long-term goals.",
"In this paper, I take a broad linguistic perspective, looking at how well current models can deal with various semantic challenges.",
"Given stark differences between models proposed in different subfields, a broad perspective is needed to see how we could integrate them.",
"I conclude that, while linguistic insights can guide the design of model architectures, future progress will require balancing the often conflicting demands of linguistic expressiveness and computational tractability.",
"In order to assess progress in any field, the goals need to be clear.",
"In assessing progress in semantics, Koller (2016) contrasts top-down and bottom-up approaches: a top-down approach begins with an overarching goal, and tries to build a model to reach it; a bottom-up approach begins with existing models, and tries to extend them towards new goals.",
"1 Like much of NLP, distributional semantics is largely bottom-up: the goals are usually to improve performance on particular tasks, or particular datasets.",
"Aiming to improve NLP applications is of course a legitimate decision, but Koller points out a problem if there is no top-down goal: Bottom-up theories are intrinsically unfalsifiable... We won't know where distributional semantics is going until it has a top-down element.",
"This is contrasted against truth-conditional semantics, a traditional linguistic approach which is largely top-down: truth-conditional semantics hasn't reached its goal, but at least we knew what the goal was.",
"1 For further discussion, see: Bender and Koller (2020).",
"the meanings of all utterances in a language.",
"This is an ambitious goal, and a broad one.",
"To make this goal more precise, in the following sections I will elaborate on several aspects of meaning which could be considered crucial.",
"For each aspect, I identify a plausible goal, lay out out the space of possible models, place existing work in this space, and evaluate which approaches seem most promising.",
"By making the goals explicit, we can assess whether we are heading in the right direction, and we can assess what still needs to be done.",
"If a reader should disagree with my conclusions, they should start by looking at my goals.",
"The aim of distributional semantics is to learn the meanings of linguistic expressions from a corpus of text.",
"The core idea, known as the distributional hypothesis , is that the contexts in which an expression appears give us information about its meaning.",
"2 The idea has roots in American structuralism (Harris, 1954) and British lexicology (Firth, 1951, 1957) 3 , and with the advent of modern computing, it began to be used in practice.",
"In a notable early work, Sparck-Jones (1964) represented word meanings as boolean vectors, based on a thesaurus.",
"Distributional semantics has become widespread in NLP, first with the rise of count vectors (for an overview, see: Erk, 2012; Clark, 2015), then of word embeddings (Mikolov et al., 2013), and most recently, of contextualised embeddings (Pe-ters et al., 2018; Devlin et al., 2019).",
"4 What all of these approaches share is that they learn representations in an unsupervised manner on a corpus.",
"2 The hypothesis is often stated more narrowly, to say that similar words appear in similar contexts, but in this paper I am interested in semantics beyond just similarity.",
"3 Firth used the term collocational , not distributional .",
"4 For connections between count vectors and embeddings, see: Levy and Goldberg (2014); Cotterell et al. (2017); for connections with contextual embeddings: Kong et al. (2020).",
"While much work takes a bottom-up approach, as Koller observes, a notable exception is the type-driven tensorial framework of Coecke et al. (2010) and Baroni et al. (2014), which has broad linguistic goals, and will be mentioned in several sections below.",
"This framework represents the meanings of words as tensors, and constructs phrase meanings using tensor contraction based on predicate-argument structure.",
"For example, there is one vector space for nouns, and a second vector space for sentences, so intransitive verbs are matrices (map-ping noun vectors to sentence vectors).",
"Language is always about something.",
"In this section, I discuss challenges in connecting a semantic model to things in the world.",
"As Harnad (1990) discusses, if the meanings of words are defined only in terms of other words, these definitions are circular.",
"One goal for a semantic model is to capture how language relates to the world, including sensory perception and motor control this process of connecting language to the world is called grounding .",
"5 A purely distributional model is not grounded, as it is only trained on text, with no direct link to the world.",
"There are several ways we could try to ground a distributional model (for an overview, see: Baroni, 2016).",
"The simplest way is to train a distributional model as normal, then combine it with a grounded model.",
"For example, Bruni et al. (2011) concatenate distributional vectors and image feature vectors.",
"This has also been applied to other senses: Kiela et al. (2015) use olfactory data, and Kiela and Clark (2017) use both visual and auditory data.",
"However, while there is grounded information in the sensory dimensions, concatenation leaves the distributional dimensions ungrounded.",
"A second approach is to find correlations between distributional and sensory features.",
"For example, Bruni et al. (2014) perform SVD on concatenated vectors, Silberer and Lapata (2014) train an autoencoder on concatenated vectors, and Lazaridou et al. (2014) and Bulat et al. (2016) learn a mapping from distributional vectors to visual vectors (and vice versa).",
"However, there is no guarantee 5 This includes connecting abstract concepts to the world, although such connections are necessarily more indirect.",
"For further discussion, see: Blondin-Mass e et al. (2008); Pecher et al. (2011); Pulverm uller (2013); Barsalou et al. (2018) that every distributional feature will correlate with sensory features.",
"Distributional features without correlations will remain ungrounded.",
"Finally, a third approach is joint learning we define a single model, whose parameters are learnt based on both corpus data and grounded data.",
"For example, Feng and Lapata (2010) train an LDA model (Blei et al., 2003) for both words and visual words (clusters of visual features).",
"Lazaridou et al. (2015) use a Skip-gram model (Mikolov et al., 2013) to jointly predict both words and images.",
"Kiros et al. (2014) embed both text and images in a single space, training an RNN to process captions, and a CNN to process images.",
"Pure distributional models look for word co-occurrence patterns, while joint models prefer co-occurrence patterns that match the grounded data.",
"For this reason, I believe joint learning is the right approach to ground corpus data semantic representations can be connected to grounded data from the outset, rather than trying to make such connections after the fact.",
"However, we must still make sure that all distributional features are grounded.",
"With Feng and Lap-ata's LDA model, some topics might only generate words rather than visual words.",
"Similarly, with Lazaridou et",
"al.'s joint Skip-gram model, some embeddings might only predict words rather than images.",
"Conversely, we also need to make sure that we make full use of corpus data, rather than discarding what is difficult to ground.",
"For example, Kiros et",
"al.'s joint embedding model learns sentence embeddings in order to match them to images.",
"It is not obvious how this approach could be extended so that we can learn embeddings for sentences that cannot be easily depicted in an image.",
"This leads to the question: how should a joint architecture be designed, so that we can fully learn from corpus data, while ensuring that representations are fully grounded?",
"Grounding is hard, and indeed Kuhnle et al. (2018) find that some semantic constructions (such as superlatives) are much harder for grounded models to learn than others.",
"In the following section, I discuss how language relates to the world.",
"Clarifying this relationship should help us to design good joint architectures.",
"How do meanings relate to the world?",
"In truth-conditional semantics, the answer is that meaning is defined in terms of truth .",
"6 If an agent under-6 For a discussion of this point, see: Lewis (1970).",
"For an stands a language, then in any given situation, they know how to evaluate whether a sentence is true or false of that situation.",
"7 An advantage of this approach is that it supports logical reasoning, which I will discuss in 5.2.",
"One goal for a semantic theory is to be able to generalise to new situations.",
"This is difficult for traditional truth-conditional semantics, with classical theories challenged on both philosophical grounds (for example: Wittgenstein, 1953, 6671) and empirical grounds (for example: Rosch, 1975, 1978).",
"However, a machine learning approach seems promising, since generalising to new data is a central aim of machine learning.",
"For a semantic model to be compatible with truth-conditional semantics, it is necessary to distinguish a concept (the meaning of a word) from a referent (an entity the word can refer to).",
"8 The importance of this distinction has been noted for some time (for example: Ogden and Richards, 1923).",
"A concept's set of referents is called its extension .",
"9 Even if we can construct grounded concept vectors, as discussed in 3.1, there is still the question of how to relate a concept vector to its referents.",
"10 One option is to embed both concepts and entities in the same space.",
"We then need a way to decide how close the vectors need to be, for the entity to be in the concept's extension.",
"A second option is to embed concepts and referents in distinct spaces.",
"We then need a way to relate the two spaces.",
"In both cases, we need additional structure beyond representing concepts and referents as points.",
"One solution is to represent a concept by a region of space (Gardenfors, 2000, 2014).",
"Entities embedded inside the region are referents, while those outside are not.",
"For example, McMahan and Stone (2015) learn representations of colour terms, which are grounded in a well-understood perceptual space.",
"A related idea is to represent a concept as a binary classifier, where an entity is the input.",
"11 One class is the concept's extension, and the other class introduction to truth-conditional semantics, see: Cann (1993); Allan (2001); Kamp and Reyle (2013).",
"7 On the notion of situation , see: Barwise and Perry (1983).",
"On knowing how to evaluate truth values vs. actually evaluating truth values, see: Dummett (1976, 1978).",
"8 Following Murphy (2002, pp. 45), I use the term concept without committing to a particular theory of concepts.",
"9 Or denotation .",
"In psychology, the term category is also used (for example: Smith and Medin, 1981; Murphy, 2002).",
"10 While distributional representations can be learnt for named entities (for example: Herbelot, 2015; Boleda et al., 2017), most real-world entities are not mentioned in text.",
"11 For deterministic regions and classifiers, there is a one-toone mapping between them, but this is not true for probabilistic regions and classifiers, due to covariance.",
"is everything else.",
"Larsson (2013) represents the meaning of a perceptual concept as a classifier of perceptual input.",
"A number of authors have trained image classifiers using captioned images (for example: Schlangen et al., 2016; Zarrie and Schlangen, 2017a,b; Utescher, 2019; Matsson et al., 2019).",
"Such representations have however seen limited use in distributional semantics.",
"Erk (2009a,b) and Dong et al. (2018) learn regions, but relying on pre-trained vectors, which may have already lost referential information (such as co-reference) that we would like to capture.",
"Jameel and Schockaert (2017) learn a hybrid model, where each word is represented by a point (as a target word) and a region (as a context word).",
"In my own work, I have learnt classifiers (Emerson and Copestake, 2016, 2017a,b), but with a computationally expensive model that is difficult to train.",
"The computational challenge is partially resolved in my most recent work (Emerson, 2020a), but there is still work to be done in scaling up the model to make full use of the corpus data.",
"The best way to design such a model, so that it can both make full use of the data and can be trained efficiently, is an open question.",
"In this section, I discuss challenges in representing the meanings of individual words.",
"Entities often fall along a continuum without a sharp cutoff between concepts.",
"This is called vagueness (or gradedness ).",
"(For an overview, see: Sutton, 2013, chapter 1; Van Deemter, 2010.)",
"For example, Labov (1973) investigated the boundaries between concepts like cup , mug , and bowl , asking participants to name drawings of objects.",
"For typical referents, terms were used consistently; meanwhile, for objects that were intermediate between concepts (for example, something wide for a cup but narrow for a bowl), terms were used inconsistently.",
"For these borderline cases, a single person may make different judgements at different times (McCloskey and Glucksberg, 1978).",
"One goal for a semantic model is to capture how it can be unclear whether an entity is an referent of a concept.",
"One approach is to use fuzzy truth values, which are not binary true/false, but rather values in the range [0,1], where 0 is definitely false, 1 is definitely true, and intermediate values represent borderline cases (Zadeh, 1965, 1975).",
"Fuzzy logic has not seen much use in computational linguistics.",
"12 A second solution is to stick with binary truth values, but using probability theory to formalise uncertainty about truth, as has been proposed in formal semantics (for example: Lassiter, 2011; Fernandez and Larsson, 2014; Sutton, 2015, 2017).",
"At the level of a single concept, there is not much to decide between fuzzy and probabilistic accounts, since both assign values in the range [0,1].",
"However, we will see in 5.2 that they behave differently at the level of sentences.",
"Uncertainty has also been incorporated into distributional vector space models.",
"Vilnis and McCallum (2015) extend Mikolov et",
"al.'s Skip-gram model, representing meanings as Gaussian distributions over vectors.",
"Barkan (2017) incorporate uncertainty into Skip-gram using Bayesian inference rather than optimising word vectors, the aim is to calculate the posterior distribution over word vectors, given the observed data.",
"The posterior is approximated as a Gaussian, so these two approaches produce the same kind of object.",
"Balkr (2014), working within the type-driven tensorial framework (see 2), uses a quantum mechanical mixed state to model uncertainty in a tensor.",
"For example, this replaces vectors by matrices, and replaces matrices by fourth-order tensors.",
"While these approaches represent uncertainty, it is challenging to use them to capture vagueness.",
"This basic problem is this: a distribution allows us to generate referents of a concept, but how can we go in the other direction, to recognise referents of a concept?",
"It is tempting to classify a point using the probability density at that point, but if we compare a more general term with a more specific term (like animal and dog ), we find a problem: a more general term has its probability mass spread more thinly, and hence has a lower probability density than the more specific term, even if both terms could be considered true.",
"I argued in 3.2 that, to talk about truth, we need to represent predicates as regions of space or as classifiers.",
"While a distribution over a space might at first sight look like a region of space, normalising the probability mass to sum to 1 makes a distribution a different kind of object.",
"12 Carvalho et al. (2012) survey fuzzy logic in NLP, noting that its use is in decline, but they do not mention distributional semantics.",
"Proposals such as Monte Carlo Semantics (Bergmair, 2010) and Fuzzy Natural Logic (Nov ak, 2017) do not provide an approach to distributional semantics.",
"A rare exception is Runkler (2016), who infers fuzzy membership functions from pre-trained vectors.",
"The meaning of a word can often be broken up into distinct senses .",
"Related senses are called polysemous : for example, school can refer to a building or an institution.",
"In contrast, homonymous senses are unrelated: for example, a school of fish.",
"All of the above senses of school are also lexicalised established uses that a speaker would have committed to memory, rather than inferring from context.",
"I will discuss context-dependent meaning in 5.3, and focus here on lexicalised meaning.",
"One goal for a semantic model is to capture how a word can have a range of polysemous senses.",
"One solution is to learn a separate representation for each sense (for example: Schutze, 1998; Rapp, 2004; Li and Jurafsky, 2015; for a survey, see: Camacho-Collados and Pilehvar, 2018).",
"However, deciding on a discrete set of senses is difficult, and practical efforts at compiling dictionaries have not provided a solution.",
"Indeed, the lexicographer Sue Atkins bluntly stated, I don't believe in word senses.",
"13 Although the sense of a word varies across usages, there are many ways that we could cluster usages into a discrete set of senses, a point made by many authors (for example: Sparck-Jones, 1964; Kilgarriff, 1997, 2007; Hanks, 2000; Erk, 2010).",
"To quantify this intuition, Erk et al. (2009, 2013) produced the WSsim and Usim datasets, where annotators judged the similarity between dictionary senses, and the similarity between individual usages, respectively.",
"McCarthy et al. (2016) quantify clusterability in USim, showing that for some words, usages cannot be clustered into discrete senses.",
"A good semantic model should therefore be able to capture variation in meaning without resorting to finite sense inventories.",
"We could instead learn a single representation for all polysemous senses together.",
"Indeed, Ruhl (1989) argues that even frequent terms with many apparent senses, such as bear and hit , can be analysed as having a single underspecified meaning, with the apparent diversity of senses explainable from context.",
"The challenge is then to represent such a meaning without overgeneralising to cases where the word wouldn't be used, and to model how meanings are specialised in context.",
"The second half of this challenge will be discussed in 5.3.",
"I have already argued in previous sections that we should move away from representing each word as a single vector.",
"As discussed in 4.1, words 13 Kilgarriff (1997) and Hanks (2000) both quote Atkins.",
"can be represented with distributions, and such an approach has also been applied to modelling word senses.",
"For example, Athiwaratkun and Wilson (2017) use a mixture of Gaussians, extending Vilnis and McCallum's model to allow multiple senses.",
"However, this ultimately models a fixed number of senses (one for each Gaussian).",
"In principle, a distribution could be parametrised in a more general way, moving beyond finite mixture models.",
"In the type-driven tensorial framework (see 2), Piedeleu et al. (2015) use mixed quantum states, similarly to Balkr's approach (see 4.1).",
"Although they only propose this approach for homonymy, it could plausibly be used for polysemy as well.",
"If a word is represented by a region, or by a classifier, we don't have the problem of finite sense inventories, as long as the region or classifier is parametrised in a general enough way for example, a multi-layer neural net classifier, rather than a finite mixture of simple classifiers.",
"In the previous two sections, I discussed meanings of single words.",
"However, words do not exist on their own, and one goal for semantic model is to represent relations between them.",
"A classic relation is hyponymy , 14 which describes when one term (the hyperonym or hypernym ) has a more general meaning than another (the hyponym ).",
"Words that share a hyperonym are called co-hyponyms .",
"In a vector space model, it is not clear how to say if one vector is more general than another.",
"One idea is that a hyperonym should occur in all the contexts of its hyponyms.",
"This is known as the Distributional Inclusion Hypothesis (DIH; Weeds et al., 2004; Geffet and Dagan, 2005).",
"Using this idea and tools from information retrieval, Kotlerman et al. (2009, 2010) define the balAPinc measure of hyponymy.",
"Herbelot and Ganesalingam (2013) view a vector as a distribution over contexts, using KL-divergence to measure hyponymy.",
"Rei (2013) gives an overview of hyponymy measures, and proposes a weighted cosine measure.",
"For embeddings, the motivation for such measures is less direct, but dimensions can be seen as combinations of contexts.",
"Indeed, Rei and Briscoe (2014) find embeddings perform almost as well as count vectors.",
"14 This is also referred to as lexical entailment , making a link with logic (see 5.2).",
"Other relations include antonymy, meronymy, and selectional preferences.",
"For reasons of space, I have decided to discuss one relation in detail, rather than many relations briefly.",
"Hyponymy could be considered basic.",
"However, a speaker is likely to choose an expression with a degree of generality appropriate for the discourse (the Maxim of Quantity; Grice, 1967), and hence the DIH can be questioned.",
"Rimell (2014) points out that some contexts are highly specific.",
"For example, mane is a likely context of lion but not animal , even though lion is a hyponym of animal , contradicting the DIH.",
"Rimell instead proposes measuring hyponymy using coherence (formalised using pointwise mutual information): the contexts of a general term minus those of a hyponym are coherent, but the reverse is not true.",
"Moving away from count vectors and pre-trained embeddings, there are other options.",
"One is to build the hyponymy relation into the definition of the space.",
"For example, Vendrov et al. (2016) use nonnegative vectors, where one vector is a hyponym of another if it has a larger value in every dimension.",
"They train a model on WordNet (Miller, 1995; Fellbaum, 1998).",
"Building on this, Li et al. (2017) learn from both WordNet and text.",
"However, for a hierarchy like WordNet, there are exponentially more words lower down.",
"This cannot be embedded in Euclidean space without words lower in the hierarchy being increasingly close together.",
"Nickel and Kiela (2017) propose using hyperbolic space, where volume increases exponentially as we move away from any point.",
"T , ifrea et al. (2019) build on this, adapting Glove (Pennington et al., 2014) to learn hyperbolic embeddings from text.",
"However, this approach does not generalise to non-tree hierarchies for example, WordNet gives bass as a hyponym of singer , voice , melody , pitch , and instrument .",
"Requiring that bass is represented close to all its hyperonyms also forces them close together (by the triangle in-equality), which we may not want, since they are in distant parts of the hierarchy.",
"Alternatively, we can view hyponymy as classification, and simply use distributional vectors to provide input features (for example: Weeds et al., 2014; Rei et al., 2018).",
"However, under this view, hyponymy is an opaque relationship, making it difficult to analyse why one vector is classified as a hyponym of another.",
"Indeed, Levy et al. (2015) find that such classifiers mainly learn which words are common hyperonyms.",
"Moving away from vector representations, it can be easier to define hyponymy.",
"Erk (2009a,b) and Gardenfors (2014, 6.4) discuss how using regions of space provides a natural definition: P is a hyponym of Q if the region for P is contained in the region for Q .",
"Bouraoui et al. (2017) and Vilnis et al. (2018) use this idea for knowledge base completion, and Bouraoui et al. (2020) build on this, using corpus data to identify conceptual neighbours.",
"In the type-driven tensorial framework (see 2), Bankova et al. (2019) and Lewis (2019) model words as normalised positive operators, with hyponymy defined in terms of subspaces (eigenspaces).",
"Probability distributions also allow us to define hyponymy, but it is harder than for regions, since a distribution over a smaller region has higher probability density.",
"Vilnis and McCallum (2015) propose using KL-divergence.",
"Athiwaratkun and Wilson (2018) propose a thresholded KL-divergence.",
"In the type-driven tensorial framework, Balkr (2014) proposes using a quantum version of KL-divergence, which can be extended to phrases (Balkr et al., 2015; Sadrzadeh et al., 2018).",
"However, detecting hyponymy from corpus data remains challenging.",
"Even in recent shared tasks (Bordea et al., 2016; Camacho-Collados et al., 2018), many systems use pattern matching, following Hearst (1992).",
"For example, a string of the form X such as Y suggests that Y is a hyponym of X .",
"In the above shared tasks, the best performing systems did not rely solely on distributional vectors, but used pattern matching as well.",
"Although much work remains to be done in developing learning algorithms which can detect hyponymy, I believe that a region-based approach is the most promising.",
"Not only does it give a simple definition, but it is also motivated for other reasons, discussed elsewhere in this paper.",
"In the previous section, I discussed meaning at the level of words.",
"I now turn to challenges in representing meaning at the level of sentences.",
"Language is productive a fluent speaker can understand a completely new sentence, as long as they know each word and each syntactic construction in the sentence.",
"One goal for a semantic model is to be able to derive the meaning of a sentence from its parts, so it can generalise to new combinations.",
"This is known as compositionality .",
"15 15 Kartsaklis et al. (2013) discuss how composition is often conflated with disambiguation , since composing ambiguous expressions often disambiguates them.",
"Disambiguation can be seen as a kind of contextualisation or context dependence , For vector space models, the challenge is how to compose word vectors to construct phrase representations.",
"If we represent both words and phrases in the same vector space, the challenge is to find a composition function that maps a pair of vectors to a new vector.",
"In the general case, this must be sensitive to word order, since changing word order can change meaning.",
"Mitchell and Lapata (2008, 2010) compare a variety of such functions, but find that componentwise multiplication performs best, despite being commutative, and hence insensitive to word order.",
"The effectiveness of componentwise multiplication and addition has been replicated many times (for example: Baroni and Zamparelli, 2010; Blacoe and Lapata, 2012; Rimell et al., 2016; Czarnowska et al., 2019).",
"However, it is unclear how to adapt it to take word order into account, and Polajnar et al. (2014) show that performance degrades with sentence length.",
"Alternatively, we can use a sentence space distinct from the word space.",
"This is often done with a task-based perspective words are combined into sentence representations, which are useful for solving some task.",
"For example, the final state of an RNN can be seen as a representation of the whole sequence.",
"To make the composition more linguistically informed, the network can be defined to follow a tree structure, rather than linear order (for example: Socher et al., 2010, 2012; Tai et al., 2015), or even to learn latent tree structure (for example: Dyer et al., 2016; Maillard and Clark, 2018).",
"Alternatively, a sequence of token representations can be combined using attention, which calculates a weighted sum, as in a Transformer architecture (Vaswani et al., 2017).",
"Regardless of architecture, the model can be optimised either for a supervised task, such as machine translation (for example: Cho et al., 2014), or for an unsupervised objective, as in an autoencoder (for example: Hermann and Blunsom, 2013) or language model (for example: Peters et al., 2018; Devlin et al., 2019).",
"If we take a task-based perspective, it is difficult to know if the representations will transfer to other tasks.",
"In fact, Changpinyo et al. (2018) find that for some combinations of tasks, training on one task can be harmful for another.",
"which I discuss in 5.3.",
"The focus in this section is on deriving semantic representations for larger expressions.",
"compose representations based on argument structure.",
"16 Polajnar et al. (2015) explore sentence spaces with dimensions defined by co-occurrences.",
"However, a weakness with the above approaches is that they map sentences to a finite-dimensional space.",
"As we increase sentence length, the number of sentences with distinct meanings increases exponentially.",
"For example, consider relative clauses: the dog chased the cat ; the dog chased the cat which caught the mouse ; and so on.",
"To keep these meanings distinct, we have two options.",
"If the meanings must be a certain distance apart, the magnitudes of sentence vectors need to increase exponentially with sentence length, so there is enough space to distinguish them.",
"17 Alternatively, if the meanings can be arbitrarily close, we need to record each dimension to a high precision in order to distinguish the meanings.",
"The fine-grained structure of the space then becomes important, but small changes to model parameters (such as updates during training) would cause drastic changes to this structure.",
"I do not know any work exploring either option.",
"Otherwise, we are forced to view sentence vectors as lossy compression.",
"18 As Mooney (2014) put it: You can't cram the meaning of a whole %&!$# sentence into a single $&!#* vector!",
"Although compression can be useful for many tasks, full and detailed semantic representations also have their place.",
"This is particularly important at a discourse level: it would be absurd to represent, as vectors of the same dimensionality, both a five-word sentence and the whole English Wikipedia.",
"However, this leaves open the question of how we should represent sentence meaning.",
"In the following section, I turn to logic as a guide.",
"Sentences can express complex thoughts, and build chains of reasoning.",
"Logic formalises this, and one goal for a semantic model is to support the logical notions of truth (discussed in 3.2), and entailment (one proposition following from another).",
"Vectors do not have logical structure, but can still 16 Zanzotto et al. (2015) show how sentence similarity in this framework decomposes in terms of similarity of corresponding parts, because composition and dot products are linear.",
"17 This can be formalised information-theoretically.",
"Consider sending a message as a D -dimensional vector, through a noisy channel.",
"If there is an upper bound K to the vector's magnitude, the channel has a finite channel capacity .",
"The capacity scales as KD , which is only polynomial in K .",
"18 This conclusion has been drawn before (for example: Goodfellow et al., 2016, p. 370), but my argument makes the conditions more precise.",
"be used to provide features for a logical system, for example if entailment is framed as classification: given a premise and hypothesis , the task is to decide if the premise entails the hypothesis, contradicts it, or neither.",
"Datasets include SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018).",
"However, it is difficult to analyse approaches that do not use an explicit logic.",
"In fact, Gururan-gan et al. (2018) suggest that high performance may be due to annotation artifacts: only using the hypothesis, they achieve 67% on SNLI and 53% on MultiNLI, much higher than the majority class baseline (34% and 35%, respectively).",
"Performance on such datasets may therefore overestimate the ability of neural models to perform inference.",
"To explicitly represent logical structure, there are a few options.",
"One is to build a hybrid system, combining a vector space with a logic.",
"For example, Herbelot and Vecchi (2015) aim to give logical interpretations to vectors.",
"They consider a number of properties (such as: is edible , has a handle , made of wood ), and for each, they learn a mapping from vectors to values in [0 , 1] , where 0 means the property applies to no referents, and 1 means it applies to all referents.",
"This is an interesting way to probe what information is available in distributional vectors, but it is unclear how it could be generalised to deal with individual referents (rather than summarising them all), or to deal with complex propositions (rather than single properties).",
"Garrette et al. (2011) and Beltagy et al. (2016) incorporate a vector space model into a Markov Logic Network (Richardson and Domingos, 2006), a kind of probability logic.",
"If two predicates have high distributional similarity, they add a probabilistic inference rule saying that, if one predicate is true of an entity, the other predicate is likely to also be true.",
"This allows us to use distributional vectors in a well-defined logical model, but it assumes we can interpret similarity in terms of inference (for discussion, see: Erk, 2016).",
"As argued in 3 above, pre-trained vectors may have already lost information, and in the long term, it would be preferable to learn logical representations directly.",
"Lewis and Steedman (2013) use a classical logic, and cluster predicates that are observed to hold of the same pairs of named entities for example, write ( Rowling , Harry Potter ) and author ( Rowling , Harry Potter ).",
"This uses corpus data directly, rather than pre-trained vectors.",
"However, it would need to be generalised to learn from arbitrary sentences, and not just those involving named entities.",
"A second option is to define a vector space with a logical interpretation.",
"Grefenstette (2013) gives a logical interpretation to the type-driven tensorial framework (see 2), where the sentence space models truth values, and the noun space models a domain of N entities.",
"However, Grefenstette shows that quantification would be nonlinear, so cannot be expressed using tensor contraction.",
"Hedges and Sadrzadeh (2019) provide an alternative account which can deal with quantifiers, but at the expense of noun dimensions corresponding to sets of entities, so we have 2 N dimensions for N entities.",
"Copestake and Herbelot (2012) propose that dimensions could correspond to logical expressions being true of an entity in a situation.",
"However, this requires generalising from an actual distribution (based on observed utterances) to an ideal distribution (based on truth of logical expressions).",
"They do not propose a concrete algorithm, but they discuss several challenges, and suggest that grounded data might be necessary.",
"In this vein, Kuzmenko and Herbelot (2019) use the Visual Genome dataset (Krishna et al., 2017) to learn vector representations with logically interpretable dimensions, although these vectors are not as expressive as Copestake and Herbelot's ideal distributions.",
"Finally, a third option is to learn logical representations instead of vectors.",
"For example, in my own work I have represented words as truth-conditional functions that are compatible with first-order logic (Emerson and Copestake, 2017b; Emerson, 2020b).",
"Since referents are not observed in distributional semantics, this introduces latent variables that make the model computationally expensive, although there are ways to mitigate this (Emerson, 2020a).",
"Despite the computational challenges, I believe the right approach is to learn a logically interpretable model, either by defining a vector space with logical structure, or by directly using logical representations.",
"However, an important question is what kind of logic to use.",
"I argued in 4.1 that probabilities of truth and fuzzy truth values can capture vagueness, and there are corresponding logics.",
"In probability logic, propositions have probabilities of being true or false, with a joint distribution for the truth values of all propositions (for an introduction, see: Adams, 1998; Demey et al., 2013).",
"In fuzzy logic, propositions have fuzzy truth values, and classical logical operators (such as: , , ) are replaced with fuzzy versions (for an introduction, see: Hajek, 1998; Cintula et al., 2017).",
"Fuzzy operators act directly on truth values for example, given the fuzzy truth values of p and q , we can calculate the fuzzy truth value of p q .",
"In contrast, in probability logic, given probabilities of truth for p and q , we cannot calculate the probability of truth for p q , unless we know the joint distribution.",
"A problem with fuzzy logic, observed by Fine (1975), comes with propositions like p p .",
"For example, suppose we have a reddish orange object, so the truth of red and orange are both below 1.",
"Intuitively, both red or not red and red or orange should definitely be true.",
"However, in fuzzy logic, they could have truth below 1.",
"This makes probability logic more appealing than fuzzy logic.",
"19 Furthermore, there are well-developed frameworks for probabilistic logical semantics (for example: Goodman and Lassiter, 2015; Cooper et al., 2015), which a probabilistic distributional semantics could connect to, or draw inspiration from.",
"The flipside of compositionality is context dependence : the meaning of an expression often depends on the context it occurs in.",
"For example, a small elephant is not a small animal , but a large mouse is the meanings of small and large depend on the nouns they modify.",
"One goal for a semantic model is to capture how meaning depends on context.",
"20 Following Recanati (2012), we can distinguish standing meaning , the context-independent meaning of an expression, and occasion meaning , the context-dependent meaning of an expression in a particular occasion of use.",
"21 However, every usage occurs in some context, so a standing meaning must be seen as an abstraction across usages, rather than a usage in a null context (for discussion, see: Searle, 1980; Elman, 2009).",
"One approach is to treat a distributional vector as a standing meaning, and modify it to produce occasion meanings.",
"For example, vectors could be modified according to syntactic or semantic dependencies (for example: Erk and Pado, 2008; Thater et al., 2011; Dinu et al., 2012), or even chains of 19 H ajek et al. (1995) prove that fuzzy logic can be used to provide upper and lower bounds on probabilities in a probability logic, giving it a different motivation.",
"20 Ultimately, this must include dependence on real-world context.",
"Even the intuitive conclusion that a large mouse is a small animal depends on the implicit assumption that you and I are both humans, or at least, human-sized.",
"From the perspective of an ant, a mouse is large animal.",
"21 This terminology adapts Quine (1960).",
"dependencies (for example: Weir et al., 2016).",
"This mapping from standing vectors to occasion vectors can also be trained (for example: Czarnowska et al., 2019; Popa et al., 2019).",
"Large language models such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) can also be interpreted like this these models map a sequence of input embeddings to a sequence of contextualised embeddings, which can be seen as standing meanings and occasion meanings, respectively.",
"Alternatively, standing meanings and occasion meanings can be represented by different kinds of object.",
"Erk and Pado (2010) represent a standing meaning as a set of vectors (each derived from a single sentence of the training corpus), and an occasion meaning is a weighted sum of these vectors.",
"For a probabilistic model, calculating an occasion meaning can be cast as Bayesian inference, conditioning on the context.",
"This gives us a well-understood theoretical framework, making it easier to generalise a model to other kinds of context.",
"Dinu and Lapata (2010) interpret a vector as a distribution over latent senses, where each component is the probability of a sense.",
"Given probabilities of generating context words from latent senses, we can then condition the standing distribution on the context.",
"However this model relies on a finite sense inventory, which I argued against in 4.2.",
"Lui et al. (2012) and Lau et al. (2012, 2014) use LDA (Blei et al., 2003), where an occasion meaning is a distribution over context words (vary-ing continuously as topic mixtures), and a standing meaning is a prior over such distributions.",
"22 A separate model is trained for each target word.",
"Chang et al. (2014) add a generative layer, allowing them to train a single model for all target words.",
"However, a single sense is chosen in each context, giving a finite sense inventory.",
"Skip-gram can be interpreted as generating context words from a target word.",
"While we can see an embedding as a standing meaning, nothing can be seen as an occasion meaning.",
"Brazinskas et al. (2018) add a generative layer, generating a latent vector from the target word, then generating context words from this vector.",
"We can see a latent vector as an occasion meaning, and a word's distribution over latent vectors as a standing meaning.",
"occasion meanings by conditioning on the context (Emerson and Copestake, 2017b), but in contrast to the above approaches, standing meanings are truth-conditional functions (binary classifiers), which I have argued for elsewhere in this paper.",
"A common thread among all of the above sections is that reaching our semantic goals requires structure beyond representing meaning as a point in space.",
"In particular, it seems desirable to represent the meaning of a word as a region of space or as a classifier, and to work with probability logic.",
"However, there is a trade-off between expressiveness and learnability: the more structure we add, the more difficult it can be to work with our representations.",
"To this end, there are promising neural architectures for working with structured data, such dependency graphs (for example: Marcheggiani and Titov, 2017) or logical propositions (for example: Rocktaschel and Riedel, 2017; Minervini et al., 2018).",
"To mitigate computationally expensive calculations in probabilistic models, there are promising new techniques such as amortised variational inference, used in the Variational Autoencoder (Kingma and Welling, 2014; Rezende et al., 2014; Titsias and Lazaro-Gredilla, 2014).",
"My own recent work in this direction has been to develop the Pixie Autoencoder (Emerson, 2020a), and I look forward to seeing alternative approaches from other authors, as the field of distributional semantics continues to grow.",
"I hope that this survey paper will help other researchers to develop the field in a way that keeps long-term goals in mind.",
"This paper is based on chapter 2 of my PhD thesis (Emerson, 2018).",
"For invaluable advice on the structure and framing of that chapter, and therefore also of this paper, I want to thank my PhD supervisor Ann Copestake.",
"I would also like to thank my PhD examiners, Katrin Erk and Paula Buttery, for feedback on that chapter, as well as Emily M. Bender, Guy Aglionby, Andrew Caines, and the NLIP reading group in Cambridge, for feedback on earlier drafts of this paper.",
"I would like to thank ACL reviewers 1 & 3 for pointing out areas that were unclear, and reviewer 2 for their kind praise.",
"I am supported by a Research Fellowship at Gonville & Caius College, Cambridge."
] | [
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other"
] |
[
"We propose a novel manifold based geometric approach for learning unsupervised alignment of word embeddings between the source and the target languages.",
"Our approach formulates the alignment learning problem as a domain adaptation problem over the manifold of doubly stochastic matrices.",
"This viewpoint arises from the aim to align the second order information of the two language spaces.",
"The rich geometry of the doubly stochastic manifold allows to employ efficient Riemannian conjugate gradient algorithm for the proposed formulation.",
"Empirically, the proposed approach outperforms state-of-the-art optimal transport based approach on the bilingual lexicon induction task across several language pairs.",
"The performance improvement is more significant for distant language pairs.",
"Learning bilingual word embeddings is an important problem in natural language processing (Mikolov et al., 2013; Faruqui and Dyer, 2014; Artetxe et al., 2016; Conneau et al., 2018), with usage in cross-lingual information retrieval (Vulic and Moens, 2015), text classification (Wan et al., 2011; Klementiev et al., 2012), machine translation (Artetxe et al., 2018c) etc.",
"Given a source-target language pair, the aim is to represent the words in both languages in a common embedding space.",
"This is usually achieved by learning a linear function that maps word embeddings of one language to the embedding space of the other language (Mikolov et al., 2013).",
"Several works have focused on learning such bilingual mapping in supervised setting, using a bilingual dictionary during the training phase (Artetxe et al., 2018a; Joulin et al., 2018; Jawanpuria et al., 2019).",
"Recently, unsupervised bilingual word embeddings have also been explored (Zhang et al., 2017a,b; Conneau et al., 2018; Artetxe et al., 2018b; Hoshen and Wolf, 2018; Grave et al., 2019; Alvarez-Melis and Jaakkola, 2018; Zhou et al., 2019; Jawanpuria et al., 2020).",
"Learning unsupervised cross-lingual mapping may be viewed as an instance of the more general unsupervised domain adaptation problem (Ben-David et al., 2007; Gopalan et al., 2011; Sun et al., 2016; Mahadevan et al., 2018).",
"The latter fundamentally aims at aligning the input feature (em-beddings) distributions of the source and target domains (languages).",
"In this paper, we take this point of view and learn cross-lingual word alignment by finding alignment between the second order statistics of the source and the target language embedding space.",
"We formulate a novel optimization problem on the set of doubly stochastic matrices.",
"The objective function consists of matching covariances of words from source to target languages in a least-squares sense.",
"For optimization, we exploit the fact that the set of doubly stochastic matrices has rich geometry and forms a Riemannian manifold (Douik and Hassibi, 2019).",
"The Riemannian optimization framework (Absil et al., 2008; Edelman et al., 1998; Smith, 1994) allows to propose a computationally efficient conjugate gradient algorithm (Douik and Hassibi, 2019).",
"Experiments show the efficacy of the proposed approach on the bilingual lexicon induction benchmark, especially on the language pairs involving distant languages.",
"We introduce the bilingual word alignment setup followed by a discussion on domain adaptation approaches.",
"Bilingual alignment.",
"Let X R n d and Z R n d be d -dimensional word embeddings of n words of the source and the target languages, respectively.",
"The aim is to learn a linear operator W : R d R d that best approximates source embeddings in the target language space.",
"In the supervised setup, a list of source words and their translations in the target language is provided.",
"This is represented by an alignment matrix Y of size n n , where Y ij = 1 if j -th word in the target language is a translation of the i -th word in the source language and Y ij = 0 otherwise.",
"A standard way to learn orthogonal W is by solving the orthogonal Procrustes problem (Artetxe et al., 2016; Smith et al., 2017), i.e., min W R d d (cid:107) XW YZ (cid:107) 2Fro subject to W (cid:62) W = I , (1) where (cid:107)(cid:107) Fro is the Frobenius norm and I is the identity matrix.",
"Problem (1) has the closed-form solution W (cid:63) = UV (cid:62) , where U and V are the respective left and right orthogonal factors of the singular value decomposition of X (cid:62) YZ (Schonemann, 1966).",
"In the unsupervised setting, Y is additionally unknown apart from W .",
"Most unsupervised works (Zhang et al., 2017b; Artetxe et al., 2018b; Grave et al., 2019; Conneau et al., 2018) tackle this challenge by learning Y and W jointly.",
"However, their performance rely on finding a good initialization candidate for the alignment matrix Y (Zhang et al., 2017b; Grave et al., 2019; Alaux et al., 2019; Jawanpuria et al., 2020).",
"Performing optimization over the set of binary matrices, Y { 0 , 1 } n n , to learn the bilingual alignment matrix is computationally hard.",
"Hence, some works (Zhang et al., 2017b; Xu et al., 2018) view the source and the target word embedding spaces as two distributions and learn Y as the transformation that makes the two distributions close.",
"This viewpoint is based on the theory of optimal transport (Villani, 2009; Peyre and Cuturi, 2019).",
"Y is, thus, modeled as a doubly stochastic matrix: the entries in Y [0 , 1] and each row/column sums to 1 .",
"Permutation matrices are extreme points in the space of doubly stochastic matrices.",
"Alvarez-Melis and Jaakkola (2018) propose learning the doubly stochastic Y as a transport map between the metric spaces of the words in the source and the target languages.",
"They optimize the Gromov-Wasserstein (GW) distance, which measures how distances between pairs of words are mapped across languages.",
"For learning Y , they propose to min Y DS n Trace( Y (cid:62) CXYCZ ) , (2) where DS n := { Y R n n : Y 0 , Y (cid:62) 1 = 1 and Y1 = 1 } is the set of n n doubly stochastic matrices, Y 0 implies entry-wise non-negativity, 1 is a column vector of ones, and CX = XX (cid:62) and CZ = ZZ (cid:62) are n n word covariance matrices of source and target languages, respectively.",
"An iterative scheme is proposed for solving (2), where each iteration involves solving an optimal transport problem with entropic regularization (Peyre et al., 2016; Peyre and Cuturi, 2019).",
"The optimal transport problem is solved with the popular Sinkhorn algorithm (Cuturi, 2013).",
"It should be noted that the GW approach (2) only learns Y .",
"The linear operator to map source language word embedding to the target language embedding space can then be learned by solving (1).",
"Domain adaptation.",
"Domain adaption refers to transfer of information across domains and has been an independent research of interest in many fields including natural language processing (Daume III, 2007; Borgwardt et al., 2006; Adel et al., 2017; Baktashmotlagh et al., 2013; Fuku-mizu et al., 2007; Wang et al., 2015; Prettenhofer and Stein, 2011; Wan et al., 2011; Sun et al., 2016; Mahadevan et al., 2018; Ruder, 2019).",
"One modeling of interest is by Sun et al. (2016), who motivate a linear transformation on the features in source and target domains.",
"In (Sun et al., 2016), the linear map A R d d is solved by min A R d d (cid:13)(cid:13) A (cid:62) DXA DZ (cid:13)(cid:13) 2 Fro , (3) where D 1 and D 2 are d d are feature covariances of source and target domains (e.g., DX = X (cid:62) X and DZ = Z (cid:62) Z ), respectively.",
"Interestingly, (3) has a closed-form solution and shows good performance on standard benchmark domain adaptation tasks (Sun et al., 2016).",
"The domain adaptation solution strategies of (Sun et al., 2016; Mahadevan et al., 2018) can be motivated directly for the cross-lingual alignment problem by dealing with word covariances instead of feature covariances.",
"However, the cross-lingual word alignment problem additionally has a bidirectional symmetry: if Y aligns X to Z , then Y (cid:62) aligns Z to X .",
"We exploit this to propose a bi-directional domain adaptation scheme based on (3).",
"The key idea is to adapt the second order information of the source and the target languages into each other's domain.",
"We formulate the above as follows: min Y DS n (cid:107) Y (cid:62) CXY CZ (cid:107) 2Fro + (cid:107) YCZY (cid:62) CX (cid:107) 2Fro , (4) The first term in the objective function (cid:107) Y (cid:62) CXY CZ (cid:107) 2Fro adapts the domain of X (source) into Z (target).",
"Equivalently, minimizing only the first term in the objective function of (4) leads to row indices in Y (cid:62) X aligning closely with the row indices of Z .",
"Similarly, minimizing only the second term (cid:107) YCZY (cid:62) CX (cid:107) 2Fro adapts Z (now treated as the source domain) into X (now treated as the target domain), which means that the row indices YZ and X are closely aligned.",
"Overall, minimizing both the terms of the objective function allows to learn the alignment matrix Y from X to Z and Y (cid:62) from Z to X simultaneously.",
"Empirically, we observe that bi-directionality acts as a self regularization, leading to optimization stability and better generalization ability.",
"The differences of the proposed formulation (4) with respect to the GW formulation (2) are two fold.",
"First, the formulation (2) maximizes the inner product between Y (cid:62) CXY and CZ .",
"This inner product is sensitive to differences in the norms of Y (cid:62) CXY and CZ .",
"The proposed approach circumvents this issue since (4) explicitly penalizes entry-wise mismatch between Y (cid:62) CXY and CZ .",
"Second, the GW algorithm for (2) is sensitive to choices of the entropic regularization parameter (Alvarez-Melis and Jaakkola, 2018; Peyre and Cuturi, 2019).",
"In our case, no such regularization is required.",
"Most recent works that solve optimal transport problem by optimizing over doubly stochastic matrices employ the Sinkhorn algorithm with entropic regularization (Cuturi, 2013; Peyre et al., 2016; Peyre and Cuturi, 2019).",
"In contrast, we exploit the Riemannian manifold structure of the set of doubly stochastic matrices ( DS n ) recently studied in (Douik and Hassibi, 2019).",
"DS n is endowed with a smooth Fisher information metric (inner product) that makes the manifold smooth (Douik and Hassibi, 2019; Sun et al., 2015; Lebanon and Lafferty, 2004).",
"In differential geometric terms, DS n has the structure of a Riemannian submanifold.",
"This makes computation of optimization-related ingredients, e.g., gradient and Hessian of a function, projection operators, and retraction operator, straightforward.",
"Leveraging the versatile Riemannian optimization framework (Absil et al., 2008; Edelman et al., 1998; Smith, 1994), the constrained problem (4) is conceptually transformed to an unconstrained problem over the nonlinear manifold.",
"Consequently, most unconstrained optimization algorithms generalize well to manifolds.",
"We solve (4) using the Riemannian conjugate gradient algorithm (Absil et al., 2008; Douik and Hassibi, 2019).",
"There exist several manifold optimization toolboxes such as Manopt (Boumal et al., 2014), Pymanopt (Townsend et al., 2016), Manopt.jl (Bergmann, 2019), McTorch (Meghwanshi et al., 2018) or ROPTLIB (Huang et al., 2016), which have scalable off-the-shelf generic implementation of Riemannian algorithms.",
"We use Manopt for our experiments, where we only need to provide the objective function (4) and its derivative with respect to Y .",
"The manifold optimization related ingredients are handled by Manopt internally.",
"The computational cost per iteration of the algorithm is O ( n 2 ) , which is similar to that of GW (Alvarez-Melis and Jaakkola, 2018).",
"We term our algorithm as M anifold B ased A lignment (MBA) algorithm.",
"Our code is available at https://pratikjawanpuria.com/ publications/ .",
"We compare the proposed algorithm MBA with state-of-the-art GW alignment algorithm (Alvarez-Melis and Jaakkola, 2018) for the bilingual induction (BLI) task.",
"Both the algorithms use second order statistics (word covariance matrices) to learn the word alignment between two languages.",
"In our experimental setup, we first learn the word alignment between the source and the target languages and then compute cross-lingual mapping by solving the Procrustes problem (1).",
"For inference of nearest neighbors, we employ the cross-domain similarity local scaling (CSLS) similarity score (Conneau et al., 2018).",
"We report Precision @1 (P @1 ) as in (Alvarez-Melis and Jaakkola, 2018; Artetxe et al., 2018b) for the BLI task.",
"We show results on the MUSE dataset (Con-neau et al., 2018), which consists of fastText monolingual embeddings for different languages (Bo-janowski et al., 2017) and dictionaries between several languages (but mostly with English).",
"Follow-Method de-xx en-xx es-xx fr-xx it-xx pt-xx xx-de xx-en xx-es xx-fr xx-it xx-pt avg.",
"ing existing works (Artetxe et al., 2018b; Alvarez-Melis and Jaakkola, 2018; Alaux et al., 2019), the embeddings are normalized.",
"The MUSE dataset provides predefined thirty test bilingual dictionaries between six European languages: English (en), German (de), Spanish (es), French (fr), Italian (it), and Portuguese (pt) on which we evaluate the methods.",
"Additionally, we compute performance on the test dictionaries between English and twelve other languages: Arabic (ar), Bulgarian (bg), Czech (cs), Danish (da), Dutch (nl), Finnish (fi), Greek (el), Hindi (hi), Hungarian (hu), Polish (po), Russian (ru), and Turkish (tr).",
"Following Alvarez-Melis and Jaakkola (2018), we consider top n = 20 000 most frequent words in the vocabulary set for all the languages during the training stage.",
"The inference is performed on the the full vocabulary set.",
"For GW, we use the original codes shared by Alvarez-Melis and Jaakkola (2018) and follow their recommendations on tuning the entropic regularization parameter and scaling of covariance matrices CX and CZ .",
"As a practical implementation of MBA, we incrementally increase n starting from 1000 to 20 000 every fixed-number of iterations.",
"We begin by discussing the results on six close-by European languages in Table 1.",
"We observe that both MBA and GW perform similarly when the languages are related.",
"Hence, in the second set of experiments, we consider other European languages that are distant to English.",
"We observe from Table 2 that MBA outperforms GW, by an average BLI score of 6 points, in this challenging setting.",
"Table 3 reports results on language pairs involving English and three non-European languages.",
"We again observe that the proposed algorithm MBA performs significantly better than GW.",
"Overall, the experiments show the benefit of a geometric optimization framework.",
"Aligning the metric spaces of languages has a wide usage in cross-lingual applications.",
"A popular approach in literature is the Gromov-Wasserstein (GW) alignment approach (Memoli, 2011; Peyre et al., 2016; Alvarez-Melis and Jaakkola, 2018), which constructs a transport map by viewing the two embedding spaces as distributions.",
"In contrast, we have viewed unsupervised bilingual word alignment as an instance of the more general unsupervised domain adaptation problem.",
"In particular, our formulation allows search over the space of doubly stochastic matrices and induces bi-directional mapping between the source and target words.",
"Both are motivated solely from the language perspective.",
"The Riemannian framework allows to exploit the geometry of the doubly stochastic manifold.",
"Empirically, we observe that the proposed algorithm MBA outperforms the GW algorithm for learning bilingual mapping (Alvarez-Melis and Jaakkola, 2018), demonstrating the benefit of geometric optimization modeling."
] | [
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"The long short-term memory (LSTM) language model (LM) has been widely investigated for automatic speech recognition (ASR) and natural language processing (NLP).",
"Although excellent performance is obtained for large vocabulary tasks, tremendous memory consumption prohibits the use of LSTM LMs in low-resource devices.",
"The memory consumption mainly comes from the word embedding layer.",
"In this paper, a novel binarized LSTM LM is proposed to address the problem.",
"Words are encoded into binary vectors and other LSTM parameters are further binarized to achieve high memory compression.",
"This is the first effort to investigate binary LSTMs for large vocabulary language modeling.",
"Experiments on both English and Chinese LM and ASR tasks showed that binarization can achieve a compression ratio of 11.3 without any loss of LM and ASR performance and a compression ratio of 31.6 with acceptable minor performance degradation.",
"Language models (LMs) play an important role in natural language processing (NLP) tasks.",
"N-gram language models used to be the most popular language models.",
"Considering the previous N-1 words, N-gram language models predict the next word.",
"However, this leads to the loss of long-term dependencies.",
"The sample space size increases exponentially as N grows, which induces data sparseness (Cao and Yu, 2017).",
"Neural network (NN) based models were first introduced into language modeling in 2003 (Ben-gio et al., 2003).",
"Given contexts with a fixed size, the model can calculate the probability distribution of the next word.",
"However, the problem of long-term dependencies still remained, beBoth authors contributed equally to this work.",
"cause the context window is fixed.",
"Currently, recurrent neural network (RNN) based models are widely used on natural language processing (NLP) tasks for excellent performance (Mikolov et al., 2010).",
"Recurrent structures in neural networks can solve the problem of long-term dependencies to a great extent.",
"Some gate based structures, such as long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent unit (GRU) (Chung et al., 2014) improve the recurrent structures and achieve state-of-the-art performance on most NLP tasks.",
"However, neural network models occupy tremendous memory space so that it is almost impossible to put the models into low-resource devices.",
"In practice, the vocabulary is usually very large.",
"So the memory consumption mainly comes from the embedding layers.",
"And, the word embedding parameters are floating point values, which adds to the memory consumption.",
"The first contribution in this paper is that a novel language model, the binarized embedding language model (BELM) is proposed to reduce the memory consumption.",
"Words are represented in the form of binarized vectors.",
"Thus, the consumption of memory space is significantly reduced.",
"Another contribution in the paper is that we binarize the LSTM language model combined with the binarized embeddings to further compress the parameter space.",
"All the parameters in the LSTM language model are binarized.",
"Experiments are conducted in language modeling and automatic speech recognition (ASR) rescoring tasks.",
"Our model performs well without any loss of performance at a compression ratio of 11.3 and still has acceptable results with only a minor loss of performance even at a compression ratio of 31.6.",
"Investigations are also made to evaluate whether the binarized embeddings lose information.",
"Experiments are conducted on word 2113 similarity tasks.",
"The results show the binarized embeddings generated by our models still perform well on the two datasets.",
"The rest of the paper is organized as follows, section 2 is the related work.",
"Section 3 explains the proposed language model and section 4 shows the experimental setup and results.",
"Finally, conclusions will be given in section 5 and we describe future work in section 6.",
"Nowadays, with the development of deep learning, neural networks have yielded good results in many areas.",
"However, neural networks may require tremendous memory space, making it difficult to run such models on low-resource devices.",
"Thus, it is necessary to compress neural networks.",
"In recent years, many methods of compressing neural networks have been proposed.",
"Pruning (Han et al., 2015) reduces the number of parameters of the neural network by removing all connections with the weights below a threshold.",
"Quantization (Han et al., 2015) clusters weights to several clusters.",
"A few bits are used to represent the neurons and to index a few float values.",
"Binarization is also a method to compress neural networks.",
"BNNs(Courbariaux et al., 2016) are binarized deep neural networks.",
"The weights and activations are constrained to 1 or 1 .",
"BNNs can drastically reduce memory size and replace most arithmetic operations with bit-wise operations.",
"Different from pruning and quantization, binarization does not necessarily require pre-training and can achieve a great compression ratio.",
"Many binarization methods have been proposed (Cour-bariaux et al., 2015, 2016; Rastegari et al., 2016; Xiang et al., 2017).",
"However, only a few (Hou et al., 2016; Edel and Koppe, 2016) are related to recurrent neural network.",
"(Hou et al., 2016) implements a character level binarized language model with a vocabulary size of 87.",
"However, they did not do a comprehensive study on binarized large vocabulary LSTM language models.",
"The RNN language model is proposed to deal with sequential data.",
"Due to the vanishing and exploding gradients problem, it is difficult for a RNN language model to learn long-term dependencies.",
"The LSTM, which strengthens the recurrent neural model with a gating mechanism, tackles this problem and is widely used in natural language processing tasks.",
"The goal of a language model is to compute the probability of a sentence ( x 1 , . . . , x N ) .",
"A typical method is to decompose this probability word by word.",
"(Hochreiter and Schmidhuber, 1997) proposed a Long Short-Term Memory Network, which can be used for sequence processing tasks.",
"Consider an one-layer LSTM network, where N is the length of the sentence, and x t is the input at the t -th moment.",
"y t is the output at the t -th moment, which is equal to x t +1 in a language model.",
"Denote h t and c t as the hidden vector and the cell vector at the t -th moment, which is used for representing the history of ( x 1 , ..., x t 1 ) .",
"h 0 and c 0 are initialized with zero.",
"Given x t , h t 1 and c t 1 , the model calculates the probability of outputting y t .",
"The first step of an LSTM language model is to extract the representation e t of the input x t from the embeddings W e .",
"Since x t is a one-hot vector, this operation can be implemented by indexing rather than multiplication.",
"After that, e t , along with h t 1 and c t 1 are fed into the LSTM cell.",
"The hidden vector h t and the cell vector c t can be computed according to: f t = sigmoid ( W f { h t 1 , e t } + b f ) i t = sigmoid ( W i { h t 1 , e t } + b i ) o t = sigmoid ( W o { h t 1 , e t } + b o ) c t = tanh ( W c { h t 1 , e t } + b c ) c t = f t c t 1 + i t c t h t = o t tanh ( c t ) (3) The word probability distribution at the t -th moment can be calculated by: P ( y t | x 1 , ..., x t ) = p t = softmax( W y h t ) (4) The probability of taking y t as the output at the t -th moment is: p y t = p t y t (5) 2114 3.2 Binarized Embedding Language Model The binarized embedding language model (BELM) is a novel LSTM language model with binarized input embeddings and output embeddings.",
"For a one-layer LSTM language model with a vocabulary size of V , embedding and hidden layer size of H .",
"The size in bytes of the input embeddings, the output embeddings, and the LSTM cells are 4 V H , 4 V H and 32 H 2 + 16 H .",
"When V is much larger than H , which is often the case for language models, the parameters of the input embeddings and the output embeddings occupy most of the space.",
"If the embeddings of the input layer and the output layer are binarized, the input layer and the output layer will only take 1 / 32 of the original memory consumption, which can greatly reduce the memory consumption of running neural language model.",
"It is important to find good binary embeddings.",
"Directly binarizing well-trained word embeddings cannot yield good binarized representations.",
"Instead, we train good binary embeddings from scratch.",
"The training approach is similar to the methods proposed in (Courbariaux et al., 2016; Rastegari et al., 2016).",
"At run-time, the input embedding and the output embedding are binarized matrices.",
"However, at train-time, float versions of the embeddings, which are used for calculating the binarized version of embeddings, are still maintained.",
"In the propagation step, a deterministic function sign is used to binarize the float versions of the embeddings.",
"In the back-propagation step, the float versions of the embeddings are updated according to the gradient of the binarized embedding.",
"The derivative of the sign function is zero almost everywhere, and it is impossible to back-propagate through this function.",
"As introduced in (Hubara et al., 2016), a straight-through estimator is used to get the gradient.",
"Assume the gradient of the binarized weight C W b has been obtained, the gradient of the float version of the weight is: C W = C W b (7) A typical weight initialization method initializes each neuron's weights randomly from the Gaussian distribution N (0 , 1 /H ) .",
"This initialization approach can maximize the gradients and mitigate the vanishing gradients problem.",
"From this perspective, 1 or 1 is too large.",
"So, in practice, we binarize the embeddings to a smaller scale.",
"Although the weight is binarized to a floating point number, the matrix can also be saved one bit per neuron, as long as the fixed float value is memorized separately.",
"",
"(8) Since directly binarizing the input embeddings W e and the output embeddings W y will limit the scale of the embeddings, additional linear layers (without activation) are added behind the input embedding layer and in front of the output embedding layer to enhance the model.",
"Denote W be and W b y as the binarized weights corresponding to W e and W y .",
"Denote WT e and b T e , WT y and b T y as the weights and the biases of the first and the second linear layer.",
"The input of the LSTM e t and the word probability p t of the binarized embedding language model are calculated according to: e t = WT e ( W be x t ) + b T e p t = softmax ( W by ( WT y h t + b T y )) (9) The additional linear layer before the output embedding layer is very important for the binarized embedding language model, especially for low dimensional models.",
"Removing this layer will result in an obvious decrease in performance.",
"Subsection 3.2 explains how to binarize the embedding layer, but the LSTM network can also be binarized.",
"In a binarized LSTM language model, all the matrices in the parameters are binarized, which can save much more memory space.",
"Implementing the binarized linear layer is important for designing a binarized LSTM language model (BLLM).",
"In a binarized linear layer, there are three parameters, W , and b .",
"W is a matrix, and b are vectors.",
"The matrix W , which takes up most of the space in a linear layer, is binarized.",
"and b remain floating point values.",
"b is the bias of the linear layer, and is introduced to fix the scale problem of the binary matrix.",
"The forwardand back-propagation algorithms are shown in Algorithm 1 and Algorithm 2.",
"The structure of this linear layer is very similar to the structure of batch normalization (Ioffe and Szegedy, 2015), except the output of each dimension over the mini-batches is not normalized.",
"Batch normalization is hard to apply to a recurrent neural network, due to the dependency over entire sequences.",
"However, the structure of the batch normalization is quite useful.",
"Since binarizing W would fix the scale of the weight, additional free-dom is needed to overcome this issue.",
"The shift operation can rescale the output to a reasonable range.",
"Algorithm 2 The back-propagation of linear layer Input: input x , weights W , and b , binarized weight W b , temporary value s (calculated in the propagation period), the gradient of the output C y , learning rate , binary weight range",
"gradient of the weight C W , C , C b , update the weights 1: C b = C y 2: C = C y s exp ( ) 3: C s = C y exp ( ) , C W b = C s x , C W = C W b 4: C x = C s W b 5: update W , , b according to C W , C , C b with learning rate .",
"6: clamp( W , , ) 7: return Cx The structure of the input embeddings and the output embeddings of the binarized LSTM language model is similar to the binarized embedding language model.",
"The embeddings are binarized and additional linear layers are added after the input embedding layer and in front of the output embedding layer.",
"However, the additional linear layers are also binarized according to Algorithm 1 and Algorithm 2.",
"Denote the size of the vocabulary as V , and the size of the embedding and hidden layer as H .",
"The memory consumptions of a one-layer LSTM language model, BELM and BLLM are listed in Table 1.",
"For a language model, the vocabulary size is usually much larger than the hidden layer size.",
"The main memory consumption comes from the embedding layers, which require 8 V H bytes for an LSTM language model.",
"Binarized embeddings can reduce this term to 0 .",
"25 V H bytes.",
"Further compression of the LSTM can drop the coefficient of H 2 from 32 to 1 .",
"25 .",
"Our model is evaluated on the English Penn TreeBank (PTB) (Marcus et al., 1993), Chinese short message (SMS) and SWB-Fisher (SWB).",
"The Penn TreeBank corpus is a famous English dataset, with a vocabulary size of 10K and 4.8% words out of vocabulary (OOV), which is widely used to evaluate the performance of a language model.",
"The training set contains approximately 42K sentences with 887K words.",
"The Chinese SMS corpus is collected from short messages.",
"The corpus has a vocabulary size of about 40K.",
"The training set contains 380K sentences with 1931K words.",
"The SWB-Fisher corpus is an English corpus containing approximately 2.5M sentences with 24.9M words.",
"The corpus has a vocabulary size of about 30K.",
"hub5e is the dataset for the SWB ASR task.",
"We also evaluate the word embeddings produced by our models on two word similarity datasets.",
"The models are trained on the Text8 corpus to extract the word embeddings.",
"The Text8 corpus is published by Google and collected from Wikipedia.",
"Text8 contains about 17M words with a vocabulary size of about 47k.",
"The WordSimilarity-353(WS-353) Test Collection contains two sets of English word pairs along with 2116 human-assigned similarity judgments.",
"The collection can be used to train and test computer algorithms implementing semantic similarity measures.",
"A combined set (combined) is provided that contains a list of all 353 words, along with their mean similarity scores.",
"(Finkelstein et al., 2001)",
"The MEN dataset consists of 3,000 word pairs, randomly selected from words that occur at least 700 times in the freely available ukWaC and Wackypedia corpora combined (size: 1.9B and 820M tokens, respectively) and at least 50 times (as tags) in the open-sourced subset of the ESP game dataset.",
"In order to avoid picking unrelated pairs only, the pairs are sampled so that they represent a balanced range of relatedness levels according to a text-based semantic score (Bruni et al., 2014).",
"First, we conduct experiments on the PTB, SWB and Text8 corpora respectively to evaluate language modeling performance.",
"We use perplexity (PPL) as the metric to evaluate models of different sizes.",
"Then, the models are evaluated on ASR rescoring tasks.",
"Rescoring the 100-best sentences generated by the weighted finite state transducer (WFST), the model is evaluated by word error rate (WER).",
"Finally, we conduct experiments on word similarity tasks to evaluate whether the word embeddings produced by our models lose any information.",
"For traditional RNN based language models, the memory consumption mainly comes from the embedding layers (both input and output layers).",
"However, when the hidden layer size grows, the memory consumption of the RNN module also becomes larger.",
"So the total memory usage relates to both the vocabulary size and hidden layer size, as mentioned in section 3.4.",
"Experiments are conducted in language modeling to evaluate the model on the PTB, SWB, and SMS corpora respectively.",
"In language modeling tasks, we regularize the networks using dropout(Zaremba et al., 2014).",
"We use stochastic gradient descent (SGD) for optimization.",
"The batch size is set to 64.",
"For the PTB corpus, the dropout rate is tuned for different training settings.",
"For the SWB corpus, we do not use dropout technique.",
"For the SMS corpus, the dropout rate is set to 0.25.",
"We train models of different sizes on the three corpora and record the memory usage of the trained models.",
"The initial learning rate is set to 1.0 for all settings.",
"Since PTB is a relatively small dataset and the convergence rates of the BELM and the BLLM are slower than LSTM language model, we reduce the learning rate by half every three epochs if the perplexity on the validation set is not reduced.",
"For the other experiments, the learning rate is always reduced by half every epoch if the perplexity on the validation set is not reduced.",
"As introduced in section 3, the bias of the output embedding layer is omitted.",
"Adding bias term in the output embedding layer leads to small performance degradation in the BELM and the BLLM model, although it leads to a small improvement in the LSTM model.",
"This phenomenon may be related to optimization problems.",
"Because the total memory usage relates to both the vocabulary size and hidden layer size, the memory reduction on various corpora is quite different.",
"For our BELM model, the floating point embedding parameters are replaced by single bits, which could significantly reduce the memory usage.",
"On the PTB corpus, the BELM models even 2117 outperform the baseline LSTM LM.",
"The small model (500 LSTM units) has a relative PPL improvement of 4.1 % and achieves a compression ratio of 4.3 and the large model (1000 LSTM units) also has a relative PPL improvement of 4.1 % and achieves a compression ratio of 2.6.",
"On the SWB corpus, the BELM models still perform well compared with the baseline model and achieve compression ratios of 9.4 and 5.8 respectively for the small and large models.",
"On the SMS corpus, the BELMs model also gains relative PPL improvements of 0.2 % and 1.9 % , and achieves compression ratios of 11.3 and 7.1 respectively.",
"In summary, the BELM model performs as well as the baseline model both on English and Chinese corpora, and reduces the memory consumption to a large extent.",
"The BLLM model, however, does not outperform the baseline model, but still has acceptable results with a minor loss of performance.",
"Since both the LSTM model and the embeddings are binarized, the total compression ratio is quite sig-nificant.",
"The average compression ratio is about 32.0, so the memory consumption of the language model is significantly reduced.",
"We also study the performance of pruning the LSTM language model.",
"We prune each parameter matrix and the embedding layers with various pruning rates respectively, and fine-tune the model with various dropout rates.",
"In our experiments, pruning 75% parameter nodes hardly affects the performance.",
"However, if we try pruning more parameter nodes, the perplexity increases rapidly.",
"For example, for the English PTB dataset, when we prune 95% parameter nodes of the embedding layers of an LSTM language model (500 LSTM units), the perpexity will increase from 91.8 to 112.3.",
"When we prune 95% parameter nodes of an LSTM language model (500 LSTM units), the perplexity will increase from 91.8 to 132.3.",
"Therefore, the effect of pruning is not as good as binarization for the language modeling task.",
"Binarization can be considered as a special case of quantization, which quantizes the parameters to pairs of opposite numbers.",
"So, compared to normal quantization, binarization can achieve a better compression ratio.",
"In addition, for binarization, we do not need to determine the position of each unique values in advance.",
"Therefore, binarization is more flexible than quantization.",
"We then study the effect of extra binary linear layers in the BLLM.",
"The additional binary linear layer after the input embedding layer and the additional binary linear layer in front of the output embedding layer are removed respectively in this experiment.",
"We use well-trained embeddings to initialize the corresponding embedding layers and do the binarization using the method proposed in (Rastegari et al., 2016) when the additional binary linear layer is removed.",
"The perplexities are listed in Table 5.",
"No-i means no additional binary linear layer after the input embedding layer.",
"No-o means no additional binary linear layer in front of the output embedding layer.",
"No-io means no additional binary linear layers.",
"The experiment is conducted on the PTB corpus.",
"If the additional binary linear layer after the input embedding layer is removed, the performance does not drop, and even becomes better when the hidden layer size is 1000 .",
"Although the additional binary layer after the input embedding layer is removed, the float version of the input embeddings of BLLM no-i is initialized with well-trained embeddings, while the BLLM is not initialized with the well-trained embeddings.",
"We think initialization is the reason why the BLLM no-i performs comparatively to the BLLM.",
"We also observe a PPL increase of 1-2 points for BLLM no-i if the input embeddings are not pre-trained (not listed in the table).",
"This phenomenon prompts us to pretrain embeddings, which we leave to future work.",
"Once the additional binary linear layer in front of the output embedding layer is removed, the performance degradation is serious.",
"This shows that the output embeddings of the language model should not be directly binarized; the additional binary linear layer should be inserted to enhance the model's capacity, especially for low dimensional models.",
"Experiments are conducted on the ASR rescoring task to evaluate the model on the hub5e and SMS corpora.",
"Hub5e is a test dataset of the SWB corpus which we use for ASR rescoring tasks.",
"For the hub5e dateset, A VDCNN (Qian et al., 2016) 2118 (very deep CNN) model on the 300-hour task is applied as the acoustic model.",
"For the Chinese SMS dataset, the acoustic model is a CD-DNN-HMM model.",
"The weighted finite state transducer (WFST) is produced with a 4-gram language model.",
"Then our language models are utilized to rescore the 100-best candidates.",
"The models are evaluated by the metric of word error rate (WER).",
"Table 6 shows the results on ASR rescoring tasks.",
"The BELM model and BLLM model perform well both on the English and Chinese datasets.",
"The BELM model achieves an absolute 0.2 % WER improvement compared with the baseline model in three of the experiments.",
"The BLLM model also has good results, even though it performs not so well in language modeling.",
"The results show that our language models work well on ASR rescoring tasks even with much less memory consumption.",
"The experiments above show the good performances of our models.",
"We also want to investigate whether the binarized embeddings lose any information.",
"So, the embeddings are evaluated on two word similarity tasks.",
"Experiments are conducted on the WS-353 and MEN tasks.",
"We have trained the baseline LSTM model, the BELM model and BLLM model of a medium size on the Text8 corpus.",
"We binarize the embeddings of the trained baseline LSTM model to investigate whether there is any loss of information by the simple binarization method (labeled LSTM-bin in the table be-low).",
"For each dimension, we calculate the mean and set the value to 1 if it is bigger than the mean, otherwise, we set it to -1.",
"The embedding size and the hidden layer size are set to 500.",
"We use stochastic gradient descent (SGD) to optimize our models.",
"We use cosine distance to evaluate the similarity of the word pairs.",
"Spearman's rank correlation coefficient is calculated to evaluate the correlation between the two scores given by our models and domain experts.",
"Table 7 shows our models perform well in language modeling on the Text8 corpus.",
"Table 8 summarizes the performance of the word embeddings in the similarity tasks.",
"The embeddings generated by the simple binarization method perform obviously worse than the other embeddings, which indicates that much information is lost.",
"The BELM model outperforms the baseline model on the MEN task, although it doesnt perform as well as the baseline model on the WS-353 task.",
"However, the MEN dataset contains many more word pairs, which makes the results on this dataset more convincing.",
"The BLLM model significantly outperforms the baseline model on the two tasks.",
"The results indicate that the binarized embeddings of the BLLM do not lose any semantic information although the parameters are represented only by -1 and 1.",
"We suspect that binarization plays a role in regularization and produces more robust vectors.",
"We also give an example visualization of some word vectors.",
"The dimension of the embeddings of the BLLM is reduced by TSNE (Maaten and Hinton, 2008).",
"The words which are the closest to father (according to the cosine distance of word vectors) are shown in Figure 1.",
"In this figure, mother and parents are the closest words to father , which is quite understandable.",
"The words husband , wife , grandfather and grandmother also gather together and most words in the figure are related to father , indicat-2119 15 10 5 0 5 10 15 10 5 0 5 fathermother son eldest parents uncle grandfather wife husband daughter maternal creator paternal brother grandson heir nephewgrandmother king birthplace Figure 1: Visualization of the Binarized Embeddings ing the embeddings indeed carry semantic information.",
"In this paper, a novel language model, the binarized embedding language model (BELM) is proposed to solve the problem that NN based language models occupy tremendous space.",
"For traditional RNN based language models, the memory consumption mainly comes from the embedding layers (both input and output layers).",
"However, when the hidden layer size grows, the memory consumption of the RNN module also becomes larger.",
"So, the total memory usage relates to both the vocabulary size and hidden layer size.",
"In the BELM model, words are represented in the form of binarized vectors, which only contain parameters of -1 or 1.",
"For further compression, we binarize the long short-term memory language model combined with the binarized embeddings.",
"Thus, the total memory usage can be significantly reduced.",
"Experiments are conducted on language modeling and ASR rescoring tasks on various corpora.",
"The results show that the BELM model performs well without any loss of performances at compression ratios of 2.6 to 11.3, depending on the hidden and vocabulary size.",
"The BLLM model compresses the model parameters almost thirty-two times with a slight loss of performance.",
"We also evaluate the embeddings on word similarity tasks.",
"The results show the binarized embeddings even perform much better than the baseline embeddings.",
"In the future, we will study how to improve the performance of the BLLM model.",
"And, we will research methods to accelerate the training and reduce the memory consumption during training.",
"The corresponding author is Kai Yu.",
"This work has been supported by the National Key Research and Development Program of China under Grant No.2017YFB1002102, and the China NSFC projects (No. 61573241).",
"Experiments have been carried out on the PI supercomputer at Shanghai Jiao Tong University."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"other",
"other"
] |
[
"To effectively apply robots in working environments and assist humans, it is essential to develop and evaluate how visual grounding (VG) can affect machine performance on occluded objects.",
"However, current VG works are limited in working environments, such as offices and warehouses, where objects are usually occluded due to space utilization issues.",
"In our work, we propose a novel OCID-Ref dataset featuring a referring expression segmentation task with referring expressions of occluded objects.",
"OCID-Ref consists of 305,694 referring expressions from 2,300 scenes with providing RGB image and point cloud inputs.",
"To resolve challenging occlusion issues, we argue that it's crucial to take advantage of both 2D and 3D signals to resolve challenging occlusion issues.",
"Our experimental results demonstrate the effectiveness of aggregating 2D and 3D signals but referring to occluded objects still remains challenging for the modern visual grounding systems.",
"OCID-Ref is publicly available at https://github.com/ lluma/OCID-Ref 1 Introduction Visual grounding (VG), which aims to locate the object according to a structured language query, is a crucial task in natural language processing (NLP), computer vision (CV), and robotics.",
"Recent VG studies most focus on web-crawled images such as (Kazemzadeh et al., 2014; Krishna et al., 2017; Mao et al., 2016; Yu et al., 2016).",
"However, VG for human-robot interaction (HRI) is less explored.",
"Most of the images in existing VG datasets are people and daily necessities, e.g., RefCOCO contains mainly persons, cars, and cats, which are separated and therefore easier to detect.",
"Nevertheless, working spaces such as offices or warehouses, where robots are usually applied to assist works, are usually crowded, and objects are overlapped with each Equal contribution.",
"other to utilize space better.",
"Therefore, objects in working environments are often occluded and hard to detect.",
"Previous work (Ralph and Moussa, 2005) suggested that a system that uses language for human-computer interaction can help non-professionals instruct robots to complete technical work and collaborate.",
"Recent research pointed out that VG plays an important role in HRI.",
"(Shridhar and Hsu, 2018) utilized VG to resolve ambiguity in grasping tasks.",
"(Matuszek) studied how the robot learns about objects and tasks in an environment via nature language queries.",
"Therefore, explicit language instructions and good referring (grounding) expressions are pivotal in human-robot interaction and improve communication between non-expert humans and robots.",
"Some efforts have been made to collect VG datasets.",
"RefCOCO (Yu et al., 2016) and Cops-Ref (Chen et al., 2020b) utilize web-crawled images and manually label language expressions.",
"A limitation is that images alone do not provide precise position cues, which are essential for various downstream robotic tasks such as grasping.",
"A recent work, Sun-Spot (Mauceri et al., 2019), utilizes a depth channel for object detection and referring expression segmentation tasks.",
"Another existing dataset, ScanRefer (Chen et al., 2020a), uses more accurate multi-view point clouds for 3D signals.",
"However, both Sun-Spot and ScanRefer do not address occlusion issues, which is ordinary in working spaces and more challenging due to more compositions of shapes of each object.",
"As shown in figure 1, when an object (the red plastic bag) is blocked in an occluded environment, the shape of the object could be deformed and increase VG difficulty.",
"Observing this, we propose a novel OCID-Ref dataset with two key features: (1) For each scene, we utilize both RGB image and point cloud to provide multi-modal signals for learning system development.",
"(2) OCID-Ref scenes have higher clutter level compared to existing datasets, as shown in figure 2.",
"Hence, the model capability for resolving challenging occlusion issues could be evaluated.",
"To the best of our knowledge, OCID-Ref is the only existing dataset supporting the above features, and therefore allows VG task in grasping scenario.",
"Experimental results demonstrate that occluded scenes are more challenging to modern VG baselines.",
"We observe 27% to 34% performance drops on referring expression segmentation tasks.",
"Also, utilizing 3D information continually improves performance across all clutter levels.",
"Furthermore, fusing 2D and 3D features reach the best performance on all clutter levels.",
"We suggest that OCID-Ref dataset could pave a new path for VG research in HRI and benefit the research community and application developments.",
"To open up a new way for VG research in HRI, we collect a novel OCID-Ref dataset by the following steps: (1) We leverage a robotic object cluttered indoor dataset, OCID (Suchi et al., 2019), which consists of complex clutter-level scenes with rich 3D point cloud data and the point-wise instance",
"labels for each occluded objects.",
"(2) We manually annotate fine-grained attributes and relations such as color, shape, size relation or spatial relation.",
"(3) We generate referring expressions based on annotated attributes and relations with a similar scene-graph generation system from (Yang et al., 2020) and (Chen et al., 2020c).",
"In this section, we will describe more details on our data collection and the scene-graph generation method we adopt to generate the referring expressions.",
"A proper dataset to evaluate and develop VG models in a working environment requires two properties: (1) cluttered scenes and (2) 3D signals.",
"To point out the important of these two properties, we conduct a pilot experiment of grasp detection 1 .",
"We observe that using 3D cues significantly boosts performance, the geometric features extracted from point cloud data benefit the robots on visual perception (e.g., object grasping or object tracking).",
"Also, we see a severe performance drop in occluded scenes.",
"Therefore, to provide scenes with occluded objects to develop and evaluate learning systems, we leverage an existing robotic 3D dataset, OCID (Suchi et al., 2019), which has higher clutter level scenes and sequential object-level scenes that help robots better understand the instance difference between two subsequent scenes.",
"tions (e.g., color relation, spatial relation, etc.) for all the objects in dataset.",
"We design an online web-based annotation tool to collect these extra labels, and dispatch the labeling tasks over the annotation specialists from a professional data service company.",
"Additionally, we ensure each task is randomly assigned to three trained workers and veri-fied by one checker.",
"The overall tasks take around two months to finish.",
"Gathering the labels we annotated and following the method from the scene-graph based referring expression generation system.",
"In detail, first, we build up the scene graph for each scene in OCID-Ref, and the nodes and edges in the graph represent the attributes and relations, respectively.",
"Second, we design several textual templates (Table",
"2) to have various sentence structures.",
"Third, we leverage the conventional incremental algorithm(Dale and Reiter, 1995) and functional programs to generate reasonable REs.",
"That is, we add attributes and relations into our conditional set until it conforms with the specific unambiguous condition.",
"Finally, we generate the total of 305,694 referring expressions with an average length of 8.56, and for details, there are an average of 14.71 expressions per object instance and 113.07 expressions per scene.",
"OCID-Ref uses the same scenes as OCID, containing 2D object segmentation and 3D object bounding boxes for 2300 fully built-up indoor cluttered scenes.",
"Each object is associated with more than 20 relationships with other objects in the same scene, including 3D spatial relations, 2D spatial relations, comparative color relations, and comparative size relations.",
"Table 1 shows the basic statistic comparison of the previous 2D, RGB-D,3D referring datasets and the OCID-Ref.",
"To evaluate the difficulty of REC, we follow Cops-Ref to calculate the number of candidate objects of the same categories as the target object(Distractor score) for all scenes.",
"Though there are only 3.36 same candidates in an average of OCID-Ref, lower than 4.64 of ScanRefer, we attribute this difference to the dataset characteristic that our scenes are components of one by one sequence with few objects in the first few scenes.",
"To evaluate the referring performance from no clutter to dense clutter scenes, we follow OCID to separate the scenes into three cluttered levels, free, touching, and stacked, from clearly separated to physically touching to being on top of each other.",
"We also split the val split of ScanRefer into three clutter level.",
"We conduct referring expression segmentation experiments on our collected OCID-Ref dataset and ScanRefer (Chen et al., 2020a) dataset.",
"We compare different modalities, clutter levels, and regular expression lengths and provide a comprehensive analysis to pave a new path for future research.",
"We also conduct the grasp experiment using different modality data as input, and the details are described in Appendix A. All Free(F) Touching(T) Stacked(S) Decrease rate(F->S) 2D 0.512 / 0.501 0.673 / 0.660 0.450 / 0.496 0.450 / 0.433 33.23% / 34.29% 3D 0.588 / 0.580 0.745 / 0.751 0.589 / 0.583 0.507 / 0.489 31.97% / 34.92% Fusion 0.634 / 0.637 0.763 / 0.769 0.64 / 0.651 0.551 / 0.540 27.71% / 29.75% Table 3: Referring expression segmentation performance ([email protected]) on OCID-Ref.",
"R ( , ) = 4 if < 15 5 if > -15 16 + r ( ) if -65 < < -15 8 + r ( ) if 15 < < 65 r ( ) otherwise (1) r ( ) = 6 + (cid:18) + 22 .",
"5 45 (cid:19) (2) 3.1 Setup Baseline We run our experiments with a modern graph-based DGA (Yang et al., 2019) model.",
"We compare 2D (RGB), 3D (point cloud) and 2D+3D input signals.",
"Feature Extraction For 2D inputs, we use ResNet-101 based Faster-RCNN as our 2D feature extractor and pre-train the extractor on OCID to extract the ROI features from the pool5 layer as the 2D visual features, and use the original DGA's settings for node feature and edge feature on the graph.",
"For 3D inputs, we utilize point-wise features extracted from PointNet (Charles et al., 2017) as the 3D version of the visual feature for each node in the graph.",
"Also, we change the box information from 2D to 3D with box center, box bounds, and box volume.",
"The relations for the edges are modi-fied with 3D relationships between objects instead of 2D relationships.",
"Figure 3 and equation 1, 2 shows how we compute the angles related to 3D relation on spherical coordinates.",
"2D and 3D Fusion To utilize advantages from both 2D and 3D signals, we implement a handy fusion module.",
"We take max-pooling on the point features to aggregate them into a global scene feature and concatenate it to the 2D visual feature as a new visual feature for each object instance.",
"Afterward, we fuse the box information into (2D box center, 2D box bounds, 2D box area, 3D box center, 3D box bounds, 3D box volume) to preserve the location information from two distinct coordinates.",
"The edge representation is defined as the same as the 3D version.",
"Evaluation Metric We use [email protected] as our metric to measure the thresholded accuracy where the positive predictions have a higher intersection over union (IoU) with the ground truths than the thresholds.",
"Clutter Levels Table 3 compares 2D (RGB), 3D (point cloud) models and Fusion model performance on OCID-Ref dataset.",
"Obviously, all models struggle against the highly occluded stacked subset (Fourth column).",
"The 27 to 34 % of performances drop from free to stacked subset indicates that occlusion, which occurs in working environments, is a challenge for modern VG models.",
"Table 4 shows model performance on ScanRefer dataset, and the result is consistent with OCID-Ref dataset, where stacked performance is dropped from 0.465 to 0.320 for the unique scenario and from 0.198 to 0.131 for the multiple scenario.",
"The results suggest that tackling occlusion is crucial for future research and applications in working environments.",
"conFigure 4: Qualitative results from 2D, 3D, and the fusion methods.",
"Predicted masks with an IOU score higher than 0.25 are marked in green, otherwise in red.",
"Examples are tested in the same cluttered scene with referring expressions in different difficulty levels.",
"Fusion method produces better results than 2D and 3D method.",
"stantly outperforms the 2D model (First row) in all clutter levels and indicates that accurate spatial information is crucial.",
"Furthermore, aggregating 2D and 3D signals (Third row) reaches the best performance and suffers less performance drop from free to stacked.",
"Therefore, we suggest future work to explore an effective way to utilize and fuse 2D and 3D signals to tackle our challenging dataset.",
"Referring Expression Length Table 5 compares the performance of short (not more than 12 wordpieces) and long (equal or more than 12 word-pieces).",
"We observe that all models perform worse when the expressions are long.",
"Figure 4 shows results produced by 2D, 3D baseline, and the fusion model.",
"First, in figure 4-d we discover that all three methods fail when the RE is long and complicated.",
"The fusion method successfully localizes the towel in the scene with 2D and 3D spatial descriptions(refer to figure 4-c), while the 3D method has difficulty identifying what is \"lower-right.\"",
"Unsurprisingly, we observe that the 2D method fails on the query with the 3D relation \"rear\"(refer to figure 4-b).",
"Figure 4-d also shows the failure cases of the fusion method, indicating that our model cannot handle all spatial relations to distinguish between ambiguous objects.",
"2D and 3D get better performance when the query RE consisted mainly of the common sentences and relationships regarding the whole scene.",
"The failure case suggests that our fusion and localization module can still be improved to utilize the 2D information better and decrease the 3D features' misuse.",
"In this work, we propose a novel OCID-Ref dataset for VG with both 2D (RGB) and 3D (point cloud) and occluded objects.",
"OCID-Ref consists of 305,694 referring expressions from 2,300 scenes with providing RGB image and point cloud inputs.",
"Experimental results demonstrate the difficulty of occlusion and suggest the advantages of leveraging both 2D and 3D signals.",
"We are excited to pave a new path for VG researches and applications.",
"This work was supported in part by the Ministry of Science and Technology, Taiwan, under Grant MOST 110-2634-F-002-026 and Qualcomm Technologies, Inc.",
"We benefit from NVIDIA DGX-1 AI Supercomputer and are grateful to the National Center for High-performance Computing.",
"We also thank Yu-Kai Huang, Hsin-Ying Lee for his insightful suggestion on the figures."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"objective",
"other",
"other",
"other"
] |
[
"Inflecting When There's No Majority: Limitations of Encoder-Decoder Neural Networks as Cognitive Models for German Plurals Kate McCurdy Sharon Goldwater Adam Lopez Institute for Language, Cognition and Computation School of Informatics University of Edinburgh [email protected], { sgwater, alopez } @inf.ed.ac.uk Abstract Can artificial neural networks learn to represent inflectional morphology and generalize to new words as human speakers do?",
"Kirov and Cotterell (2018) argue that the answer is yes: modern Encoder-Decoder (ED) architectures learn human-like behavior when inflecting English verbs, such as extending the regular past tense form /-(e)d/ to novel words.",
"However, their work does not address the criticism raised by Marcus et al. (1995): that neural models may learn to extend not the regular , but the most frequent class and thus fail on tasks like German number inflection, where infrequent suffixes like /-s/ can still be productively generalized.",
"To investigate this question, we first collect a new dataset from German speakers (production and ratings of plural forms for novel nouns) that is designed to avoid sources of information unavailable to the ED model.",
"The speaker data show high variability, and two suffixes evince regular' behavior, appearing more often with phonologically atypical inputs.",
"Encoder-decoder models do generalize the most frequently produced plural class, but do not show human-like variability or regular' extension of these other plural markers.",
"We conclude that modern neural models may still struggle with minority-class generalization.",
"Morphology has historically been the site of vigorous debate on the capacity of neural models to capture human speaker behavior, and hence ground claims about speaker cognition.",
"In 1986, Rumelhart and McClelland described a neural network model which learned to map English present tense verbs to their past tense forms.",
"Importantly, the network handled both regular verbs, whose past tense is formed systematically by adding the suffix /-(e)d/ (e.g. jumped ), and irregular verbs where the present and past tenses bear no systematic relationship (e.g. ran ).",
"The authors suggested their model provided an alternative [...] to the implicit knowledge of rules (1986, 218), a claim which sparked considerable controversy.",
"Pinker and Prince (1988) highlighted many empirical inadequacies of the Rumelhart and McClelland model, and argued that these failures stemmed from cen-tral features of connectionist ideology and would persist in any neural network model lacking a symbolic processing component.",
"Recently, however, Kirov and Cotterell (2018, henceforth K&C) revisited the English past tense debate and showed that modern recurrent neural networks with encoder-decoder (ED) architectures overcome many of the empirical limitations of earlier neural models.",
"Their ED model successfully learns to generalize the regular past tense suffix /-(e)d/, achieving near-ceiling accuracy on held-out test data.",
"Moreover, its errors result from over-application of the regular past tense (e.g. throw throwed )a type of error observed in human language learners as wellas opposed to the unattested forms produced by Rumelhart and McClel-land's model.",
"K&C conclude that modern neural networks can learn human-like behavior for English past tense without recourse to explicit symbolic structure, and invite researchers to move beyond the rules' debate, asking instead whether the learner correctly generalizes to a range of novel inputs, and whether its errors (and other behavior) are human-like.",
"This challenge was first taken up by Corkery et al. (2019), who showed that, on novel English-like words designed to elicit some irregular generalizations from humans, the ED model's predictions do not closely match the human data.",
"While these results suggest possible problems with the ED model, English may not be the best test case to fully understand these, since the sole regular inflectional class is also by far the most frequent.",
"In contrast, many languages have multiple inflectional classes which can act regular' under various conditions (Seidenberg and Plaut, 2014; Clahsen, 2016).",
"In this paper, we examine German number inflection, which has been identified as a crucial test case for connectionist modeling (Kopcke, 1988; Bybee, 1995; Marcus et al., 1995; Clahsen, 1999b).",
"The German plural system features eight plural markers (c.f. Table 1), none of which hold a numerical majority in type or token frequency.",
"Different linguistic environments favor different plural markers (e.g. Kopcke, 1988; Wiese, 1996; Yang, 2016), and even the famously rare suffix /-s/ is nonetheless productive , in the sense that speakers readily extend it to new words.",
"1 In their analysis of the German plural system, Marcus et al. (1995, henceforth M95) argue that neural networks generalize the most frequent patterns to unfamiliar inputs, and thus struggle to represent productive but rare classes such as /-s/.",
"We investigate that claim using the novel German-like nouns M95 developed.",
"Because the design and results of previous human studies have been somewhat inconsistent, and because we want to compare to fine-grained results from individuals (not just published averages), we first collect a new dataset of plural productions and ratings from German speakers.",
"Our speaker data show high variability: no class holds a majority overall, and two less frequent suffixes show a relative preference for phonologically atypical inputs (Non-Rhymes).",
"We then compare our human data with the predictions of the encoder-decoder (ED) model proposed by K&C.",
"While our human data paint a more complex picture of the German plural system than M95 claimed, nevertheless M95's central idea is borne out: when given Non-Rhymes, the ED model prefers the most frequent plural class, but speakers behave differently.",
"This finding reveals that while modern neural models are far more powerful than earlier ones, they still have limitations as models of cognition in contexts like German number inflection, where no class holds a majority.",
"The model may correctly identify the most frequent class, but fails to learn the conditions under which minority classes are productive for speakers.",
"To evaluate whether neural models generalize correctly, we need to compare their behavior with that of humans on the same task.",
"Unfortunately, no existing datasets were suitable, so our first study asks how German speakers inflect novel nouns.",
"1 For example, the Institut f ur Deutsche Sprache ( https: //www.owid.de/service/stichwortlisten/neo_neuste ) officially added multiple /-s/-inflecting nouns to the German language in 2019, including Verhutungsapp, Morphsuit and Onesie .",
"Wug testing and productivity If an English speaker needs to produce the plural form of an unknown word such as wug , that speaker must decide whether wug belongs to the same inflectional class as dog and cat (yielding plural wugs ) or the same class as sheep and deer (yielding wug ).",
"Speakers' overwhelming preference for wugs in this scenario indicates that the /-s/ plural class is productive in English: a productive morphological process can be generalized to new inputs.",
"This task of inflecting novel ( nonce ) words is known as the wug test (Berko, 1958), and is the standard method to determine productivity in psycholinguistic research.",
"While the concept of morphological regularity' is not well-defined (Herce, 2019), productivity is nonetheless an essential component: an inflectional class that is not productive cannot be regular.",
"Productivity in German plurals The German plural system comprises five suffixes: /-e/, /-er/, / / 2 , /-(e)n/, and /-s/.",
"The first three can optionally combine with an umlaut over the root vowel.",
"3 Umlaut varies semi-independently of plural class (Wiese, 1996), and is not fully predictable; for simplicity, this study will focus only on the five main suffix classes for analysis.",
"Examples in all forms are shown in Table",
"1. Each plural suffix is also shown with its type frequency (counting each word type only once, how many types in the lexicon take this plural?) and token frequency (how often do words with this plural suffix appear in the corpus overall?).",
"German nouns can have one of 2 / / refers to the so-called zero plural, and is indicated as zero on all figures in this paper.",
"3 Umlaut is a process which fronts a back vowel, so only roots with back vowels can take an umlaut (e.g. Dach Dacher , Fuss Fusse ).",
"three grammatical genders masculine, feminine, or neuter and this lexical feature is highly associated with plural class: most feminine nouns take /-(e)n/, while /-e/ and / / nouns are often masculine or neuter.",
"The phonological shape of a noun also influences its plural class; for example, most nouns ending with schwa take /-(e)n/ (Elsen, 2002).",
"Although there are statistical tendencies, there are no absolute rules, and no suffix holds a majority overall.",
"Researchers continue to debate which plural markers are productive, and in which circumstances.",
"The dispute has historically centered on the infrequent class /-s/, which, despite its rarity, occurs across a wide range of linguistic environments.",
"Examples include proper names (e.g. der Bader die Bader the barber the barbers' but meine Freunden, die Baders my friends, the Barbers'), acronyms, and truncated and quoted nouns (e.g. der Asi die Asis , short for Asozialer antisocial person').",
"In addition, /-s/ tends to be the plural class for recent borrowings from other languages, and children reportedly extend /-s/ to novel nouns (Clahsen et al., 1992).",
"For these reasons, M95 argue that /-s/ is the default plural: it applies in a range of heterogeneous elsewhere conditions which do not define a cohesive similarity space, serving as the emergency plural form when other markers do not seem to fit.",
"They further assert that, as the default form, /-s/ is also the only regular plural form, in the sense that it applies not to particular sets of stored items or to their frequent patterns, but to any item whatsoever (1995, 192).",
"Under this minority-default analysis, other German plural classes may be productive, but in a limited sense they can only extend to novel inputs which are similar in some respect to existing class members, while infrequent /-s/ can apply to any noun regardless of its form (Clahsen, 1999b).",
"M95 claim that this behavior should be particularly difficult for connectionist, i.e. neural, models to learn: /-s/ cannot be generalized based on its frequency, as it is rare, and it cannot be generalized based on similar inputs, as it applies to heterogeneous, unfamiliar inputs.",
"Other researchers have challenged the minority-default account with evidence of regular, productive behavior from the two more common suffixes /-e/ and /-(e)n/.",
"/-(e)n/ is argued to be the default class for feminine nouns and nouns ending with the weak vowel schwa (Wiese, 1996; Dressler, 1999), and children have also been found to overgeneralize /-(e)n/ (Kopcke, 1998).",
"Indefrey (1999, 1025) argues that /-(e)n/ and /-e/ are regular and productive allomorphs with gender-dependent application domains, noting that /-e/ and /-(e)n/ are extended in elsewhere conditions where /-s/ is blocked for phonological reasons, such as letters ( die Xe ) and acronyms ( die MAZen, Magnetaufzeichnungen , magnetic recordings').",
"Bybee (1995) argues that, while /-s/ does act as the default plural, it is still less productive than other plural classes due to its low type frequency.",
"Wug testing for German plurals To assess whether German speakers treat /-s/ as a productive default for novel words, M95 developed a list of 24 monosyllabic nonce nouns for wug testing.",
"The stimuli represented two phonological classes: familiar' or Rhyme words, which rhymed with one or more existing words in German (e.g. Bral , rhyming with Fall ; Spert , rhyming with Wert ), and unfamiliar' or Non-Rhyme words (e.g. Plaupf , Fnohk ), which were constructed using rare but phonotactically valid phone sequences.",
"They hypothesized that Non-Rhymes, as phonologically atypical words, should be more likely to take the /-s/ plural.",
"M95 conducted a rating study in which stimuli were presented across three different sentence contexts.",
"If the word Bral was presented in the root condition, subjects would rate a set of sentences where the nonce word referred to some object: Die grunen BRAL sind billiger (The green brals are cheaper), Die grunen BRALE . . . , Die grunen BR ALE . . . ,",
"etc.; whereas in the name condition, the nonce word would refer to people: Die BRAL sind ein bichen komisch (The Brals [family name] are a bit weird), Die BRALEN . . . , Die BRALS . . . , etc.",
"With data from 48 participants, /-s/ was the top-rated plural form for 2 out of 12 rhyme words, and 7 out of 12 non-rhyme words; while /-e/ was rated highest overall, /-s/ was the only marker favored more for non-rhymes.",
"Clahsen (1999a) cites this asymmetry as crucial evidence for /-s/ as the only default plural form, at least with respect to these stimuli.",
"These results, however, have been called into question.",
"Zaretsky and Lange (2016, henceforth Z&L) conducted a large-scale follow-up study with 585 participants, using the same nonce words but a different task: instead of rating the plural forms within a sentence context, subjects were presented with the noun in isolation (e.g. Der Bral ) and asked to produce its plural form.",
"4 They found a much lower preference for /-s/ than expected based on M95's results, and a significant effect for feminine ( die ) versus non-feminine ( der, das ) grammatical gender, where M95 reported no effect of gender.",
"The authors conclude from their data that /-(e)n/, /-e/, and /-s/ are all productive in German, and also speculate that task differences (production versus rating) could account for the discrepancy between the two studies.",
"Motivation Although M95 published average rating data for each word in the appendix to their paper, we felt it necessary to collect our own data.",
"Z&L's findings suggest that the M95 /-s/ effect might reflect task artefacts: speaker behavior could differ for production and rating tasks, and with and without sentential context for the nonce words.",
"We seek to evaluate K&C's performance claims for ED models, which were based on speaker production probabilities rather than ratings.",
"To do so, we need speaker data which closely parallels the model task: given a noun in isolation, produce its plural inflected form.",
"We collect production data, and also ratings, to see whether speaker behavior is consistent across tasks.",
"Another issue raised by Z&L's findings is the role of grammatical gender.",
"Although Z&L reported significant gender effects, M95 did not: their reported rating averages combine all gender presentations (e.g. Der Bral, Die Bral, Das Bral ).",
"Previous experiments have found neural models of German plurals to be sensitive to grammatical gender (Goebel and Indefrey, 2000); therefore, the stimuli presented to speakers should be consistent with model inputs to enable valid comparison.",
"For simplicity, we opted to select one grammatical gender for presentation: neuter, or Das .",
"Based on similar experimentation by Kopcke (1988), speakers do not have a strong majority class preference for neuter monosyllablic nouns, hence this environment may be the most challenging for a neural model to learn.",
"For this reason, we present all stimuli as neuter to study participants.",
"Method The current study uses the same Rhyme and Non-Rhyme stimuli from M95's original experiment.",
"We collected both production and rating data on plural inflection for the 24 M95 nonce nouns through an online survey with 150 native 4 Z&L's data is unfortunately not freely available.",
"German-speaking participants.",
"Survey respondents were first prompted to produce a plural-inflected form for each noun (i.e. filling in the blank: Das Bral, Die ).",
"5 After producing plural forms for all nouns, they were prompted to rate the acceptability of each potential plural form for each noun on a 1-5 Likert scale, where 5 means most acceptable.",
"For example, a participant would see Das Bral , and then give an acceptability rating for each of the following plural forms: Bral, Bral, Brale, Brale, Bralen, Braler, Braler, Brals .",
"For details of the survey design, please see Appendix A. 2.3 Results Our study results are shown in Table",
"2. The production data collected in our survey appears broadly consistent with the distribution observed by Z&L and Kopcke: /-e/ is favored in production, followed by /-(e)n/.",
"The rhyme vs non-rhyme comparison is also consistent with Z&L's results.",
"/-s/ is produced more for Non-Rhymes than for Rhymes, as emphasized by Clahsen (1999a); however, /-(e)n/ also shows the same directional preference, and at a much higher frequency.",
"Our rating results diverge from production results in some ways for example, /-(e)n/ is fa-5 The article das indicates singular number, neuter gender; as all nouns were presented in neuter gender (see preceding discussion), all nouns were preceded by das .",
"Die here indicates plural number, so the following noun will be pluralized.",
"vored instead of /-e/ and are consistent in others: both /-s/ and /-(e)n/ are rated higher for Non-Rhymes compared to Rhymes.",
"The low ratings for /-s/ conflict with M95's findings, and suggest that presentation in sentence context is an important methodological difference from presentation in isolation.",
"For example, family surnames obligatorily take /-s/ in German, so it's possible that exposure to surnames in the name context primed subjects in the M95 rating study to find /-s/ more acceptable generally, across conditions.",
"6 In any case, our results demonstrate task effects: although /-e/ is the most produced plural form, /-(e)n/ obtains the highest ratings from the same speakers.",
"7 We compare these results with the modeling study in Section 4, focusing on production data.",
"Our second study trains an encoder-decoder (ED) model on the task of German plural inflection, following the method of Kirov and Cotterell (K&C).",
"We then compare its predictions on the M95 stimuli to the behavior of participants in Study",
"1. 3.1 Background Wug testing and computational models Wug tests have also been used to evaluate how computational models generalize, although the appropriate method of comparison to speakers is still under debate.",
"Albright and Hayes (2003) collected spoken productions and acceptability ratings of past tense inflections for English nonce verbs, comparing the prevalence of regular inflection (e.g. rife rifed )) to one or two pre-selected irregular forms for each nonce verb (e.g. rife rofe, riff ).",
"They then evaluated two different computational models on their wug data, focusing on correlation between model scores and participant ratings to select a rule-based learner as the best-performing model.",
"K&C also tested their ED model on Albright and Hayes' nonce words and evaluated performance using correlation with model scores; however, instead of the rating data, they focused on production probabilities : the percentage of speakers who produced each pre-selected irregular form.",
"Corkery et al. (2019) 6 Hahn and Nakisa (2000) reanalyze the M95 ratings and find that /-s/ is rated much higher for family surnames than other kinds of names within the name condition (e.g. first names), reflecting the strong link between this category and the /-s/ plural class.",
"7 Further analysis indicates that individual survey participants rated a plural form they did not produce as better than the form they did produce in fully one-third of cases.",
"call this methodology into question, noting that different random initializations of the ED model lead to highly variable rankings of the output forms, and thus to unstable correlation metrics.",
"Instead, they correlate the speaker production probabilities to the aggregated predictions of models with different random seeds, treating each model instance as simulating a unique speaker.",
"Our study follows the latter approach: we aggregate production probabilities over several model initializations and compare these results to the speaker production data.",
"Modeling German plurals The same M95 stimuli used in our Study 1 have also been applied to wug test computational models.",
"To date, no computational studies have reproduced the high /-s/ preference reported for participants in the original rating study.",
"Hahn and Nakisa (2000) framed the problem as a classification task, mapping noun inputs to their plural classes.",
"They trained a single-route exemplar-based categorization model (Nosofsky, 1988) alongside a dual-route version of the same model, which had an additional symbolic rule component to handle the /-s/ class.",
"Hahn and Nakisa also collected their own speaker productions of the M95 wug stimuli, and found that the single-route model showed a higher overall correlation to speaker production probabilities, relative to the dual-route model.",
"They did not explicitly compare model and speaker behavior on Rhymes versus Non-Rhymes, so we don't know whether the model learned speaker-like generalizations for phonologically atypical stimuli, or whether the model could achieve similar performance on the more challenging task of sequence prediction.",
"Goebel and Indefrey (2000) used a simple recurrent network (Elman, 1990) for sequence prediction on the M95 wug stimuli.",
"The model did produce /-s/ more often for Non-Rhymes than Rhymes, but as the overall production of /-s/ was relatively low, the authors did not consider this evidence of default behavior.",
"Instead, they find that the model learns to condition regular plural inflection on grammatical gender.",
"For both Rhymes and Non-Rhymes, the model predicted /-(e)n/ when the input was preceded by the feminine article die , and /-e/ when the input began with masculine der ; neuter das was not tested.",
"Goebel and Indefrey reanalyze the original M95 rating data and argue that its results are hypothetically 8 consistent with the model's behavior; they conclude that /-s/, /-(e)n/, and /-e/ are all reg-8 Hypothetically because M95 did not report results split by grammatical gender.",
"ular plural classes in German, with the latter two conditioned on grammatical gender.",
"These findings show the importance of controlling for grammatical gender in comparing speaker and model results.",
"Overview We model German number inflection using the sequence-to-sequence Encoder-Decoder architecture (Sutskever et al., 2014).",
"This comprises a recurrent neural network (RNN) which reads in an input sequence and encodes it into a fixed-length vector representation, and another RNN which incrementally decodes that representation into an output sequence.",
"Following Kann and Schutze (2016), our decoder uses neural attention (Bahdanau et al., 2015).",
"For our task of morphological transduction, the ED model takes character-level representations of German nouns in their singular form as inputs (e.g. (cid:104) m (cid:105) H U N D (cid:104) eos (cid:105) ), and learns to produce the noun's inflected plural form (e.g. H U N D E (cid:104) eos (cid:105) ).",
"Each character sequence starts with (cid:104) m (cid:105) , (cid:104) f (cid:105) , or (cid:104) n (cid:105) , to indicate grammatical gender.",
"Unlike English, the phonological-orthographic mapping is straightforward in German, so we can use a written corpus for model training.",
"We keep a held-out dev set for hyperparameter selection, and a held-out test set to asses the model's accuracy in generalizing to unseen German nouns.",
"In addition, the 24 M95 nouns were used for comparison with speaker behavior.",
"They were presented to the model as neuter gender, consistent with Study",
"1. Corpus We trained all models on the UniMorph German data set 9 (Kirov et al., 2016; Sylak-Glassman et al., 2015), which provides the singular and plural forms of 11,243 nouns.",
"Only nominative case forms were used.",
"Grammatical gender was 9 https://github.com/unimorph/deu Train Dev Test 99.9% (8694) 92.1% (1229) 88.8% (1320) Table 4: Model accuracy (N) by UniMorph corpus split, averaged over 25 random initializations.",
"obtained by merging the Unimorph dataset with a more recent Wiktionary scrape containing this feature.",
"10 Table 3 gives the distribution of plural suffixes for the UniMorph corpus overall, and for three relevant subsets: nouns with neuter gender, monosyllabic nouns (like the M95 stimuli), and nouns which were phonologically similar to the M95 stimuli, i.e. shared a rhyme.",
"The number of items in the train, dev, and test splits is shown (in parentheses) in Table 4.",
"Implementation Following K&C and Corkery et al. (2019), our model is implemented using Open-NMT (Klein et al., 2018) with their reported hy-perparameters (after Kann and Sch utze, 2016): 2 LSTM encoder layers and 2 LSTM decoder layers, 300-dimensional character embeddings in the encoder, and 100-dimensional hidden layers in both encoder and decoder; Adadelta optimization for training with a batch size of 20 and inter-layer dropout rate of 0.3; and a beam size of 12 for decoding during evaluation.",
"Since Corkery et al. (2019) found the ED model to be highly sensitive to initialization, we trained multiple simulations with the same architecture, varying only the random seed.",
"Reported results combine predictions from 25 separate random initializations.",
"The one hyperparameter we tuned was early stopping.",
"Best performance on the validation set was achieved at 10 epochs, which was sufficient to memorize the training data.",
"Results The model achieves 88.8% accuracy on the held-out test set (Table 4).",
"It performs best on /-(e)n/, the most frequent class (Table 5).",
"Unsurprisingly, the worst performance appears on the other' category, which comprises the long tail of idiosyncratic forms which must be memorized (e.g. Latinate plurals Abstraktum Abstrakta or other borrowings Zaddik Zaddikim ).",
"In keeping with the findings of Hahn and Nakisa (2000), /-s/ is the plural suffix with the worst generalization perfor-10 https://github.com/gambolputty/ german-nouns/ To ensure our results were not limited by the small size of the UniMorph dataset, we also trained the model on this larger dataset, including about 65,000 nouns.",
"As the outcome was consistent with our findings here, we report results from the smaller model.",
"mance; this cannot be attributed to low frequency alone (c.f. Table 3), as the model does much better on the similarly rare suffix /-er/ .",
"We use the M95 stimuli to compare model predictions to speaker data from Study",
"1. The model shows an overwhelming preference for /-e/ on these words (Table 5); roughly 80% of its productions are /-e/, relative to 45% of speaker productions (Figure 1).",
"In contrast, the model rarely predicts /-(e)n/, which speakers use 30% of the time.",
"The model's treatment of Rhymes and Non-Rhymes is even farther off the mark: where speakers use /-(e)n/ and /-s/ more for Non-Rhymes relative to Rhymes, the ED model uses them less , producing /-e/ for over 90% of Non-Rhymes.",
"Following K&C and Corkery et al. (2019), we calculate the Spearman rank correlation coefficient (Spearman's ) between model and speaker production probabilities within inflectional categories rather than across categories.",
"11 This means that, for each potential plural suffix, we compare speaker and model productions for that suffix on each individual M95 word.",
"Table 5 reports the correlation for each suffix.",
"None show a statistically significant difference from the null hypothesis of no correlation.",
"Figure 2 shows the distribution of plural classes in the top 5 most likely forms predicted by the model for each M95 word.",
"While all of the model's top-ranked predictions are well-formed outputs in the sense that they conform to one of the main German plural classes, the lower-ranked predictions are rapidly dominated by other forms which do not cohere to standard plural production.",
"An example from one model instance: the Rhyme input Spert had as its top five predictions Sperte, Spelte, Spente, Sperten, and Fspern ; the Non-Rhyme input Bneik had Bneiken, Bneiks, Bneikke, Bneikz, and Bneikme .",
"Corkery et al. (2019) observed instability in the ranking of irregular forms in ED models trained on the English past tense; however, English irregular forms are very diverse, which makes it difficult to draw broad conclusions about the plausibility of lower-ranked forms in the model's output.",
"In contrast, the five main plural suffixes for German cover 98% of the nouns in the UniMorph dataset, 11 For the English analyses in the prior works, this means calculating separate correlations for regular and irregular forms.",
"and 95% of speaker productions on M95 stimuli in Study",
"1. The predominance of ill-formed plurals in lower-ranked predictions 12 suggests ED model scores may not be cognitively plausible analogues to speaker behavior; if they were, we would expect forms with standard plural inflections to receive consistently high rankings.",
"The current study asks whether modern Encoder-Decoder neural models learn the full set of correct generalizations that is, human-like behavior with respect to German number inflection, which requires the learner to generalize non-majority inflectional classes.",
"The short answer is no: our model learns part of that set.",
"In particular, it correctly identifies /-e/ as the best' plural class for this context.",
"/-e/ is the most frequent class in the training data for similar inputs (neuter gender, monosyllabic, phonologically close to M95; c.f. Table 3), and it is also the plural suffix most frequently produced by speakers (Table 2).",
"Like all plural classes, /-e/ does not characterize a majority of German nouns overall (Table 1), so the model has technically learned to generalize a minority class in its appropriate context.",
"Nonetheless, it does not reproduce the behavior of survey participants in response to the same stimuli, which shows a more variable distribution over plural classes and different generalization patterns for Non-Rhymes relative to Rhymes.",
"12 Interestingly, while less frequent classes such as /-s/ and / / appear more often in the model's lower-ranked outputs, the class /-(e)n/ is almost never predicted despite being the second most frequent class in speaker data productions.",
"This outcome is not surprising when one considers that the model is trained to produce one correct form rather than a distribution over plausible forms; however, this is exactly the task faced by human language learners as well.",
"All the models of morphology discussed here assume that exposure to correct forms alone should suffice for learning speaker-like behavior.",
"Corkery et al. (2019, 3872, fn. 4) note that training on single target forms produces highly skewed ED model scores, with a great deal of probability mass on the top-ranked form and instability in lower rankings, but that training on a distribution would not be a cognitively plausible alternative.",
"However, it could be the case that German speakers do regularly encounter variable realizations of plural forms.",
"Kopcke observes that German plural inflection shows regional variation, for example northern speakers using /-s/ ( die Madels girls') where southern dialects prefer /-(e)n/ ( die Madeln ).",
"Incorporating dialect-informed variability into training might be one way to encourage neural models toward speaker-like generalization.",
"13 Parallel issues arise for model evaluation: how should we evaluate models of production when the target output is a distribution?",
"On simpli-fied versions of the task, such as classification (Hahn and Nakisa, 2000), the output distribution is constrained within a space of plausible forms, but sequence-to-sequence models deal with the open-ended domain of all possible strings.",
"For 13 Like previous studies on these stimuli, our Study 1 did not collect data on speakers' dialect background; we are addressing this issue in follow-up research.",
"We note that Study 1 began with an onboarding task prompting speakers to inflect existing nouns in Modern High German, which hopefully primed use of the standard variety for the following tasks.",
"encoder-decoders, the likelihood scores produced during beam-search decoding offer an intuitive option, and K&C use these scores to evaluate their model with respect to Albright and Hayes' wug data; however, Corkery et al. (2019) demonstrate that these model scores are not a suitable metric for that comparison.",
"Other recent research has highlighted the limitations of both beam search and model scores globally in neural sequence-to-sequence models (Stahlberg and Byrne, 2019).",
"Our results provide further evidence that lower-ranked ED predictions do not reflect cognitively plausible distributions: they contain many ill-formed outputs, and omit inflectional classes such as /-(e)n/, which is prevalent in speaker productions.",
"An alternative to model scores is to treat each randomly initialized instance of a model as an individual, and compare aggregate productions with speaker data (Goebel and Indefrey, 2000; Corkery et al., 2019).",
"For our experiments, this did not produce the distribution observed in the speaker data.",
"The discrepancy between speaker production and rating preferences poses another challenge, as it's not clear how the ED model might represent these different task modalities.",
"Beside variability, the other key discrepancy between speaker and ED behavior is the treatment of Non-Rhyme words.",
"If German has a default plural class, it should be realized more often on these phonologically atypical stimuli than the more familiar Rhyme words.",
"Speakers in Study 1 use /-s/ and /-(e)n/ more for Non-Rhymes than for Rhymes.",
"These results are consistent with earlier studies: M95 found that /-s/ was the only plural form to receive higher average ratings for Non-Rhymes compared to Rhymes, and Z&L found that speakers produced both /-(e)n/ and /-s/ more often for Non-Rhymes.",
"In contrast, the ED model appears to treat /-e/ as a default, producing /-e/ inflections for under 70% of Rhymes but over 90% of NonRhyme inputs.",
"This asymmetry suggests that the model has not induced the full set of correct generalizations for German plural inflection it has not recognized which plural classes are more productive for phonologically atypical nouns.",
"In fact, the model's preference for /-e/, the most frequent (if non-majority) suffix, is the behavior anticipated by M95: frequency in the input to a pattern associator causes a greater tendency to generalize (1995, 215).",
"It seems that the productivity of less frequent inflectional classes continues to challenge neural models and limit their cognitive application.",
"German number inflection has been claimed to have distributional properties which make it difficult for neural networks to model.",
"Our experimental speaker data does not necessarily support all of these claims; in particular, /-s/ does not appear to be the only plural suffix which speakers treat as a default' for phonologically unfamiliar words, as the more frequent marker /-(e)n/ shows similar trends.",
"Nonetheless, the German plural system continues to challenge ED architectures.",
"Our neural model struggles to accurately predict the distribution of /-s/ for existing German nouns.",
"On novel nouns, it generalizes the contextually most frequent plural marker /-e/; its predictions are less variable than speaker productions, and show different patterns of response to words which are phonologically typical (Rhymes) as opposed to atypical (Non-Rhymes).",
"Regardless of the minority-default question, it seems that ED models do not necessarily function as good cognitive approximations for inflectional systems like German number, in which no class holds the majority.",
"The authors thank Yevgen Matusevych, Maria Corkery, Timothy O'Donnell, the Agora reading group at Edinburgh, and the ACL reviewers for helpful feedback.",
"This work was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1) and the University of Edinburgh.",
"This work was also supported by a James S McDonnell Foundation Scholar Award (#220020374) to the second author."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Predicting reading time has been a subject of much previous work, focusing on how different words affect human processing, measured by reading time.",
"However, previous work has dealt with a limited number of participants as well as word level only predictions (i.e. predicting the time to read a single word).",
"We seek to extend these works by examining whether or not document level predictions are effective, given additional information such as subject matter, font characteristics, and readability metrics.",
"We perform a novel experiment to examine how different features of text contribute to the time it takes to read, distributing and collecting data from over a thousand participants.",
"We then employ a large number of machine learning methods to predict a user's reading time.",
"We find that despite extensive research showing that word level reading time can be most effectively predicted by neural networks, larger scale text can be easily and most accurately predicted by one factor, the number of words.",
"Understanding how we read and process text has proven a large area of both cognitive science and natural language processing (NLP) research (Graesser et al., 1980; Liversedge et al., 1998; Frank et al., 2013a; Busjahn et al., 2014; Weller and Seppi, 2019, 2020).",
"Online content providers and consumers are also interested in this research; in the increasingly busy world of today, consumers lack the time to read long articles, prompting content creators to aim for specific reading lengths.",
"Many providers 1 have even examined traffic patterns in order to determine the ideal content length, with the general consensus finding 3-7 minutes of Work done as part of a capstone course with Adobe 1 Medium's study can be found here.",
"content optimal.",
"Thus, having established the optimal content length, article writers now face the next hurdle: when has their post reached the ideal length?",
"A news article about last night's football game may be easier to read than a technical post about NLP.",
"Perhaps the font type or size influences the consumer's comprehension, slowing down the reading process.",
"There are many factors, both textual and stylistic, that quickly come to mind when considering the potential reading time of an article.",
"Although there has been an extensive body of work on reading time prediction applied to single words (Frank, 2017; Willems et al., 2015; Shain, 2019; van Schijndel and Linzen, 2018), to the best of our knowledge there has been no research into understanding these effects on document sized text.",
"In this paper, we seek to address this area by building models to predict, understand, and interpret factors that could affect an article's reading time.",
"Our contributions to this area include a methodically designed statistical study, consisting of 1130 experimental trials and 32 different articles, experimental results for a broad collection of machine learning algorithms on this novel task, and discussion of potential reasons why more complex models fail.",
"To the best of our knowledge, this is the largest experimental study for reading time research, in terms of participants and breadth of factors.",
"All code and datasets are publicly available.",
"2 2 Related Work Researchers have made significant progress in predicting the reading time of single words, illustrating the effect of different words on the human brain (Frank et al., 2013b; Shain, 2019; Goodkind and Bicknell, 2018) for many different texts (Futrell et al., 2018; Kennedy et al., 2003).",
"Although this 2 The code and datasets for our experiments can be found at http://github.com/orionw/DocumentReadingTime effort is focused more on the cognitive effects of words, these results show that scientists can accurately predict the reading time of individual words in context.",
"With the rise in popularity of machine learning techniques, many scientists have found the most success through these methods, with the most recent research showing significant improvements from combining neural networks as language models with linear mixed models (LMMs) (Goodkind and Bicknell, 2018; de Vries et al., 2018; van Schijndel and Linzen, 2018).",
"However, all previous research has been confined to the effect of a specific word in context, which naturally leads to the question of how this research generalizes.",
"A separate but similar line of research, readability, measures the reading difficulty of a body of text.",
"This research area has investigated effects of readability in a plethora of areas: online vs paper (Kurniawan and Zaphiris, 2001), color and contrast (Legge et al., 1990), and writing style (Bostian, 1983).",
"The most famous readability metric for English, the FleschKincaid (Kincaid et al., 1975), uses the number of syllables and words to determine readability.",
"Other scientists have attempted to improve upon this simple metric, showing success in reading level classification with unigram language models (Si and Callan, 2001) or SVM models built on top of these basic textual characteristics (Pitler and Nenkova, 2008).",
"As previous metrics seem to be sufficient, recent research has focused on evaluating and comparing the diverse metrics on different domains (Sugawara et al., 2017; Redmiles et al., 2019).",
"We use these readability works to influence our choice of features, as readability seems inherently interwoven with reading time.",
"We employ the py-readability-metrics package to include 7 state-of-the-art metrics that we add to our data for the modeling task (Section 4, Appendix B).",
"We collected our reading time data from a statistical survey performed on Amazon's Mechanical Turk.",
"Since we were not physically present to observe the respondents we took a number of precautions and controls to ensure data quality.",
"We note however, that the inclinations of Mechanical Turk users align with our target audience: we would expect most readers of online content to be of a younger demographic, tech-savy, and prone to read as fast as possible.",
"In this section we will discuss our survey design, validation, and results.",
"In order to gather the maximum amount of information from a survey design, we implemented our survey following Fractional Factorial Design (FFD) (Box et al., 2005).",
"This method of survey collection allows us to exploit the sparsity-of-effects principle, gleaning the most information while only using a fraction of the effort of a full factorial design, in terms of experimental runs and resources.",
"This method works by defining two levels for each factor: for example, our factor font size had the levels 12 point and 16 point.",
"We extracted 8 factors with 2 levels, consisting of 2 8 unique surveys ( 2 8 3 = 32 using FFD) to design.",
"When choosing factors and levels, we focused on areas that would provide the most contrast in order to illustrate potential differences in reading time.",
"Although there are an almost endless number of factors that could potentially influence article reading time, the number of surveys needed to explore those factors increases exponentially; thus, we chose eight crucial factors.",
"Levels of the factor are indicated in parenthesis if applicable: font size (12 vs 16 point), font type (sans vs serif), subject matter (health vs. technology), genre (blog post vs news article), average syllables per word, number of words, average words per sentence, and average unigram frequency.",
"We note that we further collected the original article's text so that additional factors could be easily extracted for future analysis.",
"Again, these factors are not exhaustive but instead were chosen to give a representative sample for a specific area of online articles, while still showing contrast between documents (e.g. news articles vs blog posts or small vs large font).",
"To define the levels of our numeric features, such as unigram frequency or the average number of syllables, we collected 200 articles for the week of March 4th 2019, aggregating from different news and blog sources, but taking a maximum of three articles from each source (see a more comprehensive list on Github, as there are too many to list).",
"We took these articles, extracted our feature characteristics, and found the median of the distribution.",
"This number was then used as the cutoff between the two levels for that factor.",
"Unigram frequencies were computed using the wordfreq library, aggregating frequencies from numerous sources.",
"3 3 Details on which text corpora were aggregated can be found at https://github.com/LuminosoInsight/wordfreq/ 0 5 10 15 20 25 30 Article Number 100 200 300 400 500 R e a d i n g T i m e ( s ) 400 600 800 1000 1200 Article Length (words) 100 200 300 400 500 R e a d i n g T i m e ( s ) Figure 1: Left: boxplots for the results of each survey, with reading time in seconds.",
"With the requirements for each survey defined by the FFD, we gathered additional articles and parsed their features.",
"We then matched each one of the 32 combinations from the FFD to a unique article that contained those features.",
"In order to gather a large audience with similar characteristics to online readership, we distributed our survey through Amazon's Mechanical Turk using the Qualtrics platform.",
"Our survey flow consisted of five short demographic questions including age, gender, education level, familiarity with the article subject matter (health or technology) and their perception of their reading speed on a five point Likert scale (slow to fast).",
"They were then instructed to read the next page of the survey uninterrupted at their normal reading pace, after which they would be asked several basic comprehension questions for validation.",
"Each comprehension question was created to be easily answered if the user had read the article but non-trivial for those that had not.",
"See Appendix A for examples of comprehension questions.",
"If the user failed to answer any of the control questions correctly, the survey was terminated and the data was not used.",
"Due to the nature of Mechanical Turk, we employed various controls to ensure the quality of our data.",
"Many Mechanical Turk workers are prone to take multiple surveys concurrently, leave the page of the survey open for long periods of time, or rush through surveys in order to maximize their earnings.",
"However, the inclination to read through an article quickly is similar to that of online readers, thus, a crowdsourcer's work is acceptable as long as they pass our validation.",
"In order to control for these tendencies, we included many checks throughout each stage of the survey.",
"If the answers to the demographic questions were unrealistic (such as age greater than 90 or less than 18), we rejected the survey.",
"If the user failed to answer a validation question, such as asking the user to select a certain box before proceeding to the next page, they were disqualified.",
"If the user spent an unrealistic amount of time on the reading page due to any reason (less than two minutes or greater than ten minutes 4 for a long article, as an example) or failed to answer any of the comprehension questions, their data was not used.",
"The results from our surveys are plotted in Figure 1, consisting of 1130 respondents.",
"Note that the results have significant variance, especially as the length of the article increases.",
"More plots of the data can be found in our Github repository.",
"With the data gathered and readability metrics calculated (see Section 2), we explore the results from a variety of different models.",
"We employ three categories of models: models that only use 4 These times were found by initially performing this survey on a limited number of respondents with no limits and then extending the min/max by an additional two minutes.",
"extracted features, models that only use the text, and models that stack textual-only models with model features.",
"Basic extracted feature models include a vanilla Linear Regression (LR) with only the number of words variable (word), a Linear Regression model with all variables (all), Random Forests, K-Nearest Neighbors (KNN), and a Multi-Layered Perceptron (MLP).",
"As using the entire article as input for the text only models is not computationally feasible, we use modern neural networks to embed the text as a document embedding, using a linear output layer for regression.",
"We tried various state-of-the-art embedding models including roBERTa (Liu et al., 2019; Devlin et al., 2018), XLNet (Yang et al., 2019), and ELMo (Pe-ters et al., 2018).",
"The stacked models combine the document embedding with the extracted features, feeding them both into an MLP.",
"Embeddings use the Flair (Akbik et al., 2018) and HuggingFace (Wolf et al., 2019) libraries.",
"We use two baselines: a commonly used rule-of-thumb for online reading estimates, 240 words per minute (WPM), and the sum of the word-level predictions (Surprisal-Sum) from a surprisal model in order to compare with recent works (van Schijndel and Linzen, 2018; Shain, 2019).",
"For the Surprsial-Sum baseline predictions, we employ the model used in (van Schijndel and Linzen, 2018), where predictions are made by training a Linear Mixed Model over surprisal data.",
"The results from our experiments are found in Table 1.",
"We see that the most effective models were the simplest: the 240 WPM baseline, linear regression, k-nearest neighbors, and random forests.",
"Using the word count only linear model, because of its easy interpretability, shows us an R 2 value of 0.40, meaning that 40% of the variance of reading time can be explained by the number of words in the article.",
"We also see that scaling a regression model to include demographic and textual information (the all linear regression model) does not seem to provide significant improvements in prediction.",
"Given the amount of empirical evidence from word level reading time prediction, we were surprised to see a dearth of similar results for document level prediction.",
"Models that provide strong results in word level prediction, such as varieties of neural networks, fail to be as effective as the simpler models.",
"Perhaps this is due to the length of Features Only: RMSE (sd) MAE (sd) 240 WPM 66.0 10.7 52.1 8.3 Surprisal-Sum 141.5 42.8 118.4 35.8 MLP 84.8 10.5 67.2 7.0 Random Forest 64.3 7.7 50.2 5.6 LR (word) 65.5 10.7 51.1 7.9 LR (all) 65.7 9.8 51.6 8.0 KNN 70.1 9.6 54.3 7.1 Text-Only: RMSE (sd) MAE (sd) XLNet 81.0 8.6 62.8 6.6 ELMo 84.3 13.1 66.7 8.6 roBERTa 83.2 13.9 66.3 9.1 Stacked: RMSE (sd) MAE (sd) XLNet/MLP 80.3 10.4 62.9 8.0 ELMo/MLP 83.2 13.7 66.4 9.4 roBERTa/MLP 83.5 10.5 66.1 6.9 Table 1: Results on the reading time prediction task.",
"the document small changes in word level reading time simply get evened out at the document level (for example, see the Surprisal-Sum model).",
"Alternatively, the level of surprisal in online articles may remain constant with the number of words.",
"Given previous work in single word reading time prediction, we conducted a large novel study to test whether document level reading time could be predicted.",
"We carefully designed an experiment containing a myriad of potential factors to measure reading time, distributed the survey to more than a thousand people, and collected the results into the first dataset of its kind.",
"We then employed machine learning techniques to predict the time to read, finding that simpler models were the most competitive, with the number of words as the sole critical factor in predicting reading time.",
"We hope this resource can benefit future research into developing techniques to model and understand human responses to document sized text.",
"We would like to thank Hayden Harris for his help and advice during the capstone project."
] | [
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"result",
"result",
"result",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"objective",
"result",
"objective",
"other"
] |
[
"Modern neural machine translation (NMT) models have achieved competitive performance in standard benchmarks such as WMT.",
"However, there still exist significant issues such as robustness, domain generalization, etc.",
"In this paper, we study NMT models from the perspective of compositional generalization by building a benchmark dataset, CoGnition, consisting of 216k clean and consistent sentence pairs.",
"We quantitatively analyze effects of various factors using compound translation error rate, then demonstrate that the NMT model fails badly on compositional generalization, although it performs remarkably well under traditional metrics.",
"Neural machine translation (NMT) has shown competitive performance on benchmark datasets such as IWSLT and WMT (Vaswani et al., 2017; Edunov et al., 2018a; Liu et al., 2020a), and even achieves parity with professional human translation under certain evaluation settings (Hassan et al., 2018).",
"However, the performance can be relatively low in out-of-domain and low-resource conditions.",
"In addition, NMT systems show poor robustness and vulnerability to input perturbations (Belinkov and Bisk, 2018a; Cheng et al., 2019).",
"One example is shown in Table 1, where simple substitution of a word yields translation with completely different semantics.",
"Many of these issues origin from the fact that NMT models are trained end-to-end over large parallel data, where new test sentences can be sparse.",
"Disregarding out-of-vocabulary words, a main cause of sparsity is semantic composition: given a limited vocabulary, the number of possible compositions grows exponentially with respect to the composite length.",
"The ability to understand and Input Translation Taylor breaks his promise \u0000 (Taylor keeps his promise) James breaks his promise y \u0000 (James breaks his promise) Table 1: Translation samples obtained from one popular web translation engine on January 19, 2021. produce a potentially infinite number of novel combinations of known components, namely compositional generalization (Chomsky; Montague; Lake and Baroni, 2018; Keysers et al., 2020), has been demonstrated deficient in many machine learning (ML) methods (Johnson et al., 2017a; Lake and Baroni, 2018; Bastings et al., 2018; Loula et al., 2018; Russin et al., 2019a). In this paper, we study compositional generalization in the context of machine translation. For example, if red cars and blue balls are seen in training, a competent algorithm is expected to translate red balls correctly, even if the phrase has not been seen in training data.",
"Intuitively, the challenge increases as the composite length grows.",
"Recently, several studies have taken steps towards this specific problem.",
"They either use a few dedicated samples (i.e., 8 test sentences) for evaluation (Lake and Baroni, 2018; Li et al., 2019b; Chen et al., 2020), or make simple modifications in sampled source sentences such as removing or adding adverbs, and concatenating two sentences (Raunak et al., 2019; Fadaee and Monz, 2020a).",
"Such experimental data is limited in size, scope and specificity, and the forms of composition are coarse-grained and non-systematic.",
"As a result, no qualitative conclusions have been drawn on the prevalence and characteristics of this problem in modern NMT.",
"We make a first large-scale general domain investigation, constructing the CoGnition dataset ( Co mpositional G eneralizati on Mach i ne T ranslat ion Dataset), a clean and consistent paral-Dataset Type Source Target SCAN Atoms jump, twice JUMP JUMP Compounds jump twice CFQ Atoms Who [predicate] [entity], directed, Elysium SELECT DISTINCT",
"lel dataset in English-Chinese, along with a synthetic test set to quantify and analyze the compositional generalization of NMT models.",
"In particular, we define frequent syntactic constituents as compounds , and basic semantic components in constituents as atoms .",
"In addition to the standard training, validation and test sets, the CoGnition dataset contains a compositional generalization test set, which contains novel compounds in each sentence, so that both the generalization error rate can be evaluated, and its influence on BLEU (Papineni et al., 2002) can be quantified.",
"Our compositional generalization test set consists of 2,160 novel compounds, with up to 5 atoms and 7 words.",
"In this way, generalization ability can be evaluated based on compound translation error rate.",
"Empirical results show that the dominant Transformer (Vaswani et al., 2017) NMT model faces challenges in translating novel compounds, despite its competitive performance under traditional evaluation metrics such as BLEU.",
"In addition, we observe that various factors exert salient effects on model's ability of compositional generalization, such as compound frequency, compound length, atom co-occurrence, linguistic patterns, and context complexity.",
"The CoGnition dataset along with the automatic evaluation tool are realesed on https://github.com/yafuly/CoGnition.",
"Analysis of NMT.",
"Our work is related to research analyzing NMT from various perspectives.",
"There has been much linguistic analysis of NMT representations (Shi et al., 2016; Belinkov et al., 2017; Bisazza and Tump, 2018), interpretability (Ding et al., 2017; He et al., 2019; Voita et al., 2019a), and attention weights (Voita et al., 2019b; Michel et al., 2019).",
"Robustness is also an important research direction.",
"Work has shown that NMT models are prone to be negatively affected by both synthetic and natural noise (Belinkov and Bisk, 2018b; Cheng et al., 2018; Ebrahimi et al., 2018).",
"For better exploration of robust NMT, Michel and Neubig (2018) propose an MTNT dataset containing several types of noise.",
"Wang et al. (2020) provide in-depth analyses of inference miscalibration of NMT resulting from the discrepancy between training and inference.",
"Our work is in line but we discuss robustness from the perspective of compositional generalization.",
"In this respect, Lake and Baroni (2018) propose a simple experiment to analyze compositionality in MT, followed by Chen et al. (2020) and Li et al. (2019b).",
"Specifically, they introduce a novel word dax , and their training data contains a single pattern of sentence pairs (e.g. I am daxy , je suis daxiste ) while the test set contains different patterns.",
"However, their work is limited in that there are only 8 sentences in the test set.",
"Raunak et al. (2019) observe a performance drop on a dataset of concatenated source sentences.",
"Fadaee and Monz (2020b) modify source sentences by removing adverbs, substituting numbers, inserting words that tend to keep syntax correct (e.g. very ), and changing the gender, and find unexpected changes in the translation.",
"In contrast to these studies, we quantitatively measure compositionality of NMT under compound translation error rate.",
"Translation involves various challenges such as low-frequency words, polysemy and compositional complexity.",
"In this work, we focus on how the NMT model generalizes to complex compositions in a controllable setting and minimize the effects of the other factors.",
"Compositional Generalization.",
"Neural networks have been shown sample-inefficient, requiring large-scale training data, which suggests that they may lack compositionality (Lake and Baroni, 2018).",
"Lake and Baroni (2018) introduce the SCAN dataset to help study compositional generalization of neural networks, which has received increasing interests (Russin et al., 2019b; Dess` and Baroni, 2019; Li et al., 2019c; Lake, 2019; Andreas, 2020; Gordon et al., 2020).",
"Various benchmarks have been proposed including in the area of visual reasoning (Johnson et al., 2017b; Hudson and Manning, 2019), mathematics (Saxton et al., 2019), and semantic parsing (CFQ) (Keysers et al., 2020).",
"However, no benchmark has been dedicated to machine translation in practice.",
"We fill this gap by introducing a dataset with 216,000 instances and an average sentence length of 9.7 tokens.",
"Following Keysers et al. (2020), compositional generalization is defined as the capacity to systematically generalize to novel combinations of components which are learned sufficiently during training.",
"Key elements to measure compositional generalization include atoms and compounds .",
"Specifically, atoms are primitive elements in the train set whereas compounds are obtained by composing these atoms.",
"The research question is whether neural models perform well on unseen compounds.",
"Take Table 2 for example, in the SCAN dataset, the atoms are simple commands such as jump and the composite command jump twice is a compound.",
"In the CFQ, the compounds are questions such as Who directed Elysium , and the atoms correspond to the primitive elements in the questions such as the predicate directed , the question patterns Who [predicate] [entity] and the entities Elysium .",
"In theory, compounds in MT can be defined as phrases, sentences or even document.",
"In practice, however, we want to control the number of atoms in a novel compound for quantitative evaluation.",
"In addition, it can be highly difficult to construct a large-scale dataset where novel compounds are sentences of practical sizes (the number of synthesized sentences increases exponentially with their length) while ensuring their grammatical correctness.",
"Therefore, we constrain compounds to syntactic constituents, and define atoms as basic semantic components in constituents according to syntactic and semantic rules for forming constituents (Par-tee, 1995).",
"As a result, we randomly assign multiple sentential contexts for investigating each novel compound.",
"Table 2 shows a contrast between our dataset and existing datasets for compositional generalization in semantics.",
"Mistakes caused by weakness in computational generalization can be easily found in state-of-the-art NMT models.",
"In particular, we train a Transformer-based model (Vaswani et al., 2017) on WMT17 En-Zh Dataset 1 .",
"One sentence in the standard test set, but the problem is , with the arrival of durant , thompson 's appearance rate will surely decline , which is bound to affect his play , is translated into F / , @ \\ p y 0 e , d n \u0000 h M , \u0000 q 0 h (English: but the problem is , with the arrival of durant , thompson 's will surely look worse , which is bound to affect his play ).",
"The novel compound appearance rate is composed of two atoms (i.e., appearance and rate ), both with a high frequency of more than 27,000 times in the training set.",
"However, the sentence semantics is completely distorted due to the failure of semantic composition, which is possibly influenced by the context word play .",
"More importantly, as the overall translation highly overlaps with the reference, the model achieves a high score in similarity-based metrics such as BLEU, demonstrating that fatal translation errors can be overlooked under traditional evaluation metrics.",
"Figure 1 gives an overview of our data construction process.",
"We first source monolingual data (Section 4.1), and then build parallel data based by translation (Section 4.2).",
"Then we synthesize a test set of novel compounds (Section 4.3), and offer an automatic evaluation method (Section 4.4).",
"Our goal is to focus on compositional generalization and minimize the influence of additional factors such as polysemy (Berard et al., 2019), misalignment (Munteanu and Marcu, 2005), and stylistic problems (Hovy et al., 2020).",
"The dataset should ideally have following characteristics.",
"First, the vocabulary size should be small and contain only words of high-frequency in order to avoid problems caused by rare words.",
"In other words, variety of composition should come from combining different frequent words instead of word diversity, as suggested in (Keysers et al., 2020).",
"Metaphorical words, which can increase the translation difficulty, should be excluded.",
"Second, source sentences should not be too long or have complex syntactic structures.",
"As a result, a sentence can be 1 http://www.statmt.org/wmt17/ Monolingual Data Source WMT IWSLT ROC Story Parallel Data Construction The small dog was sick.",
"translated literally, directly, and without rhetoric.",
"Third, the corpus size should be large enough for training an NMT model sufficiently.",
"Widely-adopted corpora such as parallel data released on WMT and IWSLT 2 have large vocabularies and also contain noisy sentences and rich morphology (Li et al., 2019a), which do not fully meet our goal.",
"We choose Story Cloze Test and ROCStories Corpora (Mostafazadeh et al., 2016, 2017) as our data source.",
"The dataset is created for commonsense story understanding and generation, and consists of 101903 5-sentence stories.",
"These stories are rather simple in items of vocabulary and syntax, but still contain rich phrases.",
"In addition, the topic is constrained to daily life.",
"Since the vocabulary size of 42 , 458 is large, we select the top 2 , 000 frequent words as our vocabulary and extract sentences where the words are exclusively from the restricted vocab.",
"Moreover, sentences that are longer than 20 words are removed.",
"In this way, we finally obtain 216 , 246 2 https://wit3.fbk.eu/ sentences for parallel data construction.",
"More detailed statistics including comparison to WMT and IWSLT data are shown in Appendix B. 4.2 Parallel Data Construction We take an MT post-editing method to construct parallel data, first using a public translation engine to obtain model-generated translations, and then requesting expert translators to post-edit them.",
"The following aspects are highlighted: Ensure the fluency of translations.",
"Ensure word-level matching between translated sentences and source sentences.",
"Typically, every word should be correctly translated, without omission for legibility.",
"Finally, we obtain a parallel dataset of 216 , 246 sentences in CoGnition , and randomly split it into three subsets: 196 , 246 sentence pairs for training , 10 , 000 sentence pairs for validation , and 10 , 000 sentence pairs as the random test set .",
"In addition to the above split, we additionally make a compositional generalization test set , which is described in the next section.",
"We manually construct a special test set dedicated for evaluation of compositional generalization, by synthesizing new source sentences based on novel compounds and known contexts.",
"Designing Compound Patterns We use Berkeley Parser to obtain constituent trees (Kitaev and Klein, 2018).",
"In CoGnition, noun phrases (NP), verb phrases (VP) and positional phrases (PP) are three most frequent constituents, accounting for 85 .",
"1% of all constituents, and thus we construct compounds based on them.",
"According to syntactic and semantic rules (Partee, 1995), we choose basic semantic components as our atoms including determiners (DET), nouns (N), verbs (V), prepositions (P), adjectives (ADJ), and postpositive modifiers (MOD).",
"Specifically, postpositive modifiers include prepositional phrases and relative clauses, and can contain multiple words.",
"We consider them as a single atom due to their semantic inseparability.",
"In this way, we generate 4 compound patterns for NP, VP, and PP, respectively, which are listed in Table 3 with corresponding examples.",
"Making Novel Compounds We use Stanza (Qi et al., 2020) to obtain POS tagging for each word in training sentences.",
"We construct novel compounds by first selecting atom candidates with relatively consistent translation in the training set.",
"The frequency of candidate atoms covers a wide range from 34 to 73518 .",
"We list full set of atom candidates in Table",
"4. For constructing compounds, we enumerate all possible combinations of atoms according to the patterns in Table 3, and then remove those that are ungrammatical or likely to cause ethic issues, obtaining 2,160 compounds finally.",
"We do not deliberately make all compounds unseen, yet only 0 .",
"93% of them appear in the training data.",
"Synthesizing Source Sentences We embed the compounds in specific context to form complete source sentences.",
"Concretely, we first apply Berkeley Parser on the training sentences to obtain sentence templates, where certain constituents are replaced by placeholders according to their constituent types, e.g., NP-placeholder spent a lot of time to set up a wedding . .",
"Then we select 5 sentence templates for each constructed compound accordingly, so that every compound can be evaluated under 5 different contexts.",
"To distinguish from VP and PP, we put NP compounds only in sentences with the placeholder outside VP and PP.",
"Making Reference To maintain statistical consistency, target translations of synthetic sentences are also obtained using the same MT post-edit approach.",
"In addition to the annotation principles listed in 4.2, we set several additional rules: Filter sentences with ethical issues and replace them with other synthetic ones.",
"Finally, we obtain a compositional generalization test set ( CG test set ) of 10 , 800 parallel sentences.",
"The final dataset statistics is shown in table",
"5. 4.4 Automatic Evaluation We mainly adopt human evaluation for the experiments of this paper (Section 5) for ensuring reliability of findings.",
"Despite its accuracy, human evaluation can be expensive.",
"To facilitate fast evaluation in future research, we introduce an automatic evaluation approach to quantify a model's generalization ability on our CG test set.",
"In particular, we manually construct a dictionary for all the atoms based on the training set (See Appendix C).",
"The prerequisite of correctly translating one compound is that all of the atom translations should be contained.",
"Besides, in most cases the translation of nouns should be placed after that of other atoms.",
"Based on this, we design a heuristic algorithm to determine whether compounds are translated correctly.",
"With the human annotation as ground truth, our automatic evaluation tool achieves a precision of 94 .",
"80% and a recall of 87 .",
"05% , demonstrating it can serve as an approximate alternative to human evaluation.",
"We conduct experiments on CoGnition dataset and perform human evaluation on the model results.",
"We tokenize the English side using Moses tokenizer and do not apply byte pair encoding (BPE) (Sen-nrich et al., 2016) due to the small vocabulary (i.e., 2000).",
"The Chinese sentences are segmented by Type Candidates DET the, every, any, another, each N car, dog, girl, doctor, boyfriend, apartment, child, sandwich chair, farm, building, hat, waiter, airplane, lawyer, peanut, farmer, clown, bee ADJ small, large, red, special, quiet, empty, dirty, lazy, smart, fake, silly MOD he liked, at the store, on the floor V took, told, found, asked, saw, left, gave, lost, liked woke, stopped, invited, met, caught, heard, hated, watched, visited, chose P to, for, on, with, from, about, before, like, around inside, without, behind, under, near, towards, except, toward Table 4: Atoms used in constructing compounds, sorted by frequency in the training set.",
"jieba segmenter 3 .",
"We employ BPE with 3,000 merge operations, generating a vocabulary of 5,500 subwords.",
"We focus on Transformer (Vaswani et al., 2017) because of its state-of-the-art performance on machine translation (Edunov et al., 2018b; Takase and Kiyono, 2021; Raffel et al., 2020; Zhu et al., 2020; Liu et al., 2020b) and better performance on existing compositional generalization dataset (Daniel et al., 2019).",
"We implement our model using BASE configuration provided by Fairseq (Ott et al., 2019).",
"The model consists of a 6-layer encoder and a 6-layer decoder with the hidden size 512.",
"We tie input and output embeddings on the target side.",
"The model parameters are optimized by Adam (Kingma and Ba, 2015), with \u0000 1 = 0 .",
"1 , \u0000 2 = 0 .",
"98 and = 10 \u0000 9 .",
"The model is trained for 100,000 steps and we choose the best checkpoint on validation set for evaluation.",
"We report character-level BLEU scores using SacreBLEU (Post, 2018) to measure the overall translation performance.",
"In addition, we request expert translators to annotate the correctness of compound translation.",
"Translators are asked to only focus on examining whether the compound itself is translated correctly or not, disregarding errors in context.",
"Specifically, a compound is correct only if its translation contains semantic meaning of all atoms and is fluent in human language.",
"Since each of the 2,160 compounds is provided with 5 contexts, we can compute the translation error-rate for each compound.",
"3 https://github.com/fxsjy/jieba 5.2 Main Results Table 6 shows the results.",
"Besides the CG test set , we also list results on three of its subsets, which only contain NP, VP or PP compounds respectively.",
"The model achieves a 69.58 BLEU score on the random test set , which partly indicates distributional consistency and quality of the dataset.",
"In comparison, the performance on the CG test set drops dramatically by more than 20 BLEU points.",
"Given that the only difference between synthetic sentences and training sentences is the unseen compounds (i.e., contexts are seen in training), the decrease of 20 BLEU points indicates that unseen compounds pose a significant challenge, which is however easy to be overlooked in traditional evaluation metrics.",
"For example, the model mis-translates alas , he became sick from eating all of the peanut butter on the ball into \u0000 : : @ \u0000 q \u0000 \u0000 (English: alas , he became sick from eating all of the peanut butter on the field ).",
"With a minor mistake on the compound on the ball , the model achieves a sentence-level BLEU of 61.4, despite that the full sentence meaning is largely affected.",
"In other words, the BLEU score of 69.58 can be misleading since novel compounds can be rare in the random test set .",
"Such mistakes in generalizing new compounds can severely hinder overall performance of translation engines in practice, as shown earlier in Table",
"1. Also, we calculate BLEU for the original training sentences that provide contexts for the CG test set (row 3).",
"The model achieves 99.74 BLEU, further demonstrating that the performance degradation is mainly caused by the unseen compounds.",
"Instance-wise, 27.31% compounds are translated incorrectly.",
"However, when aggregating all 5 contexts, 61 .",
"62% compounds suffer at least one incorrect translation.",
"This suggests that a well-trained NMT model is not robust in translating compounds, though all atoms within them are highly frequent in Test Set Error Rate BLEU Instance Aggregate Random-test -69.58 Train -99.74 CG-test 27.31% 61.62% 48.66 CG-test/NP 21.94% 54.03% 51.29 CG-test/VP 22.25% 55.56% 47.55 CG-test/PP 37.72% 75.28% 47.14 Table 6: BLEU score and compound translation error rate on the random test set and the CG test set .",
"the training set.",
"We also observe that the error rate of PP compounds, 37 .",
"72% , is much higher than the other two, 21 .",
"94% and 22 .",
"25% , which we will discuss in detail in the following section.",
"We conduct experiments to explore in what situations the model is error-prone by considering compound frequency, compound length, compound structure, atom frequency, atom co-occurrence, and the complexity of external context.",
"Intuitively, compounds with higher frequencies in the training set are easier to infer.",
"We classify compounds according to their frequency levels, including many-shots (frequency higher than 10), few-shots (frequency from 1 to 10) and zero-shot, and show the error rate for each bucket in Figure",
"2. The model translates all the many-shots compounds correctly.",
"For few-shot compounds, translation error rate increases to 5 .",
"00% , but is still much lower than zero-shot compounds with an error rate of 27 .",
"53% .",
"The result suggests the model is good at memorizing correspondence between sentence segments.",
"However, the model deteriorates severely when test samples are unseen in the training set, which further confirms model's weakness in compositional generalization (Lake and Baroni, 2018).",
"As shown in Figure 3, the error rate grows with the increase of compound length (i.e., the number of atoms in a compound).",
"Only 4.50% of the shortest compounds are translated incorrectly, each of which consists of a determiner and a noun.",
"The error rate increases to 13.72% when the compound length grows to 3 atoms (e.g., the smart lawyer ).",
"The longest compounds contain a determiner, a noun, an adjective, a modifier and a preposition or verb in each of them, e.g., taking every special chair he liked .",
"The error rate increases to 36.63%, demonstrating that it is more difficult to generalize in longer compounds, which contain richer semantic information.",
"We conjecture that if the range of compound is further expanded, the error rate will be much higher.",
"We empirically divide compounds into multiple groups according to the minimum frequency of their atoms, where each group consists of similar numbers of compounds.",
"The intuition is that the atom with low frequency might be difficult to translate and therefore hinders the whole compound translation.",
"We fix the compound length to 3 in order to reduce effects of compound length.",
"(cid:1) Figure 5: Effect of atom co-occurrence on compound translation error rate.",
"correlation with the atom frequency.",
"This can be because all atoms in our corpus are simple and relatively frequent and thus it is easy for the NMT model to memorize the semantics of most atoms.",
"Therefore, simply increasing atom frequency does not enhance model's generalization ability of novel compounds.",
"We observe similar patterns for compounds of other lengths (Appendix A).",
"Although the NMT model may never see a compound, there can exist many local segments where atoms co-occur.",
"For example, in the unseen compound the smart lawyer , smart and lawyer may occur within some training sentences.",
"Intuitively, the compounds of which atoms co-occur more frequently may be translated better.",
"We calculate pointwise mutual information (PMI) and compare error rates of compounds with positive or negative mean PMI scores (MPMI): MPMI( C ) = 1 MN \u0000 1 X i =1 NX j = i +1 PMI( a i , a j ) , (1) where a i is the i -th atom in the compound C , N is the compound length, M is the number of possible combinations of two atoms, and PMI score is computed as: PMI ( x, y ) = log p ( a i , a j ) p ( a i ) p ( a j ) , (2) where the probabilities p ( a i ) and p ( a i , a j ) are obtained by dividing the number of n-grams in which one word or both words occur by the total number of n-grams 4 .",
"We divide compounds into 4 groups by their length and compare error rates within each group.",
"As shown in Figure 5, across all groups, the error rates with positive mean PMI scores are lower than those with negative ones, verifying our hypotheses.",
"4 We use 5-gram here (cid:1) 0.00 0.10 0.20 0.30 0.40 0.50 0.60 P1.1 P1.2 P1.3 P1.4 P2.1 P2.2 P2.3 P2.4 P3.1 P3.2 P3.3 P3.4 E rr o r R a t e Pattern # Figure 6: Compound translation error rates of different patterns.",
"Figure 6 shows the error rates of all compound patterns in Table",
"3. The MOD atom exerts salient influence on translation error rate.",
"The error rate of compounds with MOD is 19.78% higher than those without on average.",
"In contrast, adding ADJ into compounds only increases error rate by 2.66%.",
"The major difficulty caused by MOD is word reordering.",
"One can translate the small dog monotonically without adjusting word order.",
"However, compounds like the dog he liked require the model to recognize he liked as MOD and put its translation before that of the dog in Chinese.",
"We find many cases where the model translates such compounds without reordering or breaking the connection between nouns and modifiers.",
"Across these groups, we can see that the error rate of NP (Pattern 1.*) is generally lower than that of VP (Pattern 2.*) and PP (Pattern 3.*).",
"Such phenomenon is more obvious for the patterns without MOD.",
"The reason is that compounds in Pattern 1.* are generally shorter and contain less semantic and syntactic information.",
"However, the error rates of Pattern 2.3 and 2.4 are lower than other patterns with MOD (i.e., Pattern 1.3, 1.4, 3.3 and 3.4), indicating the model performs better in V+DET(+ADJ)+NN+MOD .",
"This can be because under certain situations the MOD can be useful for correctly translating verbs, which are more commonly seen in the training set, e.g., found the chair on the floor .",
"We also observe that compounds of PP (Pat-tern 3.*) are more difficult to translate compared with VP (Pattern 2.*), although both types of compounds share the same compound length.",
"In the training set, verbs typically have consistent translations, whereas the meanings of prepositions vary with contexts.",
"Therefore prepositional compounds are more difficult to translate as more context infor-0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 [ 1 .",
"Due to the nature of NMT, the semantic representation of each compound is context-aware.",
"Intuitively, translation of compounds is also influenced by external context, which is sentential in our case but can also be document-level in practice.",
"We investigate effects of context lengths and sentence comprehension difficulty.",
"In particular, the context length is calculated by subtracting the sentence length by the number of words in the compound.",
"Comprehension difficulty of the training sentences which provide contexts, is quantified by the dependency distance (Liu, 2008): MMD( x ) = 1 N \u0000 1 P Ni D i , where N is the number of words in the sentence and D i is the dependency distance of the i -th syntactic link of the sentence.",
"The results are shown in Figure 7.",
"The translation error rate increases stably with the context length as well as the dependency distance.",
"These observations demonstrate that the generalization for novel compounds correlates strongly with context complexity.",
"Sentences with higher dependency distances are harder for model to comprehend during training.",
"Given that our test sentences are restricted to 20 words, compositional generalization can be more challenging in practice where average sentence lengths can be much longer.",
"We proposed a dedicated parallel dataset for measuring compositional generalization of NMT and quantitatively analyzed a Transformer-based NMT model manually.",
"Results show that the model exhibits poor performance on novel compound translation, which demonstrates that the NMT model suffers from fragile compositionality, and it can be easily overlooked under transitional metrics.",
"To the best of our knowledge, we are the first one to propose a practical benchmark for compositionality of NMT, which can be a testbed for models tailored for this specific problem.",
"As mentioned, we collected our data from Story Cloze Test and ROCStories Corpora that all are public to academic use, and they contain no sensitive information (Mostafazadeh et al., 2016, 2017).",
"The legal advisor of our institute confirms that the sources of our data are freely accessible online without copyright constraint to academic use.",
"Our data construction involves manual annotation.",
"Annotators were asked to post-edit machine translation and filter out samples that may cause ethic issues, which do not involve any personal sensitive information.",
"We hired 4 annotators who have degrees in English Linguistics or Applied Linguistics.",
"Before formal annotation, annotators were asked to annotate 100 samples randomly extracted from the dataset, and based on average annotation time we set a fair salary (i.e., 32 dollars per hour) for them.",
"During their training annotation process, they were paid as well.",
"Yue Zhang is the corresponding author.",
"We thank all reviewers for their insightful comments.",
"This work is supported by National Natural Science Foundation of China (NSFC) under grant No.61976180 and a grant from Lan-bridge Information Technology Co., Ltd.",
"We thank colleagues from Lan-bridge for examining data and evaluating results.",
"Major contributors include Xianchao Zhu, Guohui Chen, Jing Yang, Jing Li, Feng Chen, Jun Deng and Jiaxiang Xiang."
] | [
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other"
] |
[
"Languages are continuously undergoing changes, and the mechanisms that underlie these changes are still a matter of debate.",
"In this work, we approach language evolution through the lens of causality in order to model not only how various distributional factors associate with language change, but how they causally affect it.",
"In particular, we study slang, which is an informal language that is typically restricted to a specific group or social setting.",
"We analyze the semantic change and frequency shift of slang words and compare them to those of standard, nonslang words.",
"With causal discovery and causal inference techniques, we measure the effect that word type (slang/nonslang) has on both semantic change and frequency shift, as well as its relationship to frequency, polysemy and part of speech.",
"Our analysis provides some new insights in the study of language change, e.g., we show that slang words undergo less semantic change but tend to have larger frequency shifts over time.",
"1 1 Introduction Language is a continuously evolving system, constantly resculptured by its speakers.",
"The forces that drive this evolution are many, ranging from phonetic convenience to sociocultural changes (Blank, 1999).",
"In particular, the meanings of words and the frequencies in which they are used are not static, but rather evolve over time.",
"Several previous works, in both historical and computational linguistics, have described diachronic mechanisms, often suggesting causal relationships.",
"For example, semantic change, i.e. change in the meaning of a word, has both been suggested to cause (Wilkins, 1993; Hopper and Traugott, 2003) and be caused by (Hamilton et al., 2016) polysemy, while also part Equal contribution.",
"of speech (POS) has been implied to be a causal factor behind semantic change (Dubossarsky et al., 2016).",
"However, none of these studies perform a causal analysis to verify these claims.",
"Causality (Pearl, 2009) allows us to not only infer causal effects between pairs of variables, but also model their interactions with other related factors.",
"In this work, we focus on the linguistic evolution of slang, defined as colloquial and informal language commonly associated with particular groups (Gonzlez, 1998; Bembe and Beukes, 2007), and use a causal framework to compare the change dynamics of slang words to those of standard language.",
"More specifically, we compare the semantic change as well as the changes in frequency, i.e., 1422 frequency shift , over time between slang words and standard, nonslang words.",
"We learn a causal graphical model (Spirtes et al., 2000) to assess how these variables interact with other factors they have been previously found to correlate with, such as frequency , polysemy and part of speech (Dubossarsky et al., 2016; Hamilton et al., 2016).",
"Having discovered a graph, we proceed to use do-calculus (Pearl, 1995) to evaluate the causal effects of a word's type (slang/nonslang) on semantic change and frequency shift.",
"Semantic change is measured using the average pairwise distance (APD) (Sagi et al., 2009; Giulianelli et al., 2020) between time-separated contextualized representations, which were obtained from a Twitter corpus via a bi-directional language model (Liu et al., 2019).",
"Our method builds on recent semantic change literature (Schlechtweg et al., 2020), with novel additions of dimensionality reduction and a combined distance function.",
"By deploying a causal analysis, we establish that there is not just an association, but a direct effect of a word's type on its semantic change and frequency shift.",
"We find that a word being slang causes it to undergo slower semantic change and more rapid decreases in frequency.",
"To illustrate, consider the slang word duckface and the nonslang word inclusive as shown in Figure 1.",
"Duckface is a face pose commonly made for photos (Miller, 2011) in the early 2010s, and while it has largely decreased in frequency since, its meaning has not changed.",
"In contrast, the nonslang word inclusive has developed a new usage in recent years (Merriam-Webster, 2019) and was given a high semantic change score by our model.",
"Our analysis also sheds light on a couple of previous findings in the diachronic linguistics literature.",
"We find support for the S-curve theory (Kroch, 1989), showing a causal effect from a word's polysemy to its frequency.",
"This relationship is evident in the increase in frequency that the word inclusive displays in Figure 1 after it develops a new meaning (Merriam-Webster, 2019).",
"However, similar to Dubossarsky et al. (2017), we do not find causal links to semantic change from frequency, polysemy, or POS, which have been suggested in previous works (Hamilton et al., 2016; Dubossarsky et al., 2016).",
"In summary, our main contributions are threefold:",
"(i) we formalize the analysis of change dynamics in language with a causal framework;",
"(ii) we propose a semantic change metric that builds upon contextualized word representations; and",
"(iii) we discover interesting insights about slang words and semantic change e.g., showing that the change dynamics of slang words are different from those of nonslang words, with slang words exhibiting both more rapid frequency fluctuations and less semantic change.",
"A typical method for measuring semantic change is by comparing word representations across time periods (Gulordava and Baroni, 2011; Kim et al., 2014; Jatowt and Duh, 2014; Kulkarni et al., 2015; Eger and Mehler, 2016; Schlechtweg et al., 2019).",
"With this approach, previous research has proposed laws relating semantic change to other linguistic properties (Dubossarsky et al., 2015; Xu and Kemp, 2015; Dubossarsky et al., 2016; Hamilton et al., 2016).",
"For instance, Dubossarsky et al. (2016) find that verbs change faster than nouns, whereas Hamilton et al. (2016) discover that polysemous words change at a faster rate, while frequent words change slower.",
"However, the validity of some of these results has been questioned via case-control matching (Dubossarsky et al., 2017), highlighting the influence of word frequency on the representations and thus on the semantic change metric (Hellrich and Hahn, 2016).",
"Such analyses can indeed give stronger evidence for causal effects.",
"In this work we take a methodologically different approach, considering observational data alone for our causal analysis.",
"The aforementioned works rely on fixed word representations, whereas more recent approaches (Hu et al., 2019; Giulianelli et al., 2020) have proposed semantic change measures based on contextualized word embeddings (Peters et al., 2018; Devlin et al., 2019), which can flexibly capture contextual nuances in word meaning.",
"This has lead to a further stream of work on semantic change detection with contextualized embeddings (Mar-tinc et al., 2020; Kutuzov and Giulianelli, 2020; Schlechtweg et al., 2020; Montariol et al., 2021; Kutuzov et al., 2021; Laicher et al., 2021).",
"We build upon this line of work and extend them using principal component analysis (PCA) and a combination of distance metrics.",
"Slang is an informal, unconventional part of the language, often used in connection to a certain setting or societal trend (Dumas and Lighter, 1978).",
"It can reflect and establish a sense of belonging to a group (Gonzlez, 1998; Bembe and Beukes, 2007; Carter, 2011) or to a generation (Citera et al., 2020; Earl, 1972; Barbieri, 2008).",
"Mattiello (2005) highlights the role slang plays in enriching the language with neologisms, and claims that it follows unique word formation processes.",
"Inspired by this, Kulkarni and Wang (2018) propose a data-driven model for emulating the generation process of slang words that Mattiello (2005) describes.",
"Others have described the ephemerality of slang words (Gonzlez, 1998; Carter, 2011), although this property has not been previously ver-ified by computational approaches.",
"Examining change dynamics through a causal lens helps determine the existence of direct causal effects, by modeling the interactions between variables.",
"For example, it allows us to conclude whether word type directly influences semantic change, or rather influences polysemy, which in turn causes semantic change.",
"In this section, we first give a short overview of relevant work on causality, before presenting how we apply these concepts to word change dynamics.",
"A common framework for causal reasoning is through causal directed acyclic graphs (DAGs) (Pearl, 2009).",
"A causal DAG consists of a pair ( G, P ) where G = ( V, E ) is a DAG and P is a probability distribution over a set of variables.",
"Each variable is represented by a node v V , and the graph's edges e E reflect causal relationships.",
"There are two main tasks in causality.",
"Causal discovery is the task of uncovering the causal DAG that explains observed data.",
"Assuming a causal DAG, the task of causal inference then concerns determining the effect that intervening on a variable, often referred to as treatment , will have on another variable, often referred to as outcome .",
"structure, causal discovery methods come in useful.",
"Constraint-based methods (Spirtes et al., 2000) form one of the main categories of causal discovery techniques.",
"These methods use conditional independence tests between variables in order to uncover the causal structure.",
"To do so, they rely on two main assumptions: that the graph fulfills the global Markov property and the faithfulness assumption.",
"Together they state that we observe conditional independence relations between two variables in the distribution if and only if these two variables are d-separated (Geiger et al., 1990) in the graphical model.",
"For more details, we refer to Appendix D.1.",
"Causal inference is commonly approached with do-calculus (Pearl, 1995).",
"We denote the intervention distribution P ( Y | do ( X = x )) to be the distribution of the outcome Y conditioned on an intervention do ( X = x ) which forces the treatment variable X to take on the value x .",
"Note that this is in general not necessarily equal to P ( Y | X = x ) .",
"2 When they are not equal, we say that there is confounding .",
"Confounding occurs when there is a third variable Z , which causes both the treatment X and the outcome Y .",
"We say that there is a causal effect of X on Y if there exist x and x (cid:48) such that P ( Y | do ( X = x )) (cid:54) = P ( Y | do ( X = x (cid:48) )) .",
"One way to quantify the causal effect is with the average causal effect (ACE) :",
"To estimate the causal effect using observational data, we need to rewrite the intervention distribution using only conditional distributions.",
"Assuming a causal DAG, this can be done with the truncated factorization formula (Pearl, 2009), P ( XV | do ( XW = x W )) = = (cid:89) i V \\ WP ( X i | X pa ( i ) ) 1 { XW = x W } , (3) for W V , with XW being the variables in P corresponding to the nodes in W .",
"2 For instance, there is a causal effect of altitude on temperature but not vice versa.",
"Hence, intervening on temperature will not cause a shift in the distribution of altitude, but conditioning will.",
"In this work, we estimate the direct causal effect of a word's type on its semantic change and frequency shift dynamics.",
"In order to establish that such an effect exists, and to know which variables to control for, we turn to causal discovery algorithms.",
"The variables in our causal graph additionally include frequency, polysemy and POS.",
"For learning the causal graph, we choose the constraint-based PC-stable algorithm (Colombo and Maathuis, 2014), an order-independent variant of the well-known PC algorithm (Spirtes et al., 2000), discussed in Appendix D.1.",
"We are learning a mixed graphical model (Lauritzen, 1996; Lee and Hastie, 2015), consisting of both continuous (e.g., frequency) and categorical (e.g., type) variables.",
"For this reason we opt for constraint-based algorithms, allowing us to tailor the conditional independence tests according to the various data types.",
"Having learned the causal graph (Section 6.2), we proceed to estimate the ACE of word type on both semantic change and frequency shift using do-calculus (Section 6.3).",
"We select 100 slang words and 100 nonslang words for our study, presented in Appendix E. In the tradeoff between statistical significance and time spent on computation and data collection, we found that a set of 200 words was enough to get highly significant results.",
"The slang words are randomly sampled from the Online Slang Dictionary, 3 which provides well-maintained and curated slang word definitions as well as a list of 4,828 featured slang words as of June 2021.",
"We limit the scope of our study to only encompass single-word expressions, and in so doing we filter out 2,169 multi-word expressions.",
"To further clean the data, we also delete words with only one character and acronyms.",
"Lastly, we limit the causal analysis to words that are exclusively either slang or nonslang, excluding hybrid words with both slang and nonslang meanings, such as kosher or tool.",
"Including words of this type would have interfered with the causal analysis by creating a hardcoded dependency between word type and polysemy, as these words by definition are polysemous.",
"We do however perform a separate analysis of the hybrid words in Appendix C. 3 http://onlineslangdictionary.com/ For the reference set of standard, nonslang, words we sample 100 words uniformly at random from a list of all English words, supplied by the wordfreq library in Python (Speer et al., 2018).",
"We curate a Twitter dataset from the years 2010 and 2020, which we select as our periods of reference, and collect the following variables:",
"Word frequency: The average number of tweets containing the word per day in 2010 and 2020 (Section 5.2) Frequency Shift: The relative difference in frequency the word has undergone between 2010 and 2020 (Section 5.3) Polysemy: The number of senses a word has (Section 5.4) Part of speech: A binary variable for each POS tag (Section 5.5) Semantic change: The semantic change score",
"As a social media platform, Twitter data is rich in both slang and nonslang words.",
"The Twitter dataset we curated comprises 170,135 tweets from 2010 and 2020 that contain our selected words.",
"Sampling tweets from two separate time periods allows us to examine the semantic change over a 10-year gap.",
"For every slang and nonslang word, and each of the two time periods, we obtain 200-500 random tweets that contain the word and were posted during the corresponding year.",
"We keep each tweet's text, tweet ID, and date it was posted.",
"As a post-processing step, we remove all URLs and hashtags from the tweets.",
"To protect user privacy, we further replace all user name handles with the word user.",
"On average, we have 346 tweets per slang word and 293 tweets per nonslang word.",
"We approximate a word's frequency by the average number of times it is tweeted within 24 hours.",
"This average is calculated in practice over 40 randomly sampled 24 hour time frames in a given year, in each of which we retrieve the number of tweets containing the word.",
"The frequencies are calculated separately for 2010 and 2020.",
"Due to the growing 1425 Figure 2: Relative shift in frequency from 2010 to 2020, where a positive score corresponds to an increase in frequency.",
"popularity of social media, the number of tweets has significantly increased over the decade.",
"Therefore, we divide the counts from 2020 by a factor of 6 .",
"4 , which is the ratio between the average word counts in both years in our dataset.",
"The frequencies from both years are then averaged to provide the frequency variable for the causal analysis.",
"We are now interested in analyzing the dynamics of frequency shifts.",
"To evaluate the relative change in frequency for a given word w we take FreqShift ( w ) = log x 2020 ( w ) x 2010 ( w ) (4) where, x k ( w ) is the frequency of word w in year k .",
"This has been shown to be the only metric for relative change that is symmetric, additive, and normed (Tornqvist et al., 1985).",
"Importantly, this measure symmetrically reflects both increases and decreases in relative frequency.",
"The mean relative changes in frequency were 0 .",
"486( 1 . 644) for slang words and 0 .",
"533( 1 . 070) for nonslang words, where a positive score corresponds to an increase in frequency.",
"As evident in Figure 2, not only did more slang words exhibit a decrease in frequency than nonslang ones, the words that showed the highest frequency increase are also slang.",
"slang words have significantly higher changes in frequency than nonslang words ( p < 0 . 05 ).",
"See Appendix C for more details.",
"We define a word's polysemy score as the number of distinct senses it has 4 .",
"For nonslang words, we take the number of senses the word has in Merriam Webster and for slang words we take the number of definitions on the Online Slang Dictionary.",
"We use two separate resources as we find that no dictionary encapsulates both slang and nonslang words.",
"The mean polysemy scores are (2 . 074 2 . 595) for slang words and (3 . 079 2 . 780) for nonslang words with a significant difference in distribution ( p < 0 . 05) according to a permutation test, implying that the latter are used with a larger variety of meanings.",
"In addition, the slang senses of the hybrid words exhibit a distribution similar to those of the slang words (Appendix C).",
"More polysemous words tend to have a higher word frequency in our dataset the log transform of frequency and polysemy display a highly significant ( p < 0 . 001 ) linear correlation coefficient of 0 .",
"350 .",
"For each word, we retrieve four binary variables, indicating whether a word can be used as noun, verb, adverb or adjective, which were the four major POS tags observed in our data.",
"To calculate these variables we run the NLTK POS tagger (Loper and Bird, 2002) on the tweets, and collect the distribution of POS tags for each word.",
"Note that a word may have more than one POS tag, depending on the context in which it is used.",
"Each of the binary variables is then set to be 1 if the word had the corresponding POS tag in at least 5% of its tweets and 0 otherwise.",
"In this section we explain the details of how we obtain the semantic change scores.",
"We start by fine-tuning a bi-directional language model on a slang-dense corpus (Section 5.6.1), after which we survey the literature and propose metrics (Sec-tion 5.6.2) that we use to perform an extensive experimentation study to find the most suitable one (Section 5.6.3).",
"Finally, we apply this metric to our 4 Note that this definition also encapsulates potential cases of homonymy.",
"We familiarize the bi-directional language model with slang words and the contexts in which they are used by fine-tuning it on the masked language modeling task.",
"For this purpose we use a web-scraped dataset from the Urban Dictionary, previously collected by Wilson et al. (2020).",
"After preprocessing and subsampling, the details of which can be found in Appendix A.1, we are left with a training set of 200 , 000 slang-dense text sequences.",
"As our bi-directional language model we select RoBERTa (Liu et al., 2019).",
"Beyond performance gains compared to the original BERT (Devlin et al., 2019), we select this model since it allows for more subword units.",
"We reason, that this could be useful in the context of slang words since potentially some of the sub-units used in these words would not have been recognized by BERT.",
"We choose the smaller 125M parameter base version for computational reasons.",
"We train the model using the Adam optimizer (Kingma and Ba, 2015) with different learning rates .",
"The lowest loss on the test set was found with = 10 6 , which we proceed with for scoring semantic change.",
"For more details on training configurations, we refer to Appendix A.2.",
"In order to select a change detection metric, we evaluate our model on the SemEval-2020 Task 1 on Unsupervised Lexical Semantic Change Detection (Schlechtweg et al., 2020).",
"This task provides the first standard evaluation framework for semantic change detection, using a large-scale labeled dataset for four different languages.",
"We restrict ourselves to English and focus on subtask 2, which concerns ranking a set of 37 target words according to their semantic change between two time periods.",
"The ranking is evaluated using Spearman's rank-order correlation coefficient .",
"5 Our space of configurations includes layer representations, dimensionality reduction techniques and semantic change metrics.",
"Layer Representations: Previous work (Etha-yarajh, 2019) has shown that embeddings retrieved from bi-directional language models are not 5 We note the caveat that our model is fine-tuned on Urban Dictionary text, while the older of the two English datasets of SemEval consists of text from 1810-1860.",
"isotropic, but are rather concentrated around a high-dimensional cone.",
"Moreover, the level of isotropy may vary according to the layer from which the representations are retrieved (Ethayarajh, 2019; Cai et al., 2021).",
"This leads us to experiment with representations from different layers in our fine-tuned RoBERTa model, namely, taking only the first layer, only the last layer or summing all layers.",
"Dimensionality Reduction: To the best of our knowledge, only one previous semantic change detection approach (Rother et al., 2020) has incorporated dimensionality reduction, more specifically UMAP (McInnes et al., 2018).",
"As the Euclidean distances in the UMAP-reduced space are very sensitive to hyperparameters and it does not retain an interpretable notion of absolute distances, it might be unsuitable for pure distance-based metrics like APD, and we therefore also experiment with PCA.",
"Metrics for Semantic Change: Given representations X t = { x 1 ,t , ..., x n t ,t } for a particular word in time period t , we define the average pairwise distance (APD) between two periods as APD( X t 1 , X t 2 ) = 1 n t 1 n t 2 (cid:88) x i,t 1 X t 1 x j,t 2 X t 2 d ( x i,t 1 , x j,t 2 ) , (5) for some distance metric d ( , ) , where n t 1 , n t 2 are the number of words in each time period.",
"We experiment with Euclidean distance d 2 ( x 1 , x 2 ) , cosine distance d cos ( x 1 , x 2 ) and Manhattan distance d 1 ( x 1 , x 2 ) .",
"Furthermore, we propose a novel combined metric.",
"Note that d 2 ( , ) [0 , ] and d cos ( , ) [0 , 2] .",
"Further note that || x 1 x 2 || 22 || x 1 || 22 + || x 2 || 22 (6) Normalizing both metrics for a support in [0 , 1] , we get a combined metric with the same unit support to be the following average: d 2 , cos ( x 1 , x 2 ) = 0 .",
"5 d 2 ( x 1 , x 2 ) (cid:112) || x 1 || 2 + || x 2 || 2 (7) + d cos ( x 1 , x 2 ) 4 (8) We argue that this provides a more complete metric, capturing both absolute distance and the angle between vectors.",
"We first compare the results for the three types of layer representations for different APD metrics, and note that summing all layer representations yields the best results.",
"Consequentially, we proceed with the rest of the experiments using only these representations.",
"For both PCA and UMAP, we experiment with projecting the representations down to h { 2 , 5 , 10 , 20 , 50 , 100 } dimensions.",
"These combinations are tested together with the APD metrics as presented in Section 5.6.2 as well as the distribution-based metrics described in Appendix B. The latter do not however in general display significant correlations.",
"We present a small subset of the scores resulting from the APD configurations in Table 1, highlighting our finding that both PCA dimensionality reduction and using a combined metric improve the performance.",
"More results and comparisons to baselines are presented in Appendix B.3.",
"We observe that the proposed combined metric consistently outperforms both d 2 and d cos across values of h for PCA.",
"We also note that UMAP projections perform poorly with the APD metrics and that projecting down to 50-100 dimensions seems to be optimal, which maintains 70-85% of the variance as we illustrate in Appendix B.2.",
"In addition, both norm-based metrics d 1 and d 2 perform worse with dimensionality reduction.",
"As our final metric, we choose the best performing configuration on SemEval, with PCA h = 100 and the combined metric, as seen in Table 1.",
"We obtain semantic change scores using the Twitter dataset described in Section 5.1.",
"For the semantic change analysis, we exclude words that have less than 150 tweets in each time period within the dataset, which leaves us with 80 slang and 81 non-Figure 3: Semantic change scores between 2010 and 2020.",
"We see that nonslang words typically underwent larger changes in meaning throughout the decade.",
"slang words.",
"We also normalize the scores according to the sample.",
"The resulting semantic change scores are shown in Figure 3. The mean semantic change scores are 0 .",
"564( 0 . 114) for slang words and 0 .",
"648( 0 . 084) for nonslang words.",
"The difference in semantic change score distributions is significant ( p < 0 . 001 ) via a permutation test.",
"The word with the highest semantic change score of 1 is anticlockwise, and the word with the lowest score of 0 is whadja. 6 Causal Analysis 6.1 Preparation for Causal Discovery PC-stable is constraint-based and thus makes use of conditional independence tests.",
"In the case of continuous Gaussian variables, we can perform partial correlation tests to assess conditional independence, since zero partial correlation in this case is equivalent to conditional independence (Baba et al., 2004).",
"As word frequency has been suggested to follow a lognormal distribution (Baayen, 1992), we take the log transform of it.",
"The continuous variables semantic change , frequency change and log-frequency are then all assumed to be approximated well by a Gaussian distribution, which is confirmed by diagnostic density and Q-Q plots (displayed in Appendix D.2).",
"We categorize the discrete polysemy variable, experimenting with nine different plausible categorizations for the sake of robustness of the results.",
"Word type and POS are categorical in na-ture.",
"For the categorical variables and for mixes of categorical and continuous variables, we perform chi-squared mutual information based tests 1428 Figure 4: DAG representing the causal relationships in our dataset.",
"(Edwards, 2000), since the approximate null distribution of the mutual information is chi-squared (Brillinger, 2004).",
"For all conditional independence tests we experiment with significance levels { 0 .",
"01 , 0 .",
"03 , 0 .",
"05 } .",
"In Figure 4 we see the result from the above approach, with dashed lines representing edges that were apparent in most but not all of the configurations.",
"See Appendix D.3 for a sensitivity analysis.",
"We first observe that word type has a direct causal effect on both the semantic change score and the frequency shift, without any confounding from the other variables.",
"We also note a direct influence of word polysemy on frequency.",
"Moreover, none of the four POS categories, which are all gathered in one node in Figure 4, have a causal link to any of the other variables.",
"We additionally observe a dependency between word type and polysemy.",
"This edge could not be oriented by the PC-stable algorithm, however we manually orient it as outgoing from type and ingoing to polysemy, since an intervention on type should have a causal effect on the number of word senses and not vice versa.",
"It is also interesting to note that polysemy does not seem to have a causal effect on semantic change.",
"Its association with semantic change ( p < 0 . 05 , rejecting the null hypothesis of independence between polysemy and semantic change) is instead confounded by word type.",
"In our case of no confounders, evaluating the ACE of word type on semantic change is straightforward, as it reduces to the difference between the",
"We estimate the expectations by the sample means on the normalized values and get an average causal effect of 0 .",
"084 , which is a highly significant value ( p < 0 . 001 ) based on a t-test.",
"For the observed changes in relative frequency, calculated according to Eq.",
"(4), we get an average causal effect of 1 .",
"017 ( p < 0 . 001 via a t-test).",
"We analyze the dynamics of frequency shift and semantic change in slang words, and compare them to those of nonslang words.",
"Our analysis shows that slang words change slower in semantic meaning, but adhere to more rapid frequency fluctuations, and are more likely to greatly decrease in frequency .",
"Our study is the first computational approach to confirm this property in slang words (Gonzlez, 1998; Carter, 2011).",
"To ensure that this is the result of a causal effect, and not mediated through another variable or subject to confounders, we model the data with a causal DAG, by also considering the potential interacting variables polysemy, frequency and POS.",
"We discover that there is no influence of confounders, nor are there mediators between a word's type and its semantic change or its frequency shift, which confirms a direct causal effect .",
"This means that if we could intervene on a word's type, i.e., by setting it to be slang instead of nonslang or vice versa, we would expect its change dynamics to differ.",
"the law relating semantic change to frequency, polysemy (Hamilton et al., 2016) nor prototypicality (Dubossarsky et al., 2015) were found to be as strong as previously thought after a case-control study using a scenario without semantic change.",
"Indeed, there is no directed path from polysemy or frequency to semantic change in our causal graph, but they are both influenced by word type.",
"We leave for future research to explore whether other word categorizations, e.g., related to specific domains, languages or phonetic aspects, sustain this result.",
"In addition, our analysis does not support the claim that POS could underlie semantic change (Dubossarsky et al., 2016).",
"We note however that as our vocabulary contains 50% slang words, the results need not be consistent with results obtained with a word sample drawn from standard language.",
"Moreover, in the causal structure we discover that word polysemy has a direct effect on word frequency , which is in line with previous linguistic studies showing that a word's frequency grows in an S-shaped curve when it acquires new meanings (Kroch, 1989; Feltgen et al., 2017), as well as a known positive correlation between polysemy and frequency (Lee, 1990; Casas et al., 2019).",
"We emphasize that this relationship is not merely an artifact of contextualized word representations being affected by frequency (Zhou et al., 2021), since our polysemy score does not rely on word representations as in Hamilton et al. (2016).",
"Our approach is however not without drawbacks the polysemy variable is collected from dictionaries, which may be subjective in their assignments of word senses.",
"Our study, along with previous work on the dynamics of semantic change, is limited by mainly considering distributional factors.",
"Linguists have suggested that sociocultural, psychological and political factors may drive word change dynamics (Blank, 1999; Bochkarev et al., 2014), and slang words are not an exception.",
"Although challenging to measure, the influence of such factors on slang compared to nonslang words would be interesting to examine in future work.",
"In conclusion, we believe that a causal analysis as we have presented here provides a useful tool to understand the underlying mechanisms of language.",
"Complementing the recent emergence of research combining causal inference and NLP (Feder et al., 2021), we have shown that tools from causality can also be beneficial for gaining new insights in diachronic linguistics.",
"In this work, we have analyzed the diachronic mechanisms of slang language with a causal methodology.",
"This allowed us to establish that a word's type has a direct effect on its semantic change and frequency shift, without mediating effects from other distributional factors.",
"We would like to thank Steven R. Wilson for providing us with the Urban Dictionary data and Walter Rader for providing us with a curated set of slang words from the Online Slang Dictionary.",
"For the Twitter data, we are thankful to have been able to get access to Twitter's Academic Research Track.",
"Finally, we gratefully acknowledge feedback and helpful comments from Mario Giulianelli, Yifan Hou, Bernhard Schlkopf and three anonymous reviewers.",
"This material is based in part upon works supported by the John Templeton Foundation (grant #61156); by a Responsible AI grant by the Hasler-stiftung; by an ETH Grant (ETH-19 21-1); by the German Federal Ministry of Education and Research (BMBF): Tbingen AI Center, FKZ: 01IS18039B; and by the Machine Learning Cluster of Excellence, EXC number 2064/1 Project number 390727645.",
"Our dataset is composed solely of English text.",
"This means that our analysis applies uniquely to the English language, and results may differ in other languages.",
"Moreover, for the purpose of this study, we curated a dataset of 170 , 135 tweets.",
"We emphasize that in order to protect the anonymity of users, we remove all author IDs from the data, and replace all usernames with the general token user.",
"In the Urban Dictionary dataset we received from Wilson et al. (2020), we similarly remove the author IDs and only consider the entry text."
] | [
"abstain",
"abstain",
"method",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"objective",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method"
] |
[
"Recent studies have shown the advantages of evaluating NLG systems using pairwise comparisons as opposed to direct assessment.",
"Given k systems, a naive approach for identifying the top-ranked system would be to uniformly obtain pairwise comparisons from all (cid:0) k 2 (cid:1) pairs of systems.",
"However, this can be very expensive as the number of human annotations required would grow quadratically with k .",
"In this work, we introduce Active Evaluation , a framework to efficiently identify the top-ranked system by actively choosing system pairs for comparison using dueling bandit algorithms.",
"We perform extensive experiments with 13 dueling bandits algorithms on 13 NLG evaluation datasets spanning 5 tasks and show that the number of human annotations can be reduced by 80%.",
"To further reduce the number of human annotations, we propose model-based dueling bandit algorithms which combine automatic evaluation metrics with human evaluations.",
"Specifically, we eliminate sub-optimal systems even before the human annotation process and perform human evaluations only on test examples where the automatic metric is highly uncertain.",
"This reduces the number of human annotations required further by 89%.",
"In effect, we show that identifying the top-ranked system requires only a few hundred human annotations, which grow linearly with k .",
"Lastly, we provide practical recommendations and best practices to identify the top-ranked system efficiently.",
"Our code has been made publicly available at https: //github.com/akashkm99/duelnlg 1 Introduction In the last few years, the field of NLG has made rapid progress with the advent of large-scale models trained on massive amounts of data (Vaswani et al., 2017; Xue et al., 2020; Liu et al., 2020; Brown et al., 2020).",
"However, evaluation of NLG * Work done at Indian Institute of Technology Madras systems continues to be a challenge.",
"On the one hand, we have automatic evaluation metrics which are easy to compute but unreliable.",
"In particular, many studies have shown that they do not correlate well with human judgments (Novikova et al., 2017; Elliott and Keller, 2014; Sai et al., 2019, 2020a,b).",
"On the other hand, we have human evaluations, which are relatively more reliable but tedious, expensive, and time-consuming.",
"Further, recent studies have highlighted some limitations of human evaluations that involve direct assessment on an absolute scale, e.g. , Likert scale.",
"Specifically, human evaluations using direct assessment have been shown to suffer from annotator bias , high variance and sequence effects where the annotation of one item is influenced by preceding items (Kulikov et al., 2019; Sudoh et al., 2021; Liang et al., 2020; See et al., 2019; Mathur et al., 2017).",
"In this work, we focus on reducing the cost and time required for human evaluations while not compromising on reliability.",
"We take motivation from studies which show that selecting the better of two options is much easier for human annotators than providing an absolute score, which requires annotators to maintain a consistent standard across samples (Kendall, 1948; Simpson and Gurevych, 2018).",
"In particular, recent works show that ranking NLG systems using pairwise comparisons is a more reliable alternative than using direct assessment (See et al., 2019; Li et al., 2019; Sedoc et al., 2019; Dhingra et al., 2019).",
"While this is promising, a naive approach for identifying the top-ranked system from a set of k systems using uniform exploration is prohibitively expensive.",
"Specifically, uniform exploration obtains an equal number of annotations for all the (cid:0) k 2 (cid:1) system pairs; as a result, the required human annotations grows as O ( k 2 ) .",
"To reduce the number of pairwise annotations, we introduce Active Evaluation, a framework to efficiently identify the top-ranked NLG system.",
"Our Active Evaluation framework consists of a 8761 learner that selects a pair of systems to compare at each time step.",
"The learner, then, receives a feedback signal indicating the (human) preference between the selected systems on one input context, randomly sampled from the test dataset.",
"The learner's objective is to reliably compute the top-ranked system with as few human annotations as possible.",
"We adopt algorithms from the stochastic dueling bandits literature (Bengs et al., 2021) to decide which pair of NLG systems to compare at each time step.",
"To check if existing dueling bandits algorithms can indeed provide reliable top-rank estimates with minimal annotations, we evaluate 13 such algorithms on 13 NLG evaluation datasets spanning five tasks viz.",
", machine translation, summarization, data-to-text generation, paraphrase generation, and grammatical error correction.",
"We show that the best performing dueling bandit algorithm can reduce the number of human annotations by 80% when compared to uniform exploration.",
"To further reduce human annotations, we leverage automatic evaluation metrics in our Active Evaluation framework.",
"We utilize existing automatic metrics such as BLEU (Papineni et al., 2002), BertScore (Zhang et al., 2020), etc for pairwise evaluations by converting the direct evaluation scores into preference probabilities using pairwise probability models.",
"We also develop trained pairwise metrics that directly predict the comparison outcome given pairs of generated texts and context or reference as input.",
"To incorporate such evaluation metrics in our Active Evaluation framework, we propose three model-based dueling bandits algorithms, viz.",
",",
"(i) Random Mixing: human annotations and evaluation metric predictions are randomly mixed,",
"(ii) Uncertainty-aware selection: human annotations are obtained only when the predictions from the evaluation metric is highly uncertain,",
"(iii) UCB Elimination: poorly performing NLG systems are eliminated using an Upper Confidence Bound (UCB) on the evaluation metric scores.",
"Through our experiments, we show that the number of human annotations can be further reduced by 89% on average (this reduction is over and above the 80% reduction that we got earlier).",
"In effect, we show that given k systems, we can find the top-ranked NLG system efficiently with just a few hundred comparisons that vary as O ( k ) .",
"Lastly, we provide practical recommendations to efficiently identify the top-ranked NLG system based on our empirical study on various design choices and hyperparameters.",
"We introduce the problem and our Active Evaluation setup in section 2.1.",
"Later in section 2.2, we describe the different approaches to decide which pairs of NLG systems to compare at each time step.",
"Finally, in section 2.3, we formalize the notion of top-ranked system.",
"We consider the problem of finding the top-ranked NLG system from a given set of k systems, denoted by S = { 1 , 2 , . . . , k } .",
"Our Active Evaluation framework consist of a learner which at each time step t , chooses a pair of systems s ( t ) 1 , s ( t ) 2 S for comparison.",
"Then, we ask human annotators to compare the outputs of the chosen systems on a randomly sampled input context and provide the comparison outcome as feedback to the learner.",
"Specifically, we first sample an input context X ( t ) from the test dataset and obtain the generated texts Y ( t ) 1 , Y ( t ) 2 from the chosen systems s ( t ) 1 , s ( t ) 2 .",
"We then display the generated texts Y ( t ) 1 , Y ( t ) 2 along with the context X ( t ) to human annotators and obtain a comparison outcome w ( t ) = 1 , 0 , or 0 .",
"5 denoting whether Y ( t ) 1 is of better, worse, or equal (tie) quality as Y ( t ) 2 .",
"Note that the feedback w ( t ) indicates the preference on only one input sample and not the entire test dataset.",
"The overall framework is depicted in figure 1. The learner's objective is to find the top-ranked system with as few pairwise comparisons as possible.",
"The learner should decide the pair of systems ( s ( t ) 1 , s ( t ) 2 ) to compare at each time step t .",
"The naive approach is to uniformly explore all the (cid:0) k 2 (cid:1) system pairs.",
"Specifically, the probability of selecting a pair ( i, j ) , i = j at time t is given by P uniform (( s ( t ) 1 , s ( t ) 2 ) = ( i, j )) = 1 (cid:0) k 2 (cid:1) However, as we show in our experiments, the number of human annotations required to find the top-ranked system by this approach is very expensive and grows quadratically with the number of systems since we equally explore all (cid:0) k 2 (cid:1) pairs.",
"To reduce the number of annotations, we use dueling bandit algorithms to actively choose pairs of systems to compare based on the history of previous 8762 Figure 1: Our Active Evaluation framework consisting of a learner that chooses a pair of systems to compare at each time step.",
"observations.",
"We provide an overview of 13 dueling bandits algorithms proposed in the literature in appendix B. We refer the readers to (Bengs et al., 2021) for a complete survey.",
"We now formalize the notion of the top-ranked system.",
"Let p ij denote the preference probability of system i over system j i.e. the probability that a generated text from system i is preferred over system j in the test dataset.",
"We say that a system i \"beats\" system j if p ij > 12 .",
"In other words, system i beats system j if the probability of winning in a pairwise comparison is larger for i than it is for j .",
"We define the top-ranked system i as the one that beats all other systems, i.e. p i j > 12 , j S i .",
"Our Active Evaluation framework, which we described in the previous section, completely relied on human annotators to compare pairs of generated texts ( Y 1 , Y 2 ) to provide the preference feedback w .",
"We can further reduce the number of required human annotations by estimating the human preference feedback using automatic evaluation metrics.",
"However, most existing evaluation metrics are designed for direct assessment and not directly suitable for pairwise evaluations.",
"In this section, we describe three pairwise probability models to convert direct evaluation scores into pairwise preference probabilities.",
"Let f ( Y ) denote the score provided by a direct assessment metric f to a generated text Y (The dependence of f on the reference/context is omitted for brevity).",
"The pairwise preference probability p ( Y 1 Y 2 ) between any two hypotheses Y 1 and Y 2 can be modeled in 3 different ways: Linear: p ( Y 1 Y 2 ) = 1 2 + ( f ( Y 1 ) f ( Y 2 )) Bradley-Terry-Luce (BTL) (Bradley and Terry, 1952; Luce, 1979): p ( Y 1 Y 2 ) = f ( Y 1 ) f ( Y 1 ) + f ( Y 2 ) BTL-logistic: : As detailed in appendix C.2, we appropriately preprocess the scores f ( Y ) to ensure that preference probability lies between 0 and 1. We can now predict the comparison outcome w by thresholding the preference probability at two thresholds 1 and 2 ( 1 ) to incorporate ties i.e. : w = 1 , if p ( Y 1 Y 2 ) > 2 0 , if p ( Y 1 Y 2 ) < 1 0 .",
"We choose 1 and 2 using grid search on the validation set.",
"Refer appendix C.2 for more details.",
"In the previous section, we discussed pairwise probability models to obtain the estimated preference probability p ( Y 1 Y 2 ) and the comparison outcome w using scores assigned by direct assessment metrics.",
"We now propose three model-based dueling bandit algorithms wherein we combine such predictions from evaluation metrics with human annotations in the Active Evaluation framework.",
"Here, we randomly provide either the real (human) or the evaluation metric predicted feedback to the learner.",
"Specifically, at any time t , we use the predicted comparison outcome w ( t ) as the feedback with probability p m and use human annotations w ( t ) as feedback with probability 1 p m .",
"The hyperparameter p m controls the ratio of estimated and real feedback given to the learner.",
"As with other hyperparameters, we tune p m on the validation set.",
"In this algorithm, we estimate uncertainty in the evaluation metric predictions and decide to ask for human annotations only when the evaluation metric is highly uncertain.",
"We specifically focus on trainable neural evaluation metrics such as Bleurt (Sellam et al., 2020) where we estimate the prediction uncertainty using recent advances in Bayesian deep learning.",
"Let p ( Y 1 Y 2 | ) denote the preference probability modelled by a neural evaluation metric with parameters .",
"Given a training dataset D tr , Bayesian inference involves computing the posterior distribution p ( |D tr ) and marginalization over the parameters : p ( Y 1 Y 2 |D tr ) = (cid:90) p ( Y 1 Y 2 | ) p ( |D tr ) d However, computing the true posterior and averaging over all possible parameters is intractable in practice.",
"Hence, several approximations have been proposed in variational inference such as finding a surrogate distribution q ( ) for the true posterior.",
"Gal and Ghahramani (2016) have shown that we can use the Dropout distribution (Srivastava et al., 2014) as the approximate posterior q ( ) .",
"Specifically, we can perform approximate Bayesian inference by applying Dropout during test time.",
"Hence, the posterior can now be approximated with Monte-carlo samples as follows: p ( Y 1 Y 2 |D tr ) 1 LL (cid:88) l =1 p ( Y 1 Y 2 | l ) where { l } Ll =1 are L samples from the Dropout distribution q ( ) (i.e. we apply Dropout L times independently during testing).",
"We now discuss two different Bayesian uncertainty measures: BALD: The Bayesian Active Learning by Disagreement (BALD) (Houlsby et al., 2011) is defined as the mutual information between the model predictions and the model posterior.",
"Let p l = p ( Y 1 Y 2 | l ) , where l q ( ) , be the evaluation metric prediction using the l th sample l from the Dropout distribution.",
"Also, let p = 1 L (cid:80) Ll =1 p l be the mean prediction.",
"As shown in (Gal et al., 2017), we can approximate the BALD measure using samples from the Dropout distribution as: I = H ( p ) 1 LL (cid:88) l =1 H ( p l ) where H is the binary cross entropy function.",
"The BALD uncertainty score is essentially the difference in entropy of the mean prediction p and the average entropy of the individual predictions { p l } Ll =1 .",
"Hence, the BALD uncertainty score is high when the metric's mean prediction is uncertain (high entropy) but the individual predictions are highly confident (low entropy), i.e. , when the metric produces disagreeing predictions with high confidence.",
"STD: We also adopt the standard deviation of the preference probability taken over the posterior distribution as a measure of uncertainty: = (cid:113) Var p ( |D tr ) ( p ( Y 1 Y 2 | )) Similar to BALD, we can approximate the above measure using the empirical standard deviation of samples drawn from the dropout distribution.",
"Our proposed algorithm asks for human annotations only if the uncertainty measure (BALD or STD) is above a particular threshold.",
"The key idea here is to eliminate a set of \"poorly performing\" NLG systems using the automatic metric and perform human evaluations with the remaining set of systems.",
"To eliminate sub-optimal systems, we first need to quantify a performance measure for the systems.",
"We use the Copeland score (Zoghi et al., 2015) which is defined as the normalized total number of pairwise wins for a system: C i = 1 k 1 (cid:80) j = i 1 ( p ij > 12 ) .",
"Copeland score is the highest for the top-ranked system with a value of 1 and it is less than 1 for all other systems.",
"To estimate the Copeland score, we first predict the pairwise preference probability between any two systems i and j as follows: p ij = 1 N (cid:88) Y 1 ,Y 2 D ij p ( Y 1 Y 2 | ) where D ij is the test dataset consisting of generated texts from systems i and j , N is the total number of test examples, is the learned model parameters.",
"We can now estimate the Copeland score C i using the estimated preference p ij and eliminate all systems with Copeland scores below a threshold.",
"However, a major problem with this approach is that evaluation metrics are often inaccurate and we could wrongly eliminate the true top-ranked system without performing any human evaluations.",
"For example, consider the example where i is the 8764 top-ranked system with p i j > 0 .",
"51 , j S i .",
"If several of the predicted probabilities p i j are less than 0 .",
"5 , our top-ranked system i will receive a low estimated Copeland score and will be incorrectly eliminated.",
"To overcome this problem, we define an Upper Confidence Bound (UCB) on the preference probability using uncertainty estimates that we described in 4.2.",
"Specifically, the upper confidence bound u ij is given by u ij = p ij + ij where is a hyperparameter that controls the size of the confidence region and 2 ij is the estimated variance given by: 2 ij = 1 N 2 (cid:88) Y 1 ,Y 2 D ij Var q ( ) p ( Y 1 Y 2 | ) where q ( ) is the Dropout distribution.",
"Using the upper confidence estimates u ij , we now define the optimistic Copeland score for a system i as C ui = 1 K 1 (cid:80) j = i 1 ( u ij > 1 2 ) .",
"Here, we consider a system i to beat another system j ( u ij > 0 . 5 ) if either the estimated preference is high ( p ij is high) or if there is an high uncertainty in the estimation ( ij is high).",
"In UCB Elimination, we eliminate a system only if the optimistic Copeland score is below a threshold.",
"In this section, we describe the",
"(i) NLG tasks and datasets used in our experiments,",
"(ii) automatic evaluation metrics used in our model-based algorithms, and",
"(iii) annotation complexity measure used for comparing dueling bandit algorithms.",
"We use a total of 13 datasets spanning 5 tasks in our experiments which are summarized in table 1. Machine Translation (MT): We use 7 human evaluation datasets collected from the WMT news translation tasks (Bojar et al., 2015, 2016) viz. fin eng, rus eng, deu eng language pairs in WMT 2015 and tur eng, ron eng, cze eng, deu eng language pairs in WMT 2016.",
"Grammatical Error Correction (GEC): We utilize two human evaluation datasets collected by (Napoles et al., 2019) where the source texts are from",
"(i) student essays (FCE), and",
"(ii) formal articles in Wikipedia (Wiki).",
"We also use another GEC dataset collected by (Napoles et al., 2015a) from the CoNLL-2014 Shared Task (Ng et al., 2014).",
"Paraphrase Generation: We use human evaluations of model generated English paraphrases released with the ParaBank dataset (Hu et al., 2019).",
"Summarization: We make use of the human evaluations (Stiennon et al., 2020) of GPT3-like transformers on the TL;DR dataset (Vlske et al., 2017).",
"We provide further details including preprocessing steps and downloadable links in appendix A.1.",
"We can predict the comparison outcome w using two approaches.",
"First, we can use pairwise probability models with existing direct assessment metrics as discussed in section 3. Alternatively, we can train evaluation metrics to directly predict the comparison outcome given pairs of generated texts and context/reference as input.",
"We discuss both these approaches below: Direct Assessment Metrics: We experiment with a total of 10 direct assessment metrics viz. chrF (Popovic, 2015), BLEU-4 (Papineni et al., 2002), ROUGE-L (Lin, 2004), Embedding Average (Wi-eting et al., 2016), Vector Extrema (Forgues et al., 2014), Greedy Matching (Rus and Lintean, 2012), Laser (Artetxe and Schwenk, 2019), BertScore (Zhang et al., 2020), MoverScore (Zhao et al., 2019) and Bleurt (Sellam et al., 2020).",
"We mention the implementation details in appendix A.2.",
"Pairwise Evaluation Metrics: We finetune the pretrained Electra-base transformer model (Clark et al., 2020) to directly predict the comparison outcome w .",
"We curate task-specific human evaluation datasets consisting of tuples of the form (con-text/reference, hypothesis 1, hypothesis 2, label) for finetuning.",
"Due to space constraints, we mention 8765 Algorithm WMT 2016 WMT 2015 Grammarly CoNLL'14Task E2ENLG Para-Bank TL;DR tur-eng ron-eng cze-eng deu-eng fin-eng rus-eng deu-eng FCE Wiki Uniform 19479 24647 10262 3032 2837 12265 17795 8115 34443 61369 65739 825211 5893 SAVAGE 10289 18016 6639 2393 2675 12806 12115 5767 22959 39208 41493 255208 4733 DTS 10089 9214 8618 4654 4850 13317 16473 4355 11530 18199 19940 170467 1354 CCB 7017 11267 5389 2884 4092 11548 10905 4386 10020 21392 16960 87138 2518 Knockout 3415 7889 4723 3444 5104 5809 5956 3134 3777 8055 7708 17418 4953 RUCB 3125 5697 3329 1636 1655 4536 6222 2732 5617 19024 10924 41149 1647 RCS 2442 3924 3370 1537 2662 3867 5296 1816 4606 12678 7263 34709 1903 RMED 2028 5113 1612 864 1707 1929 4047 2093 5647 9364 3753 24132 1162 Table 2: Annotation complexity of the top 7 best performing dueling bandit algorithms along with the uniform exploration algorithm on 13 datasets spanning 5 NLG tasks details on the datasets and finetuning in appendix A.3 and A.4.",
"For the summarization task alone, we couldn't find any pairwise human judgment dataset sufficient for finetuning the Electra model.",
"To evaluate the performance of dueling bandit algorithms, we define annotation complexity as the minimum number of human annotations needed by an algorithm to identify the top-ranked NLG system with high confidence.",
"Let i be the actual top-ranked system, and i ( n ) denote the estimated winner by the algorithm after n human annotations, then annotation complexity is defined as: min n : n n , P ( i ( n ) = i ) > 1 acc where acc is the allowable failure probability i.e. the learner can make a mistake with at most acc probability.",
"To compute the annotation complexity, we run each dueling bandit algorithm with 200 different random seeds and find the minimum number of human annotations after which the algorithm correctly returns the top-ranked NLG system in at least 190/200 runs (we set acc = 0 . 05 ).",
"We discuss the performance of dueling bandits algorithms in 6.1, automatic metrics in 6.2 and our proposed model-based algorithms in 6.3.",
"Lastly in 6.4, we analyze the variation of annotation complexity with the number of NLG system.",
"We report the annotation complexity of the top 7 dueling bandit algorithms along with uniform exploration on 13 datasets in table 2. We observe that the annotation complexity of uniform exploration is consistently high across all 13 datasets.",
"In particular, the required human annotations become prohibitively expensive when the number of NLG 0 1000 2000 3000 4000 Number of Human Annotations 0 .",
"systems is high, e.g. E2E NLG (16 systems) and ParaBank (28 systems) datasets.",
"On the other hand, dueling bandit algorithms such as RUCB (Zoghi et al., 2014b), RCS (Zoghi et al., 2014a), RMED (Komiyama et al., 2015) are able to effectively identify the top-ranked system with much fewer annotations.",
"In particular, RMED performs the best with a reduction of 80.01% in human annotations compared to uniform exploration.",
"We also examine an alternative approach to assess the performance of dueling bandit algorithms.",
"Here, we fix the number of human annotations (fixed annotation budget) and compute the accuracy in predicting the top-ranked system.",
"As we show in figure 2, RMED achieves the highest top-rank prediction accuracy for any given number of human annotations.",
"We provide the complete results in appendix F.2.",
"Before we utilize automatic evaluation metrics using our proposed model-based algorithms, we analyze the effectiveness of these metrics for pairwise NLG evaluations.",
"In table 3, we report the sentence-level accuracy in predicting the comparison outcome w using direct assessment metrics with the Linear probability model (as discussed in section 3) along with our trained Electra metric.",
"Across the tasks, we observe that metrics that utilize con-8766 Metric WMT(Avg.)",
"textualized word embeddings, such as BertScore, perform much better than n -gram and static word embedding-based metrics.",
"In MT, we observe that Bleurt, specifically finetuned on WMT human judgment data, performs the best.",
"In Data-to-Text and Paraphrase generation, our trained Electra metric finetuned on task-specific data significantly outperforms the existing metrics.",
"Interestingly, on the summarization task, all the existing metrics perform much worse than random predictions i.e. they do not add any useful value in evaluation.",
"Hence, we exclude the TLDR dataset from our analysis on model-based algorithms.",
"Finally, as we show in appendix F.3, we observed that the performance is largely similar across all the three probability models: Linear, BTL, and BTL-logistic.",
"We use our proposed model-based algorithms and incorporate the two best-performing evaluation",
"metrics, viz.",
", Bleurt and Electra with the best performing dueling bandit algorithm, viz.",
", RMED.",
"We compare the annotation complexity of various model-based algorithms in table 4. We observe that the Random Mixing algorithm with Bleurt and Electra reduces annotation complexity by 70.43% and 73.15%, respectively, when compared to the standard (model-free) RMED algorithm (row 1).",
"Our Uncertainty-aware selection algorithm with the BALD measure further reduces the annotation complexity by around 37% (compared with Random Mixing).",
"We notice that our UCB Elimination algorithm also provides significant improvements over standard RMED.",
"Since UCB Elimination is complementary to Uncertainty-aware selection, we apply both these algorithms together and observe the lowest annotation complexity with a reduction of 89.54% using Electra and 84.00% using Bleurt over standard RMED.",
"Lastly, in figure 3, we analyze the effect of using other evaluation metrics such as BLEU, BertScore, etc.",
", in Random Mixing.",
"Interestingly, we notice that using metrics such as BLEU, which have low accuracy values, results in a higher annotation complexity than standard (model-free) RMED in some datasets.",
"That is, we may even require a greater number of human annotations to over-compensate for the inaccurate predictions from metrics like BLEU.",
"However, with Laser, MoverScore, and BertScore, we observe significant reductions in annotation complexity.",
"Please refer appendix F.4 for further results.",
"We analyze how annotation complexity varies with the number of NLG systems.",
"Specifically, we chose a subset of k systems out of the total 28 systems in the ParaBank dataset and computed the annotation complexity among these k systems.",
"As shown in figure 4, the annotation complexity of uniform ex-8767 Model-basedAlgorithm EvaluationMetric WMT 2016 WMT 2015 Grammarly CoNLL'14Task E2ENLG Para-Bank tur-eng ron-eng cze-eng deu-eng fin-eng rus-eng deu-eng FCE Wiki None (Model free) None 2028 5113 1612 864 1707 1929 4047 2093 5647 9364 3753 24132 Random Mixing Bleurt 237 1222 315 161 275 304 771 406 671 9584 1151 15874 Electra 728 3213 385 152 236 512 650 1529 237 3302 326 1044 Uncertainty-awareSelection(STD) Bleurt 103 1012 192 84 204 239 530 270 185 9356 1291 22876 Electra 978 7251 478 210 388 962 1259 477 234 4708 199 2137 Uncertainty-awareSelection(BALD) Bleurt 101 653 136 48 181 162 405 204 128 9356 1167 22619 Electra 737 1648 223 114 207 538 488 281 75 1557 67 858 UCB Eliminination Bleurt 711 2684 1131 573 419 843 3556 967 1115 8382 2005 14098 Electra 264 649 1131 414 294 1126 3556 3970 1115 2943 1112 9870 Uncertainty(BALD)+UCB Elim.",
"ploration grows quadratically with k as it explores all system pairs equally.",
"However, for (model-free) dueling bandit algorithms such as RMED, the annotation complexity is much lower and only varies as O ( k ) .",
"As shown in appendix F.1, we observed similar trends with model-based algorithms.",
"We summarize the key insights from this study and provide practical recommendations on efficiently",
"identifying the top-ranked NLG system.",
"1. Use RMED dueling bandit algorithm to actively choose system pairs for comparison.",
"2. If human evaluation datasets are available, train a metric to predict the comparison outcome directly.",
"Otherwise, use Bleurt with any of the Linear, BTL, BTL-logistic models.",
"3. Manually annotate a few examples from the test dataset and evaluate the sentence-level accuracy of the metric.",
"If the performance is poor (e.g., accuracy near the random baseline), do not use model-based approaches, obtain feedback only from human annotators.",
"4. If the metric is reasonably accurate, use UCB Elimination with Uncertainty-aware Selection (BALD).",
"Tune the hyperparameters of these algorithms, if possible.",
"Otherwise, refer appendix D for best practices developed based on analyzing the sensitivity of model-based algorithms to hyperparameters.",
"5. We can reduce the annotation time if we use multiple annotators in parallel.",
"We observed that dueling bandit algorithms, though originally proposed for sequential annotations, are robust to asynchronous feedback from multiple annotators (Refer appendix E for details).",
"Several works (Bojar et al., 2014, 2015; Sakaguchi et al., 2014, 2016) in Machine translation and Grammatical Error Correction adopt the TrueSkill algorithm (Herbrich et al., 2006), originally used for ranking Xbox gamers, to efficiently rank NLG systems from pairwise annotations.",
"A recent work (Sakaguchi and Durme, 2018) proposes an online algorithm to rank NLG systems when we receive pairwise preference feedback in the form of a continuous scalar with bounded support.",
"The key difference in our work is that we focus on the problem of identifying the top-rank system instead of ranking all the systems.",
"Experimental study of dueling bandit algorithms have been limited to synthetic simulations in a few works (Yue and Joachims, 2011; Urvoy et al., 2013).",
"Most others (Zoghi et al., 2014b,a; Komiyama et al., 2015; Zoghi et al., 2015; Wu and Liu, 2016) focus on information retrieval applications that involve evaluating search retrieval algorithms (Radlinski et al., 2008).",
"To the best of our knowledge, ours is the first work to extensively study the effectiveness of dueling bandit algorithms for NLG evaluation.",
"In this work, we focused on the problem of identifying the top-ranked NLG system with few pairwise annotations.",
"We formulated this problem in an Active Evaluation framework and showed that dueling bandit algorithms can reduce the number of human annotations by 80%.",
"We then proposed model-based algorithms to combine automatic metrics with human evaluations and showed that human annotations can be reduced further by 89%; thereby requiring only a few hundred human annotations to identify the top-ranked system.",
"In future work, we would like to extend our analysis to the general problem of finding the top-k ranked systems.",
"Evaluating Natural Language Generation (NLG) models accurately and reliably with few human annotations is an important aspect of NLG research and its real-world applications.",
"Our work shows that we can significantly reduce the number of human annotations required to find the top-ranked NLG system with high confidence.",
"We envision that our work will benefit a wide range of applications such as translation systems, grammatical checkers, etc., where practitioners can find the best NLG model among a set of candidates more accurately and with fewer human annotations.",
"Despite these improvements, there are still several challenges towards reliable NLG evaluation.",
"For example, our model-based approaches, which use automatic metrics, may be subject to biases and other undesirable mistakes, depending on the metric and how they are trained in practice.",
"Our approach may be used to evaluate models that generate fake news, toxic content, or other harmful applications, even though it is not specifically designed for such cases.",
"We thank the Department of Computer Science and Engineering, IIT Madras, and the Robert Bosch Center for Data Science and Artificial Intelligence, IIT Madras (RBC-DSAI), for providing us resources required to carry out this research.",
"We also wish to thank Google for providing access to TPUs through the TFRC program.",
"We thank the anonymous reviewers for their constructive feedback in enhancing the work."
] | [
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"method",
"abstain",
"result",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"method",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"other",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"other",
"other",
"objective",
"method",
"result",
"objective",
"objective",
"abstain",
"result",
"result",
"abstain",
"method",
"method",
"other",
"other",
"other"
] |
[
"Unsupervised domain adaptation (UDA) is the task of modifying a statistical model trained on labeled data from a source domain to achieve better performance on data from a target domain, with access to only unlabeled data in the target domain.",
"Existing state-of-the-art UDA approaches use neural networks to learn representations that can predict the values of subset of important features called pivot features.",
"In this work, we show that it is possible to improve on these methods by jointly training the representation learner with the task learner, and examine the importance of existing pivot selection methods.",
"Unsupervised domain adaptation (UDA) is the task of modifying a statistical model trained on labeled data from a source domain to achieve better performance on data from a target domain, without access to any labeled data in the target domain.",
"Supervised domain adaptation methods can obtain excellent performance from a small number of labeled examples in the target domain (Daume III, 2007), but UDA is attractive in cases where annotation requires specialized expertise or the number of meaningfully different sub-domains is large (e.g., both are true for clinical NLP).",
"Structural correspondence learning (Blitzer et al., 2006) (SCL) is one widely-used method for UDA in natural language processing.",
"The key idea in SCL is that a subset of features, believed to be predictive across domains, are selected as pivot features .",
"For each selected pivot feature, SCL creates an auxiliary classification task of predicting the value of that feature in an instance, given the values of all the non-pivot features for that instance.",
"The auxiliary classifiers therefore learn important cross-domain information about the structure of the feature space, which the SCL algorithm uses to create an augmented representation that aligns features from different domains (further details in Section 2).",
"Meanwhile, recent advances in neural network learning have shown that training regimens that jointly consider evidence from multiple sources can improve performance both multi-task learning (Sgaard and Goldberg, 2016) and fine tuning (Howard and Ruder, 2018; Devlin et al., 2018).",
"However, existing SCL-based methods treat the representation learning and task learning as separate tasks, so the parameters of the representation learning machinery are fixed before training for the downstream task.",
"Jointly learning the representationand task-specific parameters can potentially allow a learning algorithm to find representations that are better suited for the task.",
"In this work, we describe a new UDA algorithm that is trained to jointly maximize two objectives: the primary supervised task in the source domain, and a pivot feature reconstruction task that can be trained on unlabeled data.",
"We also explore the importance of pivot feature selection to this algorithm, in experiments that quantitatively and qualitatively examine the quality of existing pivot selection methods.",
"We find that our joint neural approach to SCL improves unsupervised domain adaptation substantially on a standard sentiment classification task.",
"Our results also show that while existing pivot selection methods perform well, they are below an oracle-provided ceiling for many source-target pairs for the sentiment classification task we examine.",
"This work builds off of existing work in unsupervised domain adaptation, starting with Blitzer's work on structural correspondence learning (SCL) (Blitzer et al., 2006, 2007).",
"In the UDA task setup, one is given two datasets, the source DS = { X s , y s } , with labels for each instance, and the target DT = { X t } , with unlabeled instances only.",
"The goal of UDA is to learn a function f u ( X s , y s , X t ) that improves on the classification performance over a function f l ( X s , y s ) when applied to new data drawn from the target distribution.",
"SCL is essentially a representation learning algorithm that works by creating a number of auxiliary classification tasks from the unlabeled source and target training instances (inspired by Ando and Zhang 2005).",
"First, a set of p pivot features are selected, intended (in Blitzer's words) to be features which behave in the same way for discriminative learning in both domains.",
"Then, SCL creates p auxiliary tasks of predicting the value of pivot features in an instance given the non-pivot features in the instance.",
"The weights of these linear classifiers are then aligned as columns in a matrix W , and the k left singular vectors are cho-sen from the singular value decomposition W = U V (cid:62) to reduce its dimensionality, leading to a projection matrix R n k that maps instances from the original feature space into the learned space.",
"Most practical implementations find the best performance of SCL is obtained when projected features are concatenated with the original feature space; for some tasks and datasets other combinations have been tested and proved superior (Sapkota et al., 2016).",
"Recently, neural-network-based domain adaptation algorithms have been successful, including domain adversarial methods (Ganin et al., 2016) and auto-encoder-based methods (Glorot et al., 2011; Chen et al., 2014).",
"However, a neural version of SCL still obtains near state-of-the-art performance (Ziser and Reichart, 2017).",
"In that work, the AE-SCL system uses a multi-layer perceptron to replace the SVD for learning the feature projection.",
"This network takes non-pivot features as input, has one hidden layer, and predicts the value of the pivot features at the output layer.",
"Since it obtains supervision from the values of features, it can be trained on unlabeled instances from the source and target domains.",
"To train for the downstream sentiment classification task, the source instances are first passed into the trained representation learning network, and the values of the hidden layer are considered an additional set of features.",
"These features are combined with all the original features, and the authors use a logistic regression classifier for the final sentiment classifier.",
"One standard corpus used to develop new domain adaptation algorithms is the Amazon sentiment analysis dataset.",
"1 This corpus was created by Blitzer et al. (2007), but we use the version included in the software release from Ziser and Reichart (2017) 2 , along with their pre-processing steps, for ease of comparison with their results.",
"This dataset contains reviews from four product categories on Amazon.com books, DVDs, electronics, and kitchen appliances.",
"Reviews are mapped to binary categories: positive if the review assigns the product > 3 stars (out of 5) and negative if it assigns the product < 3 stars.",
"This dataset also contains additional unlabeled instances for each category, used for training the pivot predictor.",
"The current work has two motivating factors.",
"First, we would like to improve the performance of SCL using joint training.",
"Existing SCL-based methods are successful in treating pivot prediction as a pre-training phase, but joint training may improve UDA by allowing the network to find representations that are equally good at pivot reconstruction but better for downstream task performance.",
"Second, we would like to evaluate the quality of pivot selection methods and explore whether this step might be eliminated to simplify SCL.",
"We focus on feature-based UDA methods, as opposed to approaches that rely on embeddings (e.g., Barnes et al., 2018; Ziser and Reichart, 2018), since our primary interest is in improving existing models developed with a feature engineering approach.",
"Such methods allow us to quickly adapt a number of different models to new datasets (e.g., for already-existing NLP pipeline software), rather than engineering new neural models from scratch for each of the pipeline tasks.",
"For that reason, we compare to the AE-SCL model of Ziser and Reichart, rather than their subsequent models that take embeddings as input.",
"In any case, we show that with some tuning the AE-SCL model can obtain state-of-the-art performance for many pairs.",
"Figure 1 graphically depicts our proposed joint model.",
"The input to the model x R n is the set of all features extracted from the text to compare with Ziser and Reichart (2017) we use unigrams and bigrams, extracted using scikit-learn (Pedregosa et al., 2011).",
"We experimented with a few different hidden layer sizes, and settled on d = 2000 this balances the need of the representation to predict more output variables than the AE-SCL method with run-time constraints.",
"The representation is generated with h ( x ) = ReLU ( W h x ) , for W h R d n .",
"The task prediction is f task ( x ) = Sigmoid ( W t h ( x )) ( W t R 1 d ) and the pivot prediction is f pivot ( x ) = Sigmoid ( W p h ( x )) ( W p R p d ).",
"The joint loss function for labeled source data D l , all data D a , and model parameters is: L ( D ; ) = (cid:88) x ,y D l BCE ( f task ( x ) , y )+ (cid:88) x D a BCE ( f pivot ( x ) , pivots ( x ))+ R ( ) (1) where BCE is the binary cross-entropy loss, controls the weight of pivot prediction loss, pivots is a function that selects the indices from an instance that are the pivot features to be predicted, and is the weight of the regularization term R .",
"To train this model, we alternate passing labeled source data and unlabeled data from the source and target domains into the network.",
"For the labeled data, the error term is the sum of taskand pivot-prediction tasks, while for the unlabeled data only the pivot-prediction loss is computed.",
"Training proceeds for 30 epochs, with mini-batch size of 50 instances, using the Adam optimizer (Kingma and Ba, 2014) with learning rate 0 .",
"001 .",
"For the loss function weight, we used = 0 .",
"1 and = 100 .",
"We used held-out source data to compute validation loss after each epoch and selected the trained model with the lowest validation loss.",
"One standard way of choosing pivot features is by calculating mutual information (MI) between the source features and labels, and selecting the features with the highest MI.",
"It is far from clear, however, that this technique is always optimal.",
"Earlier experiments with the POS tagging task (Blitzer et al., 2006) used feature frequency instead, and the extent of the correlation between frequency and MI for that task is not established.",
"Here, we attempt to provide some evidence about the quality of MI for the task of sentiment classification, using the classification pair of books to electronics.",
"First, we wanted to rule out the null hypothesis that prediction of MI pivots is essentially a generic representation learning algorithm in other words, that a network learning structure between any sets of sufficiently common features may improve adaptation performance.",
"We modified Ziser et",
"al.'s code to simply select random feature indices from the subset of those that occurred frequently enough to be pivot candidates.",
"With this setup, adaptation performance averaged 0 .",
"724 across ten runs, well below their reported 0 .",
"744 , casting doubt on the null hypothesis.",
"Next, we want to examine the contention that features with high MI relative to source labels are general.",
"To do this, we simply compare the list of MI features used when the source is books to those when the source is electronics .",
"We find that, out of the 100 pivot features selected by MI in either cases, there is overlap of 26 features, some examples of which are shown in Table 1 (left).",
"Table 1 also shows a number of MI-selected pivots from the books domain that are not general (mid-dle), and then a set of features MI-correlated with the target domain that seem general (right).",
"These latter two columns are essentially precision and recall errors of the MI pivot selection algorithm.",
"Finally, we perform an oracle-based adaptation experiment, where we select the pivot feature indices using MI against the gold labels of the target domain, but then proceed with training looking only at source labels, with results in Table 2 discussed below.",
"We follow the standard setup for the Amazon sentiment task, splitting each source dataset into 1600 training and 400 validation instances, and evaluating on the entire labeled target dataset for each pair.",
"We compare against two baselines: First, the reported results of Ziser and Reichart, and second, our replication of their results using their code.",
"Our replication changed their code by replacing the stochastic gradient descent optimizer with Adam (Kingma and Ba, 2014), and increasing the training batch size from 1 to 50 .",
"These changes were made to speed training runs during development; we found they produced better-than-reported results and include these superior results as an even stronger baseline.",
"We report results of two configurations of our joint learner.",
"The first configuration ( Joint MI ) uses the MI between source labels and features to select 100 pivot features.",
"The second configuration ( Joint Oracle ) is an oracle-informed system where we use the MI between target labels and features to select pivot features, but only use source labels while training the network.",
"Both the AE-SCLR model and our Joint MI model were run for 10 iterations to minimize differences due to random initialization and to calculate significance statistics.",
"proves upon their reported results in 8 of 12 pairs, often by substantial margins, and is only worse in one pair (Kitchen Books).",
"Our Joint MI method is superior to the reported AE-SCL results in all pairs, 1.7 points (absolute) on average, and significantly better than the AE-SCLR in 9 of 12 pairs, using Welch's one-tailed t-test.",
"This is, to our knowledge, the best result on this task using a feature-based approach (i.e., excluding systems that use embeddings).",
"Despite constraining our system to adapting feature-based models, this result is competitive with the best-known result using a pure neural approach with embeddings as input, as Ziser and Reichart (2018) report an average accuracy of 0.804.",
"The Joint Oracle configuration shows that, despite the large gains of joint training, there is still significant improvement available with better pivot selection.",
"Our results show that by jointly learning representations and task networks, UDA can be greatly improved over existing neural UDA methods.",
"We note that there are existing domain adaptation methods that use joint training with auxiliary tasks.",
"Yu and Jiang (2016) use an auxiliary task of predicting whether a masked pivot word in a sentence is positive or negative sentiment, where they introduce a new technique to select pivots that still is based on correlations with source labels.",
"Our work is unique in showing that the standard task of mutual-information-selected pivot prediction is a high quality auxiliary task, though future work should explore whether their pivot selection algorithm is superior to MI in our joint model.",
"We also showed that existing neural UDA methods can be improved significantly with minor changes to the training regimen.",
"Finally, we show that mutual information pivot selection is quite far from the performance ceiling provided by oracle-based pivot selection.",
"This work evaluated on the widely-used Amazon sentiment dataset from Blitzer et al. (2007).",
"However, we believe that future work on domain adaptation should phase out the use of this dataset.",
"3 The test set for this setup is flawed in two important ways: first, it is artificially balanced with positive and negative reviews, when the problem is not actually balanced; it also has 3-star reviews removed, which is not a realistic test set setup without looking at labels.",
"For these reasons, we recommend that future work use different domain adaptation datasets.",
"produce",
"A framework for learning predictive structures from multiple tasks and unlabeled data.",
"Journal of Machine Learning Research , 6(Nov):18171853.",
"Jeremy Barnes, Roman Klinger, and Sabine im Walde.",
"Projecting Embeddings for Domain Adaptation: Joint Modeling of Sentiment in Diverse Domains.",
"In Proceedings of COLING 2018, the 27th 3 Thanks to the anonymous reviewer who made this argument which we found convincing.",
"We credit them with these points while accepting the blame for any poor communication of these points.",
"Minmin Chen, Kilian Weinberger, Fei Sha, and Yoshua Bengio.",
"2014.",
"Marginalized denoising auto-encoders for nonlinear representations.",
"In International Conference on Machine Learning , pages 14761484.",
"Hal Daume III.",
"2007.",
"Frustratingly Easy Domain Adaptation.",
"In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics , pages 256263.",
"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.",
"2018.",
"Bert: Pre-training of deep bidirectional transformers for language understanding.",
"arXiv preprint arXiv:1810.04805 .",
"Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franois Lavi-olette, Mario Marchand, and Victor Lempitsky.",
"2016.",
"Domain-adversarial training of neural networks.",
"Journal of Machine Learning Research , 17(59):135.",
"Xavier Glorot, Antoine Bordes, and Yoshua Bengio.",
"2011.",
"Domain adaptation for large-scale sentiment classification: A deep learning approach.",
"In Proceedings of the 28th international conference on machine learning (ICML-11) , pages 513520.",
"Jeremy Howard and Sebastian Ruder.",
"2018.",
"Universal language model fine-tuning for text classification.",
"In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , volume 1, pages 328339.",
"Diederik P Kingma and Jimmy Lei Ba.",
"2014.",
"Adam: Amethod for stochastic optimization.",
"In Proc.",
"3rd Int.",
"Conf.",
"Learn.",
"Representations .",
"Research reported in this publication was supported by the National Library Of Medicine of the National Institutes of Health under Award Number R01LM012918.",
"The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health."
] | [
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"result",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"objective",
"result",
"result",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Natural language processing models learn word representations based on the distributional hypothesis, which asserts that word context (e.g., co-occurrence) correlates with meaning.",
"We propose that n -grams composed of random character sequences, or garble , provide a novel context for studying word meaning both within and beyond extant language.",
"In particular, randomly generated character n -grams lack meaning but contain primitive information based on the distribution of characters they contain.",
"By studying the embeddings of a large corpus of garble, extant language, and pseudowords using CharacterBERT, we identify an axis in the model's high-dimensional embedding space that separates these classes of n -grams.",
"Furthermore, we show that this axis relates to structure within extant language, including word part-of-speech, morphology, and concept concreteness.",
"Thus, in contrast to studies that are mainly limited to extant language, our work reveals that meaning and primitive information are intrinsically linked.",
"What primitive information do character sequences contain?",
"Modern natural language processing is driven by the distributional hypothesis (Firth, 1957), which asserts that the context of a linguistic expression defines its meaning (Emerson, 2020).",
"Because existing wordswhich represent an extremely small fraction of the space of possible character sequencesappear in context together, the distributional paradigm at this level is limited in its ability to study the meaning of and information encoded by arbitrary character level n -grams (word forms).",
"Furthermore, state-of-the-art computational language models operating within the distributional paradigm, such as BERT (Devlin et al., 2019), are mainly trained on extant words.",
"Yet, a plethora of insights into language learning have emerged from inquiries into language beyond extant words, such as the grammatical errors and inference patterns that children exhibit when distinguishing extant words from non-linguistic auditory signals, including emotional expressions, auditory gestures, and other forms of paralinguistic speech (Yang, 2006; Carey, 2000).",
"We therefore propose that character n -grams (i.e., sequences of alphabetic characters) outside the space of extant language can provide new insights into the meaning of words and how they are represented by these models, beyond that captured by word and sub-word-based distributional semantics alone.",
"We explore this by studying the embeddings of randomly generated character n -grams (referred to as garble ), which contain primitive communicative information but are devoid of meaning, using the CharacterBERT model (El Boukkouri et al., 2020).",
"Such randomly generated character n -grams are textual analogues of paralinguistic vocalizationsvocal extra-speech sounds and noises.",
"Our analyses contribute to the growing understanding of BERTology (Rogers et al., 2020) by identifying a dimension, which we refer to as the information axis , that separates extant and garble n -grams.",
"This finding is supported by a Markov 7120 model that produces a probabilistic information measure for character n -grams based on their statistical properties.",
"Strikingly, this information dimension correlates with properties of extant language; for example, parts of speech separate along the information axis, and word concreteness varies along a roughly orthogonal dimension in our projection of CharacterBERT embedding space.",
"Although the information axis we identify separates extant and randomly generated n -grams very effectively, we demonstrate that these classes of n -grams mix into each other in detail, and that pseudowords i.e., phonologically coherent character n -grams without extant lexical meaninglie between the two in our CharacterBERT embeddings.",
"This paper is organized as follows.",
"We first discuss concepts from natural language processing, information theory, and linguistics relevant to our study.",
"We then analyse CharacterBERT representations of extant and randomly generated character sequences and how the relation between the two informs the structure of extant language, including morphology, part-of-speech, and word concreteness.",
"Finally, we ground our information axis in a predictive Markov language model.",
"Models in computational linguistics often represent words in a high-dimensional embedding space based on their co-occurrence patterns according to the distributional hypothesis (Landauer and Dumais, 1997; Mikolov et al., 2013).",
"Embeddings that capture the semantic content of extant words are used for many natural language applications, including document or sentence classification (Kowsari et al., 2019), information retrieval and search (Mitra et al., 2018), language modelling and translation (Devlin et al., 2019), language generation (Brown et al., 2020), and more (Jurafsky and Martin, 2021).",
"In these cases, vector operations performed on word embeddings are used for higher-level tasks such as search or classification.",
"Word embeddings have largely concerned themselves with extant languagethat is, commonly used words which carry consistent meaningand thus cannot represent character n -grams outside of this space.",
"The few models that encompass character n -grams, which naturally include n -grams beyond extant words, often use RNNs (Mikolov et al., 2010) or encoder-decoder architectures (Sutskever et al., 2014) to represent character-level sequences.",
"In parallel, the ubiquitous use of Transformer models has led to studies of their inner representations, weights, and attention mechanism (Rogers et al., 2020; Clark et al., 2019).",
"Most Transformer models are trained using extant words and sub-words, largely focusing on their semantics and syntax; however, some recent models operate at the character level, such as CharacterBERT (El Boukkouri et al., 2020) and CharBERT (Ma et al., 2020).",
"Strikingly, character-level models excel at character-level tasks (e.g., spelling correction; Xie et al. 2016; Chollampatt and Ng 2018) and perform comparably to word-level models at language-modelling tasks (Kim et al., 2016).",
"Character-level models are therefore an ideal tool for studying the information and meaning encoded in n -grams beyond the realm of extant language.",
"Given that the current state-of-the-art is driven by Transformer-based models, throughout our study, we use the CharacterBERT model.",
"CharacterBERT is uniquely suited for our study as it uses a CharacterCNN module (Peters et al., 2018) to produce single embeddings for any input token, built as a variant to BERT which relies on sub-word tokenization (El Boukkouri et al., 2020).",
"Before presenting our results, we discuss general characteristics of the space beyond extant words; we reiterate that this space is missed by word and sub-word-based models.",
"Due to CharacterBERT's use of English characters, we restrict our analysis to English character n -grams, and we study the properties of CharacterBERT embeddings including English-based n -grams outside of extant language.",
"By studying CharacterBERT's representations of meaning encoded in n -grams that do not appear in consistent (or any) context in its training data, our framework goes beyond the traditional distributional hypothesis paradigm.",
"In this way, we seek to understand core properties of information encoded in n -grams beyond their lexicalized semantics by simultaneously studying n -grams that contain different types of information.",
"1 We use randomly generated character sequences to create n -grams that contain primitive informa-1 In analogy, the theory of ensemble perception in developmental psychology offers a framework to understand the human ability to understand the gist' of multiple objects at once (Sweeny et al., 2015).",
"tion but no meaning.",
"We adapt Marr's notion of primitive visual information for primitive textual information (Marr and Hildreth, 1980), and make the analogue between vision and language because information is substrate independent (Deutsch and Marletto, 2015).",
"In our case, primitive textual information is lower-level communicative information which is present in both text with and without meaning.",
"Being textual, our randomly generated n -grams are not bound by the constraints of human speech, and may be phonologically impossible; these garble n -grams may be seen as an example of textual noise.",
"In the following subsections, we provide three examples of languagedistorted speech, paralanguage, and pseudowordswhich motivate our study of character-level embeddings for randomly generated character n -grams.",
"We then describe the complementary information encoded by word morphology.",
"In popular use, garble refers to a message that has been distorted (garbled), such as speech where meaning is corrupted by phonological distortions.",
"For example, the phrase reading lamp may become eeling am when garbled.",
"Garbled speech contains lesser, or zero, meaning compared to ungarbled speech, but the signal of speech media is nonetheless present as information, which according to Shannon (1951) may contain no meaning at all.",
"Garbled speech satisfies the classical five-part definition of communication provided by Shannon (2001); an information source (speaker) can transmit (verbalize) an informationally primitive message through the channel of speech media through the receiver (ears) to the destination (listener).",
"Paralinguistic vocalizations are specifically identifiable sounds beyond the general characteristics of speech (Noth, 1990) and present another example of communication beyond lexicalized semantics.",
"Paralinguistic vocalizations include characterizers , like moaning; and segregates , like uh-huh for affirmation.",
"The border between such paralinguistic vocalizations and lexicalized interjections with defined meanings is fuzzy (Noth, 1990).",
"Pseudowords are phonologically possible character n -grams without extant lexical meaning.",
"Word-likeness judgments reveal that human distinctions between pseudowords and phonologically impossible nonwords are gradational (Needle et al., 2020).",
"As a unique informational class, pseudowords have been used in language neuronal activation studies (Price et al., 1996), infant lexical-semantic processing (Friedrich and Friederici, 2005), in poetry through nonsense (Ede, 1975), and in literary analyses (Lecercle, 2012).",
"Pseudowords can also elicit similar interpretations and associations across independent participants (Davis et al., 2019a).",
"To consider pseudowords generatively, it is helpful to note that an alphabetic writing system covers not only every word but every possible word in its language (Deutsch, 2011); pseudowords can thus be thought of as possible-but-uninstantiated (coun-terfactual) extant wordse.g., cyberspace was a pseudoword before the internet.",
"We embed randomly generated pseudowords into our model to study their information content and relation to both extant words and randomly generated n -grams.",
"Morphology deals with the systems of natural language that create words and word forms from smaller units (Trost, 1992).",
"Embedding spaces and the distributional hypothesis offer insights into the relationship between character combination, morphology and semantics.",
"Notably, morphological irregularities complicate the statistics of global character-level findings in the embedding space, like through suppletion where word forms change idiosynchratically e.g. go 's past tense is went , or epenthesis where characters are inserted under certain phonological conditions e.g. fox pluralizes as fox e s (Trost, 1992); so too do the multiple correct' spellings of pseudowords under conventional phoneme-to-grapheme mappings (Needle et al., 2020).",
"Distinctions between morphological phenomena can also be hard to define; for example, the boundary between derivation and compounding is fuzzy (Trost, 1992).",
"As described above, state-of-the-art language models serve as a tool to study meaning as it emerges though the distributional hypothesis paradigm.",
"Existing work on the analysis of Transformers and BERT-based models have explored themes we are interested in, such as semantics (Ethayarajh, 2019), 7122 Figure 1: UMAP projection of CharacterBERT embeddings for extant words (blue), pseudowords (magenta), and randomly generated character n -grams (black).",
"syntax (Goldberg, 2019), morphology (Hofmann et al., 2020, 2021), and the structure of language (Jawahar et al., 2019).",
"However, all of this work limits itself to the focus of extant words due to the word and sub-word-based nature of these models.",
"We study the structure of the largely unexplored character n -gram space which includes extant language, pseudowords and garble character n -grams, seen through the representations created by CharacterBERT, as follows.",
"To explore how the character n -gram space is structured in the context of character based distributional semantics, we embed 40,000 extant English words, 40,000 randomly generated character n -grams, and 20,000 pseudowords.",
"We choose the 40,000 most used English words that have been annotated for concreteness/abstractness ratings (Brysbaert et al., 2014).",
"Randomly generated character n -grams are forced to have a string length distribution that matches the corpus of extant words we analyze.",
"To generate pseudowords, we use a popular pseudoword generator.",
"2 2 http://soybomb.com/tricks/words/ The CharacterBERT (El Boukkouri et al., 2020) general model has been trained on nearly 40 GB of Reddit data using character sequences.",
"We leverage this model to create representations of character n grams that may not have been seen in the training data.",
"This allows us to use the resulting 512 dimensional embeddings for exploration via visualisation, topology modelling via distances and projections, and classification error analysis.",
"To guide our exploration of the high-dimensional topology of the resulting embeddings, we use the UMAP dimensionality reduction technique (McInnes et al., 2018).",
"UMAP creates a low-dimensional embedding by searching for a low-dimensional projection of the data that has the closest possible equivalent fuzzy topological structure as the original representations, thereby preserving both local and global structure.",
"In Appendix A, we demonstrate that our key results are not sensitive to this choice of dimensionality reduction method.",
"formation axis that captures most variance among extant and randomly generated n -grams.",
"To assign n -grams an information axis score,' we minmax-normalize the UMAP coordinates along this axis.",
"Thus, our information axis establishes a link between extant language and garble, thereby connecting meaning and primitive information.",
"Figure 1 shows how CharacterBERT embeddings of extant, pseudoword, and randomly generated character n grams arrange themselves in this space.",
"We perform several statistical tests to differentiate between categories of character n -grams along the information axis.",
"First, Table 1 lists the median and standard deviation of minmax-normalized position along the information axis, demonstrating that extant words, pseudowords, and garble are clearly separated.",
"Note that the scatter within each n -gram class is much smaller than the distances between classes, indicating that our results are robust to variations in the garble and pseudoword samples.",
"Next, we use the Kolmogorov-Smirnov (KS; Massey Jr 1951) two-sample test to assess differences between the information axis distributions of our n -gram classes.",
"All of the KS tests very significantly indicate differences between types of character n -gram and parts of speech along the information axis ( p 0 . 001 ).",
"Furthermore, the KS statistic score is 0.94 for (extant, random), 0.83 for (pseudoword, random), and 0.70 for (extant, pseudoword), indicating that extant and random n -grams differ most significantly along the information axis (consistent with Figures 12).",
"The visualisation of the character n -grams suggests that a hyperplane classifier is suitable for separating",
"We use a support vector machine (Cortes and Vapnik, 1995) trained on half of our 40,000 commonly-used extant words and half of our computer-generated garble to classify unseen extant, garble and pseudoword character n -grams.",
"We use this method to explore the information axis in the high-dimensional embedding space.",
"The classifier achieves an accuracy of 98.9% on unseen extant language and garble character n -grams, suggesting we can learn about the embeddings through error analysis.",
"In particular, we found similarities among extant words classified as garble.",
"74 .",
"4% (270/363) were compound or derivative words, similar to many extant language terms that lie near the midpoint of the information axis.",
"19% (69/363) were foreign words like hibachi or dialect words like doohickey.",
"The garble classification errorsgarble classified as extant languagewere in small part due to our randomization method inadvertently creating extant language labelled as garble, accounting for 9 .",
"5% (36/377) errors we identify.",
"The garble classified as extant language mostly contained phonologically impossible elements, though some were pseudowords.",
"When pseudowords were forcibly classified into extant or garble character n -grams, more pseudowords were classified as extant language than garble (12894 as extant to 7106 as garble).",
"Labelling affirms these intuitions, with pseudowords 7124 like flought looking intuitively familiar and being readable.",
"Given CharacterBERT's massive Reddit training data, typos and localized language may account for the classifier's tendency to classify pseudowords as extant language.",
"Also, our embedding space only uses the 40,000 most common English words out of 208,000 distinct lexicalized lemma words (Brysbaert et al., 2016), which may impact spatial structure if included.",
"We use this section to discuss the structure of language across the information axis derived from our low-dimensional UMAP space.",
"We structure our analysis across this axis as it organises the relative structure of extant words vs. randomly generated character n -grams, while also distinguishing internal structure within the extant word space.",
"At the scale of global structure, the information axis highlights that extant words are separated from randomly generated character n -grams (Figure 1).",
"We note that the midpoint of all character n -gram classes is 0.5 on our information axis.",
"Pseudowords populate the region near the midpoint of the information axis, and also overlap with both extant English and garble character n -grams (Figure 2).",
"There is no distinct boundary between the three classes of n -grams, consistent with both morphological descriptions of compound and derivational words and descriptions of paralanguage as fuzzy.",
"This global structureand the structure internal to extant language (Figure 3)goes beyond the distributional hypothesis by including n -grams that do not appear in consistent (or any) contexts, like pseudowords and garble.",
"Pseudowords lie between extant and garble character n -grams, but there is no distinct boundary between pseudowords and the other classes of n -grams.",
"Extant language, pseudoword, and garble regions have different internal structure (Figure 1).",
"The garble region has comparatively less structure than the extant language region, though there is some internal variation, notably a cluster of character n -grams ending in the character s separated from the main garble region.",
"We qualitatively explore the classes of garble and pseudoword embeddings revealed by our analysis in Appendix B, which includes supplementary discussion of the potential relevance of these findings for linguistic theory.",
"In our UMAP projection, detailed structure emerges for extant words split by part-of-speech (Figure 3).",
"In particular KS statistics between all part-of-speech pairs significantly indicate that their distributions differ along the information axis.",
"Furthermore, KS statistic values are 0.12 for (noun, verb), 0.11 for (noun, adjective), 0.64 for (noun, adverb), 0.22 for (verb, adjective), 0.72 for (verb, adverb), and 0.64 for (adjective, adverb).",
"This suggests that adverbs are most cleanly separated from other parts of speech along the information axis (consistent with Figure 3), which may indicate that morphemes such as affixes have important effects in embedding space.",
"A detailed investigation is beyond the scope of this paper and may require analyses through alternative heuristics such as pseu-domorphology and lexical neighborhood density (Needle et al., 2020).",
"Many extant words near the midpoint of the information axis are, or may be, compound words; the boundary between derivative and compound words is thought to be fuzzy because many derivational suffixes developed from words are frequently used in compounding (Trost, 1992).",
"Both derivative and compound words populate other spaces of the extant language region, but conflicting definitions hamper straightforward statistical analysis.",
"Morphological traits such as adjectival suffixes ness , ism , and able , or the adverbial suffix ly correlate to clear embedding mappings, but the boundaries for morphological classes are not distinct.",
"Garble ending in s occupies a closer region to extant language than most other garble, arguably due to the semantic associations of ending in s (e.g. regarding pluralization) derived from the suffix s .",
"Note, morphological heuristics like affixation apply to lexicalized words but not garble.",
"Pseudowords ending in s share that region of garble ending in s, however, such seemingly plural pseudowords tend closer to extant language, reflecting the notion that word form similarity increases with semantic similarity (Dautriche et al., 2017).",
"Given the fuzziness of morphology and the opaqueness of English spelling (Needle et al., 2020), pseudowords ending in s may or may not be due to affixation.",
"We calculate the center of extant UMAP coordinates with no weighting and with weighting by minmax-normalized concreteness and used those points to define a concreteness axis , which demonstrates that concreteness varies in a direction roughly orthogonal to our information axis (see Figure 4).",
"The bootstrap-resampled angle distribution between information and concreteness axes is 86 .",
"6 1 .",
"2 degrees.",
"Thus, the information axis and word concreteness capture two crucial and largely distinct aspects of the many latent features underlying CharacterBERT representations.",
"This finding is particularly relevant in light of recent work showing not only that word concreteness is a psychologically rich dimension that shapes semantic processing (Brys-baert et al., 2016; Guilbeault et al., 2020), but also that word concreteness is surprisingly effective at enriching the predictive capacities of word embedding models, such as for the purpose of automated metaphor detection (Srinivasa Desikan et al., 2020).",
"We leave a detailed investigation of this finding, including its relation to the visual information (Brys-baert et al., 2016) carried by concrete and abstract words, to future work.",
"We also create a language model using the Prediction by Partial Matching ( PPM ) variable order Markov model ( VOMM ) to estimate the probability of each of these character n -grams (Begleiter et al., 2004).",
"The model calculates the logpdf for each character n -gram in which more commonly occurring character n -grams have a lower score, and less commonly occurring character n -grams receive a higher score.",
"The model is trained on extant words, then used to score all of the extant, pseudowords and garble character n -grams.",
"We use this score to capture the likelihood of character n -grams in our character sequence space (Figure 5).",
"These Markov model values correlate with our information axis measure.",
"In particular, the Spearman correlation coefficient between information axis and Markov chain information content is 0.4 (highly significant) for randomly generated n grams, and 0.007 (not significant) for extant words.",
"Thus, for random character n -grams, our information axis measure is correlated with statistical properties of the character n -grams from the Markov model (see the left panel of Figure 5).",
"However, our information axis measure more clearly separates extant and garble n -grams, indicating that it incorporates information beyond purely statistical properties of n -gram classes (see the right panel of Figure 5).",
"This suggests that the CharacterBERT model learns information beyond character-level statistical information, even for n -grams that never explicitly appear in the training data.",
"Using the CharacterBERT model, we embedded a large corpus of character level n -grams outside of extant language to study how the primitive information they contain relates to the semantic information carried by extant language.",
"The key findings of this paper are:",
"1. Extant words and randomly generated character n -grams are separated along a particular axis in our UMAP projection of CharacterBERT embedding space (Figures 12);",
"2. Pseudowords lie between extant and randomly generated n -grams along this axis, but there is no distinct boundary between these classes of n -grams (Figures 12);",
"3. The structure of CharacterBERT embeddings of extant language, including structure based on part-of-speech and morphology, is correlated with the information axis (Figure 3);",
"4. Word concreteness varies along a dimension that is roughly orthogonal to the information axis in our UMAP projection (Figure 4);",
"5. Separation between extant and randomly generated n -grams captured by CharacterBERT is correlated with and more coherent than that based purely on the statistical properties of n -grams (Figure 5).",
"These findings suggest that character-based Transformer models are largely able to explore the relation between extant words and randomly generated character strings.",
"In particular, character-level models capture complex structure in the space of words, pseudowords, and randomly generated n grams.",
"These findings are consistent with work suggesting that character-level and morpheme-aware representations are rich in meaning, even compared to word or sub-word models (Al-Rfou et al., 2019; El Boukkouri et al., 2020; Ma et al., 2020; Hofmann et al., 2020, 2021).",
"Our study is limited to extant words in English and randomly generated character n -grams using the English alphabet.",
"Given the unique impact of a specific language and alphabet on representation spaces, there is motivation to see whether the relationships we identify generalise to other languages and alphabets.",
"Finally, we reiterate that our analysis was limited to the last embedding layer of the CharacterBERT model; future work may focus on weights in earlier layers, including attention mechanisms explored by other BERTology studies (Clark et al., 2019; Jawahar et al., 2019).",
"By only analysing the final embedding layer, we study 7127 Figure 5: Left panel : Minmax-normalized position along the information axis shown in Figure 1 vs. minmax-normalized information content from our Markov Chain model, for extant words (blue) and randomly generated character n -grams (black).",
"the psychology' of such character-level models; in analogy, much may be gained by studying the neuroscience' of such models encoded in their attention weights (Wang, 2020).",
"Our study also has important practical implications for the widespread use of pseudowords as an experimental tool in psycholinguistic research.",
"Pseudowords are frequently used as stimuli to observe the psychological and neurocognitive processes underlying the interpretation of novel words (Price et al., 1996; Stark and McClelland, 2000; Keuleers and Brysbaert, 2010; Lupyan and Casasanto, 2015; Davis et al., 2019b).",
"However, the lion's share of this research treats all pseudoword stimuli as equivalent in their novelty, based on prima facie human judgments.",
"By contrast, our method shows that not all pseudowords are created equal.",
"Due to various features of character sequences, including morphological structure, some pseudowords encode disproportionately more information according to character-aware language models, and are therefore represented as significantly more similar to extant words, whereas other pseudowords are recognized by these models as random character sequences.",
"This variation is especially striking given that the algorithms used to generate pseudowords are highly constrained and designed to produce morphologically coherent words (Keuleers and Brysbaert, 2010); that some pseudowords are evaluated as random by CharacterBERT reveals not only asymmetries in the coherence of pseudowords that may be of psychological relevance, but also assumptions and limitations in terms of which morphological units CharacterBERT and related models recognize as signatures of extant words.",
"Our study thus provides a quantitative method for evaluating pseudoword plausibility, without relying on variable human judgments, while also revealing insights into key differences between how humans and contemporary language models evaluate the plausibility of pseudowords.",
"To allow for further explorations and replicabil-ity, we release all of our data and code on GitHub 3 .",
"Our findings reveal new avenues for future work using character-aware embeddings of extant, pseudoword, and garble n -grams, including analyses of nonsense poetry like Lewis Carroll's Jabber-wocky or of the innovative idiosyncrasies of rap lyricists and graffiti artists.",
"The embeddings we study may also complement philological studies (especially if dynamic analyses are employed), as well as research into novel category formation (Lupyan and Casasanto, 2015; Guilbeault et al., 2021).",
"Also, language acquisition studies of the distinction between language and noise may benefit from character-level embeddings beyond the realm of extant language (Yang, 2006; Carey, 2000).",
"By investigating a broadened embedding space to include randomly generated n -grams, we found new structures of meaning through the context of meaningless information; further studies may extend our garble-based approach across different media and modes to contribute to more general understandings of human meaning."
] | [
"abstain",
"objective",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"objective"
] |
[
"Understanding procedural language requires reasoning about both hierarchical and temporal relations between events.",
"For example, boiling pasta is a sub-event of making a pasta dish, typically happens before draining pasta, and requires the use of omitted tools (e.g. a strainer, sink...).",
"While people are able to choose when and how to use abstract versus concrete instructions, the NLP community lacks corpora and tasks for evaluating if our models can do the same.",
"In this paper, we introduce KIDSCOOK , a parallel script corpus, as well as a cloze task which matches video captions with missing procedural details.",
"Experimental results show that state-of-the-art models struggle at this task, which requires inducing functional commonsense knowledge not explicitly stated in text.",
"The level of detail used in natural language communication varies: descriptive or instructive text for experts may elide over details the reader can seamlessly infer, while text for more novice audiences may be more verbose.",
"A given document typically adheres to a single level of verbosity suited to its presumed audience (Grice, 1975), so learning correspondences between abstract and detailed descriptions of similar concepts from text is a challenging problem.",
"Commonsense knowledge of how complex events decompose into stereotypical sequences of simpler events is a necessary component of a system that can automatically understand and reason about different types of discourse.",
"Hierarchical correspondences between abstract and detailed representations of concepts and events were an important aspect of the original formulation of scripts for natural language understanding (Schank and Author now at Google.",
"Abelson, 1977; DeJong, 1981) but required handwritten data structures encoding world knowledge.",
"However, the automatic induction of such commonsense knowledge from open-domain noisy text corpora remains an open problem (Chambers, 2013; Weber et al., 2018; Zellers et al., 2018).",
"As a step towards solving this problem we consider textual descriptions of actions in a cooking domain.",
"We introduce a dataset, KIDSCOOK , targeted at exploring the automatic acquisition of correspondences between abstract and concrete descriptions of actions.",
"The dataset consists of higher-level single-sentence imperative descriptions paired with lower-level descriptions with elided details included.",
"Descriptions come from real grounded actions, built on top of the YouCookII video caption dataset (Zhou et al., 2017).",
"Figure 1 gives an example annotation from the dataset: the phrase drain the pasta, presented to an annotator with its corresponding video clip, was annotated as corresponding to four constituent steps appropriate as instruction for a child.",
"The constituent steps are simpler in the sense that they correspond to more atomic actions, but not necessarily in their linguistic complexity.",
"We identify over 1,500 procedures and tools which KIDSCOOK makes explicit but are assumed as commonsense world knowledge by YouCookII.",
"The KIDSCOOK dataset allows us to learn mappings between abstract and concrete descriptions via sequence-to-sequence prediction.",
"We apply several standard neural sequence-to-sequence models; however, since these models do not expose explicit, interpretable correspondences between abstract and concrete descriptions, we also propose the application of neural transduction models which capture correspondences with latent hard alignment variables.",
"We define a cloze-style evaluation to complement our dataset, in which models must predict the values of held-out tokens which target knowledge of tool usage, temporal ordering, and kitchen commonsense.",
"We find that our neural transduction models are able to match the predictive power of traditional neural sequence models while providing interpretable alignments between abstract and concrete subsequences useful for our primary goal of analysis of implicit hierarchical script knowledge.",
"Our approach situates script learning as a case of grounding.",
"For simplicity of exposition, let us assume there are three levels of abstraction to grounding: abstract concrete motor control .",
"Most prior work in grounding treats language monolithically 1 and ignores the issue of audience.",
"In practice, this means the task formulation or exposed API may implicitly bias the language to be more concrete.",
"By viewing the task as purely linguistic, we have no API or robot that constrains our language; instead, we define our audience as children.",
"By eliciting child-directed instructions, we collect concrete language capturing otherwise implicit world knowledge that a child would not know.",
"Because annotators assume a smart and capable but uninformed listener, we posit this language corresponds closely to the most concrete form in which language naturally occurs.",
"We construct a task on Amazon's Mechanical Turk, where workers are asked to explain a video action caption to a child.",
"2 Every instruction is paired with the original YouTube video and YouCook caption so the annotator could see how 1 Notable exceptions include the hierarchical instructions of (Regneri et al., 2013) and (Bisk et al., 2016).",
"the action was performed, rather than hallucinating additional details.",
"All captions received three simplifications.",
"The instructions ask users to focus on missing information and allow them up to five steps.",
"Finally, we explicitly asked annotators to simplify complex actions (e.g. dice ) that can be defined by a series of more basic actions (e.g. cut ).",
"Our KIDSCOOK corpus statistics are shown in Table",
"1. In total we collected over 10K action sequences ( 400K tokens).",
"The average caption is approximately 4x longer than a YouCook caption.",
"Most importantly 1,536 lemmas and 2,316 lexical types from KIDSCOOK 's vocabulary do not appear in any of the original captions.",
"This indicates that there are over 1,500 new concepts, tools, and procedures that were assumed by YouCookII but are now explicit in KIDSCOOK .",
"To investigate what new knowledge is being introduced and whether a model has captured it, we construct a cloze-style slot-filling task (Cham-bers, 2017; Hermann et al., 2015).",
"We drop key content words from the concrete realization of an abstract instruction and ask the model to predict them.",
"Several examples from the validation set are shown in Table",
"2. Correctly predicting the missing words requires knowledge of the manner of executing a task and the tools required.",
"To choose candidate words to drop, we only allow words that occur primarily in the concrete instructions.",
"Additionally, we do not drop stop words, numbers, or words occurring fewer than five times.",
"We do, however, drop units of measure ( cup , minute , etc.).",
"This ensures we create blanks whose answers are previously omitted concrete details.",
"Relatedly, under this filter the answer to a blank is very rarely an ingredient, as our goal is not to memorize recipes, but to infer the tool knowledge necessary to execute them.",
"In total, we whitelist 1,000 words that can be dropped to create blanks.",
"We prefer longer blanks when available to give preference to compound nouns (e.g. wire whisk ).",
"Finally, we do not drop any words ABS chop garlic into small pieces .",
"CON put garlic on cutting board .",
"press on back of knife with hand , cutting into small pieces.",
"from the concrete sentence if they occur in the abstract description.",
"This restriction eliminates any benefits that might have been achieved via models with copy mechanisms.",
"Examples that do not meet our criteria are removed from the corpus.",
"We investigate the utility of sequence-to-sequence models with attention (Bahdanau et al., 2015) to generate concrete realizations of abstract task descriptions.",
"We hypothesize that models that learn explicit alignments are particularly amenable to interpretable analysis on the task.",
"Therefore, in addition to using the global attention model of (Luong et al., 2015), we adapt the transducer model proposed by Yu et al. (2016), which uses learned latent discrete variables to model phrase-to-phrase alignments.",
"In contrast to many standard neural models, this approach enables us to incorporate prior knowledge about the alignment structure, and to extract interpretable alignments between task phrases.",
"Closely related architectures have been proposed for segmental sequence modeling (Wang et al., 2017) and phrase-based neural machine translation (Huang et al., 2018).",
"We train the transducer models using Viterbi EM (after doing marginal likelihood training for the initial iterations), as we found it gave higher predictive accuracy than marginal likelihood training only.",
"Following Yu et al. (2016) we experiment with both a fixed alignment transition probability model and a transition model with a neural parameterization.",
"Cloze task prediction is performed greedily.",
"3 At each slot the Viterbi alignment of the prefix of the sequence up to that slot is computed.",
"See appendix 7 for model details.",
"4 We also evaluate the performance of a language modelling baseline and a seq2seq model without attention (Sutskever et al., 2014), to compare the 3 During preliminary experiments beam search did not improve performance.",
"effect of not modeling alignment at all.",
"We expect all the models to implicitly capture aspects of world knowledge.",
"However, the discrete latent variable models provide Viterbi alignments over the training data, from which we can compile a look-up table with the extracted knowledge.",
"In neural attention models, this knowledge is only weakly recoverable: extracting information requires hand tuning attention thresholds and there is no direct way to extract contiguous alignments for multi-word phrases.",
"During generation, we provide the model with the number of words in each blank to be predicted.",
"We consider two setups for evaluating examples with multiple blanks, both assuming that predictions are made left-to-right: Oracle, where the gold prediction of each blank is fed into the model to condition on for future predictions, and Greedy, where the model prediction is used for future predictions.",
"We compute the proportion of exact word matches over each blank and the precision of the top k = 5 predictions for both setups.",
"Additionally we compute the average surprisal of the gold prediction (conditioning on oracle predictions).",
"The surprisal of a word (Attneave, 1959; Hale, 2001) is its negative log probability under the model: log ( P ( w i | w 1: i 1 )) .",
"The higher the probability of the ground truth, the lower the model's sur-prise at seeing it in that context.",
"Finally, as a quantitative proxy for interpretability, we report the length of the transducer mod-els' average Viterbi alignment span: our goal is a model which balances low average alignment lengths and high matching or ranking scores.",
"We report results on the prediction task in Table 4.",
"First we consider models trained only on our dataset: All the models that incorporate a notion of alignment do substantially better than those who abstract concrete concrete abstract parmesan sprinkle grated, grate, hold a grater, ... whisk eggs, mayonnaise, milk, combine, pour, stir, ... macaroni stove on high heat, large pot, bowl, ... spatula colors, thickens, coated, simmer, grill, ... egg place the boiled, gently crack the, crack, ... tongs shrimp, bratwurst, turn, bun, marinate, ... sauce stir hot, pour gravy, lower setting, find a spoon, ... cutting board onions, bell pepper, meat, bok choy, ... oil spray cooking, splashing, slowly pour, ... preheat oven, broil, medium, degrees, ...",
"do not.",
"We see that our transducer model with fixed alignment transition probabilities performs best in terms of predictive accuracy (exact match and top-5 precision), while the seqseq model with attention is the next best in most comparisons.",
"The model with parameterized transitions has the lowest surprisal though, as it is more confident about the alignment predictions it is making.",
"Using average alignment length to quantify whether the phrase alignments exhibit desirable structure, we see that the alignments found by the unparameterized transition model (average length 6.18) are significantly shorter than those of the parameterized model (average length 16.61).",
"Investigation shows that the paramaterized model mostly learns degenerate alignments, aligning most of the concrete sequence to either the start or end of the abstract sentence.",
"In contrast, qualitative analysis of the unparameterized transition model show that its alignments learn desirable correspondences (see Figure 2).",
"Therefore among our proposed models (trained on in-domain data only) the transducer with unparameterized transitions satisfies our desiderata of displaying both good predictive power for word generation, and learning interpretable alignments.",
"trained language models (Peters et al., 2018), we are interested if these approaches transfer to our cloze task.",
"We evaluate the OpenAI GPT transformer language model (Radford et al., 2018) with and without fine-tuning.Without fine-tuning this model does slightly worse than our best domain-specific model.",
"With fine-tuning, its accuracy is substantially higher, but it still suffers from the same fundamental limitations as our other models (see Table 5).",
"The transformer (Vaswani et al., 2017) attention is multi-headed and multi-layered which prohibits direct interpretability.",
"We visualize alignments of our transduction model over two partial sequences in Fig.",
"2. This shows which hidden vector of the abstract sentence aligned to every region of the concrete sequence.",
"Specifically, we see how tools like the big bowl , spoon , and tongs are introduced to facilitate the actions.",
"There are also implications, e.g. that high indicates grill .",
"For further analysis we extract alignments over the training corpus, linking each decoded phrase with the word from the encoding it used during generation.",
"We then aggregate these tuples into a table which we can filter (based on our whitelist) and sort (with PMI).",
"This process is imprecise as it discards the context in which the alignment occurs, but it nonetheless extracts many Abs shape each dough ball into a circle and add tomato sauce .",
"of the phenomena we would hope to see (Table 3).",
"The left-hand side of the table shows words from the abstract YouCook annotations and corresponding phrases in the concrete annotation.",
"For the righthand side we searched for common concrete terms that may be preceded or followed by other terms, and present the abstract terms they were most often generated by.",
"Finally, Table 5 shows three randomly chosen examples (from the validation set) of greedy decodings for slot filling with GPT fine-tuned on our dataset.",
"These examples demonstrate that, first, there are cases where GPT is successful or produces a semantically valid answer (e.g. fully vs completely ).",
"Second, as is common with greedy decoding, the model can get stuck in a loop (e.g. cut, cutting, cutting, ... ).",
"Finally, note there are nonsensical cases where the model appears to have discarded the abstract context (e.g. knife to add tomato sauce or freezer on a cold water ).",
"Many script learning systems are based on event co-occurrence and language modeling in large text corpora, and can infer implicit events without creating explicit situation-specific frame structures (Chambers and Jurafsky, 2008; Rudinger et al., 2015; Pichotta and Mooney, 2016).",
"Other systems induce situation-specific frames from text (Che-ung et al., 2013; Balasubramanian et al., 2013).",
"However, these methods do not explicitly target the commonsense correspondence between differing levels of detail of complex events.",
"Most relevant to this paper is the pioneering work of Regneri et al. (2013) as extended by Senina et al. (2014) and Rohrbach et al. (2014).",
"These papers present the TACOS corpus, consisting of natural language descriptions of activities in videos paired with low-level activity labels.",
"Senina et al. (2014) collect an additional level of multi-sentence annotations on the corpus, which allowing for video caption generation at multiple levels of detail.",
"Rohrbach et al. (2014) describe a similar corpus of natural descriptions of composite actions, useful for activity recognition in video.",
"These corpora differ in a number of important ways from KIDSCOOK ; in particular, the language has somewhat limited complexity and nat-uralness when describing complex scenarios, a phenomenon also observed in the robotics literature (Scalise et al., 2018).",
"Our data collection process avoids more formulaic language by eliciting child-directed descriptions.",
"We introduce a new hierarchical script learning dataset and cloze task in which models must learn commonsense world knowledge about tools, procedures and even basic physics to perform well.",
"Our aim is to begin a conversation about abstraction in language, how it is modeled, and what is implicitly hidden.",
"Our abstract and concrete instructions are grounded in the same videos yet differ dramatically due to their assumed audiences.",
"We show that a neural transduction model produces interpretable alignments for analyzing these otherwise latent correlations and phenomena.",
"This work was supported in part by NSF (IIS-1524371 & 1703166) and through DARPA's CwC program through ARO (W911NF-15-1-0543)."
] | [
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"objective",
"objective",
"result",
"other"
] |
[
"Extracting structured knowledge from product profiles is crucial for various applications in e-Commerce.",
"State-of-the-art approaches for knowledge extraction were each designed for a single category of product, and thus do not apply to real-life e-Commerce scenarios, which often contain thousands of diverse categories.",
"This paper proposes TXtract, a taxonomy-aware knowledge extraction model that applies to thousands of product categories organized in a hierarchical taxonomy.",
"Through category conditional self-attention and multi-task learning, our approach is both scalable, as it trains a single model for thousands of categories, and effective, as it extracts category-specific attribute values.",
"Experiments on products from a taxonomy with 4,000 categories show that TXtract outperforms state-of-the-art approaches by up to 10% in F1 and 15% in coverage across all categories.",
"Real-world e-Commerce platforms contain billions of products from thousands of different categories, organized in hierarchical taxonomies (see Figure 1).",
"Knowledge about products can be represented in structured form as a catalog of product attributes (e.g., flavor ) and their values (e.g., strawberry).",
"Understanding precise values of product attributes is crucial for many applications including product search, recommendation, and question answering.",
"However, structured attributes in product catalogs are often sparse, leading to unsatisfactory search results and various kinds of defects.",
"Thus, it is invaluable if such structured information can be extracted from product profiles such as product titles and descriptions.",
"Consider for instance the Ice Cream product of Figure",
"1. The corresponding title can potentially Work performed during internship at Amazon.",
"be used to extract values for attributes, such as Ben & Jerry's for brand , Strawberry Cheesecake for flavor , and 16 oz for capacity .",
"State-of-the-art approaches for attribute value extraction (Zheng et al., 2018; Xu et al., 2019; Rezk et al., 2019) have employed deep learning to capture features of product attributes effectively for the extraction purpose.",
"However, they are all designed without considering the product categories and thus cannot effectively capture the diversity of categories across the product taxonomy.",
"Categories can be substantially different in terms of applicable attributes (e.g., a Camera product should not have flavor ), attribute values (e.g., Vi-tamin products may have fruit flavor but Ba-nana products should not) and more generally, text patterns used to describe the attribute values (e.g., the phrase infused with is commonly followed by a scent value such as lavender in Hair Care products but not in Mattresses products).",
"In this paper, we consider attribute value extraction for real-world hierarchical taxonomies with thousands of product categories, where directly applying previous approaches presents limitations.",
"On the one extreme, ignoring the hierarchical structure of categories in the taxonomy and assuming a single flat category for all products does not capture category-specific characteristics and, as we will show in Section 5, is not effective.",
"On the other extreme, training a separate deep neural network for each category in the product taxonomy is prohibitively expensive, and can suffer from lack of training data on small categories.",
"To address the limitations of previous approaches under this challenging setting, we propose a framework for category-specific attribute value extraction that is both efficient and effective.",
"Our deep neural network, TXtract, is taxonomy-aware : it leverages the hierarchical taxonomy of product categories and extracts attribute values for a product conditional to its category, such that it automatically associates categories with specific attributes, valid attribute values, and category-specific text patterns.",
"TXtract is trained on all categories in parallel and thus can be applied even on small categories with limited labels.",
"The key question we need to answer is how to condition deep sequence models on product categories .",
"Our experiments suggest that following previous work to append category-specific artificial tokens to the input sequence, or concatenate category embeddings to hidden neural network layers is not adequate.",
"There are two key ideas behind our solution.",
"First, we use the category information as context to generate category-specific token embeddings via conditional self-attention.",
"Second, we conduct multi-task training by meanwhile predicting product category from profile texts; this allows us to get token embeddings that are discriminative of the product categories and further improve attribute extraction.",
"Multi-task training also makes our extraction model more robust towards wrong category assignment, which occurs often in real e-Commerce websites.",
"1 To the best of our knowledge, TXtract is the first deep neural network that has been applied to attribute value extraction for hierarchical taxonomies with thousands of product categories.",
"In particular, we make three contributions.",
"1 Examples: (1) an ethernet cable assigned under the Hair Brushes: https://www.amazon.com/dp/B012AE5EP4; (2) an eye shadow product assigned under Travel Cases: https://www.amazon.com/dp/B07BBM5B33.",
"Screenshots of these product profiles are taken in 12/2019 and available at the Appendix.",
"1. We develop TXtract, a taxonomy-aware deep neural network for attribute value extraction from product profiles for multiple product categories.",
"In TXtract, we capture the hierarchical relations between categories into category embeddings , which in turn we use as context to generate category-specific token embeddings via conditional self-attention.",
"2. We improve attribute value extraction through multi-task learning: TXtract jointly extracts attribute values and predicts the product's categories by sharing representations across tasks.",
"3. We evaluate TXtract on a taxonomy of 4,000 product categories and show that it substantially outperforms state-of-the-art models by up to 10% in F1 and 15% in coverage across all product categories.",
"Although this work focuses on e-Commerce, our approach to leverage taxonomies can be applied to broader domains such as finance, educa-tion, and biomedical/clinical research.",
"We leave experiments on these domains for future work.",
"The rest of this paper is organized as follows.",
"Section 2 discusses related work.",
"Section 3 presents background and formally defines the problem.",
"Section 4 presents our solution and Section 5 describes experimental results.",
"Finally, Section 6 concludes and suggests future work.",
"Here, we discuss related work on attribute value extraction (Section 2.1), and multi-task learning/meta-learning (Section 2.2).",
"Attribute value extraction was originally addressed with rule-based techniques (Nadeau and Sekine, 2007; Vandic et al., 2012; Gopalakrish-nan et al., 2012) followed by supervised learning techniques (Ghani et al., 2006; Putthividhya and Hu, 2011; Ling and Weld, 2012; Petrovski and Bizer, 2017; Sheth et al., 2017).",
"Most recent techniques consider open attribute value extraction: emerging attribute values can be extracted by sequence tagging, similar to named entity recognition (NER) (Putthividhya and Hu, 2011; Chiu and Nichols, 2016; Lample et al., 2016; Yadav and Bethard, 2018).",
"State-of-the-art methods employ deep learning for sequence tagging (Zheng et al., 2018; Xu et al., 2019; Rezk et al., 2019).",
"However, all previous methods can be adapted to a small number of categories and require many labeled datapoints per category.",
"2 Even the Active Learning method of Zheng et al. (2018) requires humans to annotate at least hundreds of carefully selected examples per category.",
"Our work differs from previous approaches as we consider thousands of product categories organized in a hierarchical taxonomy.",
"Our framework is related to multi-task learning (Caruana, 1997) as we train a single model simultaneously on all categories (tasks).",
"Traditional approaches consider a small number of different tasks, ranging from 2 to 20 and employ hard parameter sharing (Alonso and Plank, 2017; Yang et al., 2017; Ruder, 2019): the first layers of neural networks are shared across all tasks, while the separate layers (or heads) are used for each individual task.",
"In our setting with thousands of different categories (tasks), our approach is efficient as we use a single (instead of thousands) head and effective as we distinguish between categories through low-dimensional category embeddings.",
"Our work is also related to meta-learning approaches based on task embeddings (Finn et al., 2017; Achille et al., 2019; Lan et al., 2019): the target tasks are represented in a low-dimensional space that captures task similarities.",
"However, we generate category embeddings that reflect the already available, hierarchical structure of product categories in the taxonomy provided by experts.",
"We now provide background on open attribute value extraction (Section 3.1) and define our problem of focus (Section 3.2).",
"Most recent approaches for attribute value extraction rely on the open-world assumption to discover attribute values that have never been seen during training (Zheng et al., 2018).",
"State-of-the-art approaches address extraction with deep sequence tagging models (Zheng et al., 2018; Xu et al., 2 Zheng et al. (2018) considered 3 categories: Dog Dood, Cameras, and Detergent.",
"Xu et al. (2019) consider 1 category: Sports & Entertainment.",
"Rezk et al. (2019) considered 21 categories and trained a separate model for each category.",
"Input Ben & Jerry's black cherry cheesecake ice cream Output O O O B I E O O Table 1: Example of input/output tag sequences for the flavor attribute of an ice cream product.",
"2019; Rezk et al., 2019): each token of the input sequence x = ( x 1 , . . . , x T ) is assigned a separate tag from {B, I, O, E}, where B, I, O, and E represent the beginning, inside, outside, and end of an attribute, respectively.",
"(Not extracting any values corresponds to a sequence of O-only tags.)",
"Table 1 shows an input/output example of flavor value extraction from (part of) a product title.",
"Given this output tag sequence, black cherry cheesecake is extracted as a flavor for the ice cream product.",
"We represent the product taxonomy as a tree C , where the root node is named Product and each taxonomy node corresponds to a distinct product category: c C .",
"A directed edge between two nodes represents the category-to-subcategory relationship.",
"A product is assigned to a category node in C .",
"In practice, there are often thousands of nodes in a taxonomy tree and the category assignment of a product may be incorrect.",
"We now formally define our problem as follows.",
"DEFINITION: Consider a product from a category c and the sequence of tokens x = ( x 1 , . . . , x T ) from its profile, where T is the sequence length.",
"Let a be a target attribute for extraction.",
"Attribute extraction identifies subsequences of tokens from x , each sub-sequence representing a value for a .",
"For instance, given (1) a product title x = Ben & Jerry's Strawberry Cheesecake Ice Cream 16 oz, (2) a product category c = Ice Cream, and (3) a target attribute = flavor , we would like to extract Strawberry Cheesecake as a flavor for this product.",
"Note that we may not see all valid attribute values during training .",
"In this paper, we address open attribute value extraction using a taxonomy-aware deep sequence tagging model, TXtract.",
"Figure 2 shows the model architecture, which contains two key components: attribute value extraction and product category ProductEnc x 1 Product Profile x T y 1 y T h 1 h T Product Category CRF c Product Taxonomy e c CondSelfAtt Category Embedding h 1 h T h c \u0000 Product Embedding Predicted Category Taxonomy-Aware Product Category Prediction Taxonomy-Aware Attribute Value Extraction CategoryEnc Att CategoryCLF Figure 2: TXtract architecture: tokens ( x 1 , . . . , x T ) are classified to BIOE attribute tags ( y 1 , . . . , y T ) by conditioning to the product's category embedding e c .",
"prediction, accounting for the two tasks in multitask training.",
"Both components are taxonomy aware, as we describe next in detail.",
"TXtract leverages the product taxonomy for attribute value extraction.",
"The underlying intuition is that knowing the product category may help infer attribute applicability and associate the product with a certain range of valid attribute values.",
"Our model uses the category embedding in conditional self-attention to guide the extraction of category-specific attribute values.",
"The product encoder (ProductEnc) represents the text tokens of the product profile ( x 1 , . . . , x T as low-dimensional, real-valued vectors:",
"To effectively capture long-range dependencies between the input tokens, we use word embeddings followed by bidirectional LSTMs (BiL-STMs), similar to previous state-of-the-art approaches (Zheng et al., 2018; Xu et al., 2019).",
"Our category encoder (CategoryEnc) encodes the hierarchical structure of product categories",
"such that TXtract understands expert-defined relations across categories, such as Lager is a subcategory of Beer.",
"In particular, we embed each product category c (taxonomy node) into a low-dimensional latent space: e c = CategoryEnc( c ) R m .",
"To capture the hierarchical structure of the product taxonomy, we embed product categories into the m -dimensional Poincar ball (Nickel and Kiela, 2017), because its underlying geometry has been shown to be appropriate for capturing both similarity and hierarchy.",
"The key component for taxonomy-aware value extraction is category conditional self-attention (CondSelfAtt).",
"CondSelfAtt generates category-specific token embeddings ( h i R d ) by conditioning on the category embedding e c : h 1 , . . . h T = CondSelfAtt(( h 1 , . . . , h T ) , e c ) .",
"(3) To leverage the mutual interaction between all pairs of token embeddings h t , h t (cid:48) and the category embedding e c we use self-attention and compute pairwise sigmoid attention weights: t,t (cid:48) = ( w T g t,t (cid:48) + b ) , t, t (cid:48) = 1",
"We compute scores g t,t (cid:48) using both the token embeddings h t , h t (cid:48) and the category embedding e c :",
"where W 1 R p d , W 2 R p d , W 3 R p m , w R p are trainable attention matrices and b g R p , b R , are trainable biases.",
"The T T attention matrix A = a t,t (cid:48) stores the pairwise attention weights.",
"The contextualized token embeddings are computed as: h t = T (cid:88) t (cid:48) =1 t,t (cid:48) h t (cid:48) .",
"We feed the contextualized token representations h = ( h 1 , . . . , h T ) to CRFs to get the sequence of BIOE tags with the highest probability:",
"4.1.5 Training for Attribute Value Extraction Our training objective for attribute value extraction is to minimize the negative conditional log-likelihood of the model parameters on N training products x i with ground truth labels y i 1 . . . , y iT : L a = N (cid:88) i =1 log P r ( y i 1 , . . . , y iT | x i , c i ) (8) We train our model on all categories in parallel, thus leveraging for a given category products from related categories.",
"To generate training sequence labels from the corresponding attribute values, we use the distant supervision framework of Mintz et al. (2009), similar to Xu et al. (2019), by generating tagging labels according to existing (sparse) values in the Catalog.",
"We now describe how we train TXtract for the auxiliary task of product category prediction through multi-task learning.",
"Our main idea is that by encouraging TXtract to predict the product categories using only the product profile, the model will learn token embeddings that are discriminative of the product categories.",
"Thus, we introduce an inductive bias for more effective category-specific attribute value extraction.",
"Our attention component (Att) represents the product profile ( x 1 , . . . , x T ) as a single vector h R n computed through the weighted combination of the ProductEnc's embeddings ( h 1 , . . . , h T ) :",
"This weighted combination allows tokens that are more informative for a product's category to get higher attention weights 1 , . . . , T [0 , 1] .",
"For example, we expect x t = frozen to receive a relatively high t for the classification of a product to the Ice Cream category.",
"We compute the attention weights as: t = softmax( u Tc tanh( W c h t + b c )) , (10) where W c R q d , b c R q , u c R q are trainable attention parameters.",
"Our category classifier (CategoryCLF) classifies the product embedding h to the taxonomy nodes.",
"In particular, we use a sigmoid classification layer to predict the probabilities of the taxonomy nodes: p 1 , . . . , p | C | = sigmoid( W d h + b d ) , (11) where W d R | C | d and b d R | C | are trainable parameters.",
"We compute sigmoid (instead of soft-max) node probabilities because we treat category prediction as multi-label classification, as we describe next.",
"Training for flat classification of products to thousands of categories is not effective because the model is fully penalized if it does not predict the exact true category c while at the same time ignores parent-children category relations.",
"Here, we conduct hierarchical classification by incorporating the hierarchical structure of the product taxonomy into a taxonomy-aware loss function.",
"The insight behind our loss function is that a product assigned under c could also be assigned under any of the ancestors of c .",
"Thus, we consider hierarchical multi-label classification and encourage TXtract to assign a product to all nodes in the path from c to the root, denoted by ( c K , c K 1 , . . . , c 1 ), where K is the level of the node c in the taxonomy tree.",
"The model is thus encouraged to learn the hierarchical taxonomy relations and will be penalized less if it predicts high probabilities for ancestor nodes (e.g., \"Beer\" instead of Lager in Figure 1).",
"Our minimization objective is the weighted version of the binary cross-entropy (instead of unweighted categorical cross-entropy) loss: 3 L b = (cid:88) c C w c ( y c log p c + (1 y c ) log(1 p c )) , (12) For the nodes in the path from c to the root ( c K , c K 1 , . . . , c 1 ), we define positive labels y c = 1 and weights w c that are exponentially decreasing ( w 0 , w 1 , . . . , w K 1 ), where 0 < w 1 is a tunable hyper-parameter.",
"The remaining nodes in C receive negative labels y c = 0 and fixed weight w c = w K 1 .",
"We jointly train TXtract for attribute value extraction and product category prediction by combining the loss functions of Eq.",
"(8) and Eq.",
"(12): L = L a + (1 ) L b , (13) where [0 , 1] is a tunable hyper-parameter.",
"Here, we employ multi-task learning, and share ProductEnc across both tasks.",
"We empirically evaluated TXtract and compared it with state-of-the-art models and strong baselines for attribute value extraction on 4000 product categories.",
"TXtract leads to substantial improvement across all categories, showing the advantages of leveraging the product taxonomy.",
"Dataset: We trained and evaluated TXtract on products from public web pages of Amazon.com.",
"We randomly selected 2 million products from 4000 categories under 4 general domains (sub-trees) in the product taxonomy: Grocery, Baby product, Beauty product, and Health product.",
"Experimental Setup: We split our dataset into training (60%), validation (20%), and test (20%) sets.",
"We experimented with extraction of flavor , scent , and brand values from product titles, and 3 For simplicitly in notation, we define Eq 12 for a single product.",
"with ingredient values from product titles and descriptions.",
"For each attribute, we trained TXtract on the training set and evaluated the performance on the held-out test set.",
"Evaluation Metrics: For a robust evaluation of attribute value extraction, we report several metrics.",
"For a test product, we consider as true positive the case where the extracted values match at least one of the ground truth values (as some of the ground truth values may not exist in the text) and do not contain any wrong values.",
"4 We compute Precision (Prec) as the number of matched products divided by the number of products for which the model extracts at least one attribute value; Recall (Rec) as the number of matched products divided by the number of products associated with attribute values; and F1 score as the harmony mean of Prec and Rec.",
"To get a global picture of the model's performance, we consider micro-average scores (Mi*), which first aggregates products across categories and computes Prec/Rec/F1 globally.",
"To evaluate per-category performance we consider macro-average scores (Ma*), which first computes Prec/Rec/F1 for each category and then aggregates per-category scores.",
"To evaluate the capability of our model to discover (potentially new) attribute values, we also report the Value vocabulary (Vocab) as the total number of unique attribute values extracted from the test set (higher number is often better); and Coverage (Cov), as the number of products for which the model extracted at least one attribute value, divided by the total number of products.",
"For product category (multi-label) classification we reported the area under Precision-Recall curve (AUPR), Precision, Recall, and F1 score.",
"Model Configuration: We implemented our model in Tensorflow (Abadi et al., 2016) and Keras.",
"5 For a fair comparison, we consider the same configuration as OpenTag for the ProductEnc (BiLSTM) 6 and CRF components.",
"For model configuration details see the appendix.",
"4 For example, if the ground-truth is [ v 1 ] but the system extracts [ v 1 , v 2 , v 3 ], the extraction is considered as incorrect.",
"5 https://keras.io/ 6 We expect to see further performance improvement by considering pre-trained language models (Radford et al., 2018; Devlin et al., 2019) for ProductEnc, which we leave for future work.",
"introduced additional strong baselines:",
"1. OpenTag: the model of Zheng et al. (2018).",
"It is a special case of our system that consists of the ProductEnc and CRF components without leveraging the taxonomy.",
"2. Title+*: a class of models for conditional attribute value extraction, where the taxonomy is introduced by artificially appending extra tokens x (cid:48) 1 , . . . , x (cid:48) T (cid:48) and a special separator token (< SEP >) to the beginning of a product's text, similar to Johnson et al. (2017): x (cid:48) = ( x (cid:48) 1 , . . . , x (cid:48) T (cid:48) , < SEP >, x 1 , . . . , x T ) Tokens x (cid:48) 1 , . . . , x (cid:48) T (cid:48) contain category information such as unique category id (Title+id), category name (Title+name), or the names of all categories in the path from the root to the category node, separated by an extra token < SEP2 > (Title+path).",
"3. Concat-*: a class of models for taxonomy-aware attribute value extraction that concatenate the category embedding to the word embedding (-wemb) or hidden BiLSTM embedding layer (-LSTM) instead of using conditional self-attention.",
"We evaluate Euclidean embeddings (Concat-*-Euclidean) and Poincar embeddings (Concat-*-Poincar).",
"4. Gate: a model that leverages category embeddings e c in a gating layer (Cho et al., 2014; Ma et al., 2019): h t = h t ( W 4 h t + W 5 e c ) , where W 4 R p d , W 5 R p m are trainable matrices, and denotes element-wise multiplication.",
"Our conditional self-attention is different as it leverages pairwise instead of single-token interactions with category embeddings.",
"5. CondSelfAtt: the model with our conditional self-attention mechanism (Section 4.1.3).",
"CondSelfAtt extracts attribute values but does not predict the product category.",
"6. MT-*: a multi-task learning model that jointly performs ( not taxonomy-aware) attribute value extraction and category prediction.",
"MT-flat assumes flat categories, whereas MT-hier considers the hierarchical structure of the taxonomy (Section 4.2.3).",
"7. TXtract: our model that jointly performs taxonomy-aware attribute value extraction (same as CondSelfAtt) and hierarchical category prediction (same as MT-hier).",
"Here, we do not report previous models (e.g., BiLSTM-CRF) for sequence tagging (Huang et al., 2015; Kozareva et al., 2016; Lample et al., 2016), as OpenTag has been shown to outperform these models in Zheng et al. (2018).",
"Moreover, when considering attributes separately, the model of Xu et al. (2019) is the same as OpenTag, but with a different ProductEnc component; since we use the same ProductEnc for all alternatives, we expect/observe the same trend and do not report its performance.",
"Table 2 reports the results across all categories.",
"For detailed results see Figure 6 in Appendix.",
"Over all categories, our taxonomy-aware TXtract substantially improves over the state-of-the-art OpenTag by up to 10.1% in Micro F1, 14.6% in coverage, and 93.8% in vocabulary (for flavor ).",
"Table 3 shows results for the four domains of our taxonomy under different training granularities: training on all domains versus training only on the target domain.",
"Regardless of the configuration, TXtract substantially outperforms OpenTag, showing the general advantages of our approach.",
"Interestingly, although training a single model on all of the four domains obtains lower F1 for Flavor , it obtains better results for Scent : training fewer models does not necessarily lead to Domain OpenTag/TXtract Train Test Attr.",
"Table 4 reports the performance of several alternative approaches for flavor value extraction across all categories.",
"OpenTag does not leverage the product taxonomy, so it is outperformed by most approaches that we consider in this work.",
"Implicit vs. explicit conditioning on categories.",
"Title+* baselines fail to leverage the taxonomy, thus leading to lower F1 score than OpenTag: implicitly leveraging categories as artificial tokens appended to the title is not effective in our setting.",
"Representing the taxonomy with category embeddings leads to significant improvement over OpenTag and Title+* baselines: even simpler approaches such as Concat-*-Euclidean outperform OpenTag across all metrics.",
"However, Concat-* and Gate-* do not leverage category embeddings as effectively as CondSelfAtt: conditioning on the category embedding for the computation of the pair-wise attention weights in the self-attention layer appears to be the most effective approach for leveraging the product taxonomy.",
"Multi-task Learning.",
"In Table 4, both MT-flat and MT-hier, which do not condition on the product taxonomy, outperform OpenTag on attribute value extraction: by learning to predict the product category, our model implicitly learns to condition on the product category for effective attribute value extraction.",
"MT-hier outperforms MT-flat: leveraging the hierarchical structure of the taxonomy is more effective than assuming flat categories.",
"Table 5 shows that category prediction is more effective when considering the hierarchi-Model TX MT Micro F1 OpenTag -57.5 Title+id (cid:88) -55.7 3.1% Title+name (cid:88) -56.9 1.0% Title+path (cid:88) -54.3 5.6% Concat-wemb-Euclidean (cid:88) -60.1 4.5% Concat-wemb-Poincar (cid:88) -60.6 5.4% Concat-LSTM-Euclidean (cid:88) -60.1 4.5% Concat-LSTM-Poincar (cid:88) -60.8 5.7% Gate-Poincar (cid:88) -60.6 5.4% CondSelfAtt-Poincar (cid:88) -61.9 7.7 MT-flat -(cid:88) 60.9 5.9% MT-hier -(cid:88) 61.5 7.0% Concat & MT-hier (cid:88) (cid:88) 62.3 8.3% Gate & MT-hier (cid:88) (cid:88) 61.1 6.3% CondSelfAtt & MT-hier (cid:88) (cid:88) 63.3 10.1% Table 4: Ablation study for flavor extraction across 4,000 categories.",
"cal structure of the categories into our taxonomy-aware loss function than assuming flat categories.",
"Poincar embeddings effectively capture the hierarchical structure of the product taxonomy: Figure 3a plots the embeddings of product categories in the 2-dimensional Poincar disk.",
"7 Figure 3b plots the embeddings trained in the 50-dimensional Poincar ball and projected to the 2-dimensional Euclidean space through t-SNE (Maaten and Hinton, 2008).",
"Figure 4 shows examples of product titles and attribute values extracted by OpenTag or TXtract.",
"TXtract is able to detect category-specific values: in Figure 4a, Purple Lemonade is a valid flavor for Vitamin Pills but not for most of other categories.",
"OpenTag, which ignores product categories, fails to detect this value while TXtract 7 We train 2-dimensional Poincar embeddings only for visualization.",
"(a) Taxonomy embeddings in the 2-dimensional Poincar disk, where the distance of points grows exponentially to the radius.",
"Leaf nodes are placed close to the boundary of the disk.",
"(b) Taxonomy embeddings projected from the 50-dimensional Poincar ball to the 2-dimensional Euclidean space using t-SNE.",
"Small clusters correspond to taxonomy sub-trees.",
"successfully extracts it as a flavor .",
"TXtract also learns attribute applicability: in Figure 4d, OpenTag erroneously extracts palette as scent for an Eyeshadow product, while this product should not have scent ; on the other hand, TXtract, which considers category embeddings, does not extract any scent values for this product.",
"We present a novel method for large-scale attribute value extraction for products from a taxonomy with thousands of product categories.",
"Our proposed model, TXtract, is both efficient and effective: it leverages the taxonomy into a deep neural network to improve extraction quality and can extract attribute values on all categories in parallel.",
"TXtract significantly outperforms state-of-the-art approaches and strong baselines under a taxonomy with thousands of product categories.",
"Interesting future work includes applying our techniques to different taxonomies (e.g., biomedical) and training a model for different attributes.",
"The authors would like to sincerely thank Ron Benson, Christos Faloutsos, Andrey Kan, Yan Liang, Yaqing Wang, and Tong Zhao for their insightful comments on the paper, and Gabriel Blanco, Alexandre Manduca, Saurabh Deshpande, Jay Ren, and Johanna Umana for their constructive feedback on data integration for the experiments."
] | [
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"result",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"result",
"abstain",
"objective",
"objective",
"other",
"abstain",
"objective",
"method",
"result",
"result",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"other",
"method",
"abstain",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"abstain",
"method",
"other",
"other",
"abstain",
"other",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"other"
] |
[
"Existing pre-trained language models (PLMs) are often computationally expensive in inference, making them impractical in various resource-limited real-world applications.",
"To address this issue, we propose a dynamic token reduction approach to accelerate PLMs' inference, named TR-BERT, which could flexibly adapt the layer number of each token in inference to avoid redundant calculation.",
"Specially, TR-BERT formulates the token reduction process as a multi-step token selection problem and automatically learns the selection strategy via reinforcement learning.",
"The experimental results on several downstream NLP tasks show that TR-BERT is able to speed up BERT by 2 5 times to satisfy various performance demands.",
"Moreover, TR-BERT can also achieve better performance with less computation in a suite of long-text tasks since its token-level layer number adaption greatly accelerates the self-attention operation in PLMs.",
"The source code and experiment details of this paper can be obtained from https://github.com/ thunlp/TR-BERT .",
"Large-scale pre-trained language models (PLMs) such as BERT (Devlin et al., 2019), XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019) have shown great competence in learning contextual representation of text from large-scale corpora.",
"With appropriate fine-tuning on labeled data, PLMs have achieved promising results on various NLP applications, such as natural language inference (Zhang et al., 2020b), text classification (Sun et al., 2019a) and question answering (Talmor and Berant, 2019).",
"Along with the significant performance improvements, PLMs usually have substantial computational cost and high inference latency, which presents challenges to their practicalities in resource-limited real-world applications, such as Corresponding author: M. Sun ([email protected]) real-time applications and hardware-constrained mobile applications.",
"Even worse, these drawbacks become more severe in long-text scenarios because self-attention operation in PLMs scales quadratically with the sequence length.",
"Therefore, researchers have made intensive efforts in PLM's inference acceleration recently.",
"The mainstream approach is to reduce the layer number of PLMs such as knowledge distillation models (Sanh et al., 2019; Sun et al., 2019b), and adaptive inference models (Xin et al., 2020; Liu et al., 2020).",
"Such layer-wise pruning reduces a tremendous amount of computation, but it sacrifices the models' capability in complex reasoning.",
"Previous works (Sanh et al., 2019; Sun et al., 2019b) have found that the shallow model usually performs much worse on the relatively complicated question answering tasks than text classification tasks.",
"It is straightforward that pruning the entire layer of PLMs may not be an optimal solution in all scenarios.",
"In this paper, we introduce a dynamic token reduction method TR-BERT to find out the well-encoded tokens in the layer-by-layer inference process, and save their computation in subsequent layers.",
"The idea is inspired by recent findings that PLMs capture different information of words in different layers (e.g., BERT focuses on the word order information (Lin et al., 2019) in the bottom layers, obtains the syntactic information (Hewitt and Manning, 2019) in the middle layers, and computes the task-specific information in the top layers (Rogers et al., 2020)).",
"Hence, we could adapt different tokens to different layers according to their specific roles in the context.",
"As shown in Figure 1, TR-BERT formulates the token reduction process as a multi-step selection problem.",
"Specially, for each selection phase, TR-BERT finds out the words that require high-level semantic representations, and then selects them to higher layers.",
"The main challenge in TR-BERT is how to determine each token's importance for text BERT Layers Classifier PolicyNetwork PolicyNetwork Action:Select/Skip Action:Select/Skip Token Reduction logPr = Y !",
"understanding in the token selection.",
"It is highly task-dependent and requires to consider the correlation and redundancy among various tokens.",
"TR-BERT employs the reinforcement learning (RL) method to learn the dynamic token selection strategy automatically.",
"After the token reduction, the RL reward involves the confidence of the classi-fier's prediction based on the pruned network to reflect the quality of token selection.",
"Moreover, we also add a penalty term about the number of selected tokens to the reward, by adjusting which, TR-BERT can utilize the different pruning intensities in response to various performance requirements.",
"In TR-BERT, by selecting a few important tokens to go through the entire pipeline, the inference speed turns much faster and no longer grows quadratically with the sequence length.",
"We conduct experiments on eleven NLP benchmarks.",
"Experimental results show that TR-BERT can accelerate BERT inference by 2 5 times to meet various performance demands, and significantly outperform previous baseline methods on question answering tasks.",
"It verifies the effectiveness of the dynamic token reduction strategy.",
"Moreover, benefiting from the long-distance token interaction, TR-BERT with 1 , 024 input length reaches higher performance with less inference time compared to the vanilla BERT in a suite of long-text tasks.",
"Transformer architecture.",
"After that, we conduct pilot experiments as well as empirical analyses for the lower and upper bound of the token reduction in this section.",
"The Transformer architecture (Vaswani et al., 2017) has been widely adopted by the pre-trained language models (PLMs) for inheriting its high capacity.",
"Basically, each Transformer layer wraps a Self-Attention module (Self-ATT) and a Feed-Forward-Network module (FFN) by the residual connection and layer normalization.",
"Formally, given a sequence of n words, the hidden state of the i -th layer, H i = ( h 1 , h 2 , . . . , h n ) , is computed from the previous layer state: M i 1 = LN ( H i 1 + Self-ATT ( H i 1 )) , H i = LN ( M i 1 + FFN ( M i 1 )) , (1) where i [1 , L ] , L is the number of stacked Transformer layers, LN denotes the LayerNorm layer.",
"For each Transformer layer, the complexity of the Self-Attention module scales quadratically with the sequence length.",
"Therefore, the speed of Transformer architecture will decline heavily when the sequences become longer.",
"Previous findings (Rogers et al., 2020) reveal that some words, such as function words, do not require high-layer modeling, since they store little information and have been well handled by PLMs in bottom layers.",
"Hence, selecting only the important words for high-layer computation may be a possible way to accelerate the PLMs' inference.",
"To verify this assumption, we conduct a theoretical token elimination experiment in question answering (on SQuAD 2.0 (Rajpurkar et al., 2018)) and text classification (on IMDB (Maas et al., 2011)).",
"We use the full-layer representations for the selected tokens and the early-layer representation of the deleted tokens for the prediction.",
"To be specific, we eliminate tokens immediately after the l = 4 th layer and adopt the following three strategies to select the retained tokens: Random Strategy (Lower Bound) selects tokens randomly, assuming that all tokens are equivalent for understanding.",
"Residual Strategy (Upper Bound) directly utilizes the model prediction of the original model to guide the token selection.",
"Specially, we define a token's importance according to the influence on the model prediction when it's not selected.",
"When substituting the r -th layer representation H r with the l -th layer representation H l ( r > l ) , we define the approximate variation to model loss as the token importance: I = loss H r ( H r H l ) .",
"Here, we set r = 9 since other values get a little worse results.",
"Note that we could not obtain the model loss in the prediction stage.",
"Hence, the Residual Strategy could be viewed as an upper bound of token selection to some extent when we ignore the correlation and redundancy among the selected tokens.",
"Attention Strategy is adopted by PoWER-BERT (Goyal et al., 2020) and L-Adaptive (Kim and Cho, 2020).",
"It accumulates the attention values from other tokens to a given token.",
"It selects the tokens receiving the greatest attentions, considering them responsible for retaining and disseminating the primary information of the context.",
"As shown in Figure 2, both Attention Strategy and Residual Strategy achieve considerable results, which demonstrates that to select important tokens is feasible for accelerating the inference of PLMs.",
"Besides, the Residual Strategy outperforms the Attention strategies by a margin, especially at the low token remaining proportion ( +31 . 8% F1 on SQuAD 2.0 and +9 . 5% accuracy on IMDB when selecting 10% tokens).",
"It suggests that the accumulated attention values still cannot well reflect tokens' importance in text understanding, which requires further explorations.",
"In this section, we present TR-BERT, which adopts a cascade token reduction to prune the BERT model at token-level granularity dynamically.",
"In a one-step token reduction process, TR-BERT estimates the importance of each token, reserves the important ones, and delivers them to the higher layer.",
"To better select important tokens for text understanding while satisfying various acceleration requirements, we employ the reinforcement learning (RL) method to automatically learn a dynamic token selection strategy.",
"Figure 1 shows the model architecture of TR-BERT.",
"To inherit the high capacity from the PLMs, TR-BERT keeps the same architecture as BERT.",
"Differently, as the layer gets deeper, TR-BERT gradually shortens the sequence length via token reduction modules, aiming to reduce the computational redundancy of unimportant tokens.",
"The token reduction modules are required to measure the importance of tokens and offer an integral selection scheme.",
"Due to the lack of direct supervision, we employ the policy network for training the module, which adopts a stochastic policy and uses a delayed reward to guide the policy learning.",
"In one-step reduction, we perform action sampling for the current sequence.",
"The selected tokens are conveyed to the next Transformer layer for further computation.",
"In contrast, the unselected tokens are terminated with their representation remaining unchanged.",
"After all the actions are decided, we fetch each token's representation from the layer where it terminated, and compute the golden label's likelihood as a reward.",
"To be specific, we introduce state, action, reward, and objective function as follows: State State s t consists of the token representations inherited from the previous layer before the t -th token reduction layer.",
"Action We adopt two alternative actions for each token, { Select , Skip }, where the token can be selected for further computation or be skipped to the final layer.",
"We implement the policy network as a two-layer feed-forward network with GeLU activation (Hendrycks and Gimpel, 2017): ( a t | s t ; ) = ( W 2 ( GeLU ( W 1 H s t + b 1 )) + b 2 ) , (2) where a t denotes the action at state s t for sequence representation H s t = { h 1 , h 2 , ..., h n } at t -th reduction, = { W 1 , W 2 , b 1 , b 2 } are trainable parameters, and ( . ) is sigmoid activation function.",
"For the selected token set { t 1 , t 2 , ..., t n } , where n n , we conduct a Transformer layer operation on their corresponding representations: H (cid:48) = Transformer ([ h t 1 , h t 2 , . . . , h t n ]) .",
"For the selected tokens, their representation H (cid:48) is conveyed to the next layer for further feature extraction and information aggregation.",
"For the other skipped tokens, their representations in the current layer are regarded as their final representations.",
"Reward Aiming to select significant tokens for making a precise decision in the prediction layer, we adopt the likelihood of predicting the golden label as a reward.",
"For example, when classifying the input sequence X , we use the models' predicting probability of the ground-truth label Y to reflect the quality of the token selection.",
"In addition, to encourage the model to delete more redundant tokens for accelerating, we include an additional punitive term by counting the number of selected tokens.",
"Hence, the overall reward R is defined as: R = log Pr( y = Y | X ) (cid:88) t |{ a t = Select }| , (4) where (cid:80) t |{ a t = Select }| denotes the total number of the selected tokens in all token reduction modules, and is a harmonic coefficient to balance two reward terms.",
"where T is the number of states.",
"According to the REINFORCE algorithm (Williams, 1992) and policy gradient method (Sutton et al., 1999), we update network with the policy gradient as below: J ( ) = T (cid:88) t =1 R log ( a t | s t ) .",
"Our policy network is integrated into the original Transformer network, and we train both of them simultaneously.",
"The entire training process involves three steps: (1) Fine-tune the PLM model for downstream tasks with the task-specific objective; (2) Freeze all the parameters except that of the policy network, conduct reinforcement learning (RL), and update the policy network to learn token reduction strategy; (3) Unfreeze all parameters and train the entire network with the task-specific objective and RL objective simultaneously.",
"Due to the large searching space, RL learning is difficult to converge.",
"We adopt imitation learning (Hussein et al., 2017) for warming up the training of the policy network.",
"To be specific, in the RL training, we sample several action sequences via the policy network to compute rewards.",
"And we guide the optimization direction by providing heuristic action sequences sampled by the Residual Strategy during the early training period, which could roughly select the most important tokens.",
"The heuristic action sequence is defined as selecting the top K important tokens and skipping the others, where K is defined as the expected selected number of the current policy network.",
"In our preliminary experiment, both the heuristic action sequence and expected selected number mechanism are beneficial to the stable training.",
"To further improve the performance of our pruned model, we also adopt Knowledge Distillation (KD) (Hinton et al., 2015) to transfer knowledge from the intact original fine-tuned model.",
"For a Transformer layer with a hidden size of d and an input sequence of n tokens, the Self-Attention module consumes O ( n 2 d ) time and memory complexity while the Feed-Forward Network takes O ( nd 2 ) .",
"That is, our token reduction gains near-linear speedup when n is relatively smaller than d .",
"Therefore, when the input sequence gets longer, such as up to 1 , 024 tokens, our method can enjoy a more effective speedup.",
"In the RL training, we compute loss on the pruned model, so the acceleration is still valid for this stage.",
"Since we focus on accelerating BERT inference, we consider the extra training consumption on the pruned model is acceptable.",
"In this section, we first introduce the baseline models and the evaluation datasets.",
"After that, we verify the effectiveness of TR-BERT on eleven NLP benchmarks.",
"Finally, we conduct a detailed analysis and case study on TR-BERT to investigate the selected tokens' characteristics.",
"BERT (Devlin et al., 2019) is a Transformer-based pre-trained model.",
"We use the BERTBASE model 1 , which consists of 12 Transformer layers and supports a maximum sequence length of 512 .",
"BERTL is our implemented BERT, which can support input sequences with up to 1 , 024 tokens.",
"We initialize the parameters of BERTL with that of BERT, where the additional position embedding is initialized with the first 512 ones.",
"After that, we continue to train it on Wikipedia 2 for 22 k steps.",
"DistilBERT (Sanh et al., 2019) is the most popular distilled version of BERT, which leverages the knowledge distillation to learn knowledge from the BERT model.",
"We use the 6 -layer DistilBERT released by Hugging Face 3 .",
"In addition, we use the same method to distill BERT with 3 layers to obtain DistilBERT 3 .",
"DeFormer (Cao et al., 2020) is designed for question answering, which encodes questions and passages separately in lower layers.",
"It precomputes all the passage representation and reuses them to speed up the inference.",
"In our experiments, we do not count DeFormer's pre-computation.",
"PoWER-BERT (Goyal et al., 2020) is mainly designed for text classification, which also decreases the length of a sequence as layer increases.",
"It adopts the Attention Strategy to measure the sig-nificance of each token and always selects tokens with the highest attention.",
"Given a length penalty, PoWER-BERT searchs a fixed length pruning configuration for all examples.",
"DynaBERT (Hou et al., 2020) can not only adjust model's width by varying the number of attention heads, but also provide an adaptive layer depth to satisfy different requirements.",
"For a given speed demand, we report its best performance with all the feasible width and depth combination options.",
"To verify the effectiveness of reducing the sequence length, we evaluate TR-BERT on several tasks with relatively long context, including question answering and text classification.",
"Table 1 shows the context length of these datasets.",
"We adopt seven question-answering datasets, including SQuAD 2.0 (Rajpurkar et al., 2018), NewsQA (Trischler et al., 2017), NaturalQA (Kwiatkowski et al., 2019), RACE (Lai et al., 2017), HotpotQA (Yang et al., 2018), TriviaQA (Joshi et al., 2017) and WikiHop (Welbl et al., 2018).",
"And we also evaluate models on four text classification datasets, including YELP.F (Zhang et al., 2015), IMDB (Maas et al., 2011), 20NewsGroups (20News.) (Lang, 1995), and Hyperpartisan (Hyperp.) (Kiesel et al., 2019).",
"Among them, HotpotQA, TriviaQA and WikiHop possess abundant contexts for reading, while the performance of question answering (QA) models heavily relys on the amount of text they read.",
"To fairly compare BERT and BERTL , we split the context into slices and apply a shared-normalization training objective (Clark and Gardner, 2018) to produce a global answer candidate comparison across different slices for the former two extractive QA datasets.",
"And we average the candidate scores in all slices for WikiHop.",
"Details of all datasets are shown in the Appendix.",
"We adopt a maximum input sequence length of 384 for SQuAD 2.0, 1 , 024 for long-text tasks and 512 for others.",
"We use the Adam optimizer (Kingma and Ba, 2015) to train all models.",
"The detailed training configuration is shown in the Appendix.",
"For the RL training, we sample 8 action sequences each time and average their rewards as the reward baseline.",
"In the second training process which aims to warm up the policy network, we employ 20% imitation learning steps for question answering tasks and 50% steps for text classification tasks.",
"We search the number of token reduction module T [1 , 2 , 3] .",
"And we find the models with T = 2 gets similar quality and speed trade-offs as the models with T = 3 , and both of them perform better than models with T = 1 .",
"Thus we adopt T = 2 for simplification.",
"We denote the pruned models from BERT, BERTL and DistilBERT 6 as TR-BERT 12 , TR-BERTL , TR-BERT 6 , respectively.",
"For BERT and BERTL , we attach the token reduction modules before the second and the sixth layers.",
"For DistilBERT 6 , we insert the token reduction modules before the second and the fourth layers.",
"To avoid the pseudo improvement by pruning padding for TR-BERT, we evaluate all models with input sequences without padding to the maximum length.",
"For each dataset, we report the F1 scores or accuracy (Acc.), and the FLOPs speedup ratio compared to the BERT model.",
"The model's FLOPs are consistent in the various operating environment.",
"Therefore, it is convenient to estimate and compare the models' inference time by FLOPs.",
"The comparison between TR-BERT and the baselines are shown in Table 2 and Figure 3.",
"We adjust the length penalty coefficient of TR-BERT for an intuitional comparison.",
"From the experimental results, we have the following observations: (1) TR-BERT 12 achieves higher performance while using less computation on all span-extraction QA datasets compared to all the baselines.",
"For example, TR-BERT 12 outperforms DynaBERT by 1 .",
"8 F1 with faster speed.",
"TR-BERT 12 even achieves better performance than BERT at low speedup rate, which demonstrates that discarding some redundant information in the top layer helps to find the correct answer.",
"For multiple-choice RACE, TR-BERT 12 achieves better performance than DeFormer while doesn't need to pre-compute the passage representation.",
"(2) TR-BERT 6 performs better than PoWER-BERT by a margin in text classification tasks.",
"It shows that the fixed pruning configuration and the attention-based selection strategy adopted by PoWER-BERT may not be flexible to accelerate inference for various input sequences.",
"In contrast, 1.0x 1.5x 2.0x 2.5x 3.0x 3.5x 4.0x 4.5x 5.0x FLOPs (speedup) 55 60 65 70 75 F 1 BERT SQuAD 2.0 TR-BERT 12 TR-BERT 6 DynaBERTDeFormerDistilBERT 1.0x 1.5x 2.0x 2.5x 3.0x 3.5x 4.0x 4.5x 5.0x \u0000)\u0000/\u00002\u00003\u0000V\u0000\u0003\u0000\u000b\u0000V\u0000S\u0000H\u0000H\u0000G\u0000X\u0000S\u0000\f 54 56 58 60 62 64 66 68 \u0000) \u0000\u0014 BERT \u00001\u0000H\u0000Z\u0000V\u00004\u0000$ \u00007\u00005\u0000\u0010\u0000%\u0000(\u00005\u00007 12 \u00007\u00005\u0000\u0010\u0000%\u0000(\u00005\u00007 6 \u0000'\u0000\\\u0000Q\u0000D\u0000%\u0000(\u00005\u00007\u0000'\u0000H\u0000)\u0000R\u0000U\u0000P\u0000H\u0000U\u0000'\u0000L\u0000V\u0000W\u0000L\u0000O\u0000%\u0000(\u00005\u00007 1.0x 1.5x 2.0x 2.5x 3.0x 3.5x 4.0x 4.5x 5.0x FLOPs (speedup) 68 70 72 74 76 78 F 1 BERT NaturalQA TR-BERT 12 TR-BERT 6 DynaBERTDeFormerDistilBERT 1.0x 1.5x 2.0x 2.5x 3.0x 3.5x 4.0x 4.5x 5.0x \u0000)\u0000/\u00002\u00003\u0000V\u0000\u0003\u0000\u000b\u0000V\u0000S\u0000H\u0000H\u0000G\u0000X\u0000S\u0000\f 68.00 68.25 68.50 68.75 69.00 69.25 69.50 69.75 70.00 \u0000$ \u0000F\u0000F \u0000X \u0000U \u0000D\u0000F \u0000\\ BERT \u0000<\u0000(\u0000/\u00003\u0000\u0011\u0000) \u00007\u00005\u0000\u0010\u0000%\u0000(\u00005\u00007 12 \u00007\u00005\u0000\u0010\u0000%\u0000(\u00005\u00007 6 \u0000'\u0000\\\u0000Q\u0000D\u0000%\u0000(\u00005\u00007\u00003\u0000R\u0000:\u0000(\u00005\u0000\u0010\u0000%\u0000(\u00005\u00007\u0000'\u0000L\u0000V\u0000W\u0000L\u0000O\u0000%\u0000(\u00005\u00007 Figure 3: Quality and efficiency trade-offs for TR-BERT 12 and TR-BERT 6 .",
"our dynamic token selection can automatically determine the proper pruning length and tokens for each example according to the actual situation, which leads to a more effective model acceleration.",
"Overall, TR-BERT retains most of BERT's performance though it omits lots of token interactions in the top layers.",
"It shows that TR-BERT learns a satisfactory token selection strategy through reinforcement learning, and could effectively reduce the redundant computation of tokens that have been extracted enough information in the bottom layers.",
"Since layer-wise pruning and token-wise pruning are compatible, we also explore the incorporation of these two pruning strategies.",
"We apply our dynamic token reduction on the 6 -layer DistilBERT to obtain TR-BERT 6 .",
"The trade-off comparison of Model HotpotQA TriviaQA F1 FLOPs F1 FLOPs BERT 57.33 1.00x 68.75 1.00x BERTL 65.45 0.91x 69.69 0.92x TR-BERTL 65.57 1.56x 70.41 1.20x Model WikiHop Hyperparisan Acc.",
"TR-BERT 12 and TR-BERT 6 is shown in Figure 3, from which we have the following findings: (1) In general, as the speedup ratio increases, the performance of all models decrease, which indicates that retaining more token information usually results in a more potent model.",
"(2) TR-BERT 6 consistently outperforms TR-BERT 12 on all tasks at a high speedup ratio.",
"In this situation, the budget doesn't allow enough tokens to go through the top layers.",
"TR-BERT 6 makes a more elaborate pruning than TR-BERT 12 at bottom layers to obtain a better effectiveness.",
"(3) At low speedup ratio, TR-BERT 12 performs better than TR-BERT 6 on the question answering tasks, but worse on the text classification tasks.",
"In general, a deep Transformer architecture can offer multi-turn feature extraction and information propagation, which can meet the complex reasoning requirements for question answering.",
"In contrast, the result of text classification usually depends on the keywords in the context, for which a shallow model is an affordable solution.",
"To obtain a better trade-off, we can flexibly employ a deep and narrow model for question answering and a shallow and wide model for text classification.",
"With token pruning, TR-BERT is able to process a longer sequence.",
"We apply our dynamic token pruning strategy on BERTL , which can process sequence with up to 1 , 024 tokens, to obtain TR-BERTL , and conduct experiments on four datasets with longer documents, including HotpotQA, TriviaQA, WikiHop and Hyperparisan.",
"Results on long-text tasks are shown in Table 3, from which we have the following observations: (1) BERTL achieves better performance than BERT, especially on HotpotQA and WikiHop, which require the long-range multi-hop reasoning; (2) Compared to the vanilla BERT, TR-BERTL achieves 8 .",
"2% F1 improvement with 1 .",
"56 x speedup on HotpotQA, obtains 1 .",
"7% F1 improvement with 1 .",
"24 x speedup on TriviaQA, gains 4 .",
"65 x speedup on WikiHop and 1 .",
"96 x speedup on Hyperparisan without performance drops.",
"Compared to BERT which can only deal with up to 512 tokens at a time, BERTL considers a longer-range token interaction and obtains a more complete reasoning chain.",
"However, the running time of BERTL also increase as the input sequence's length extends, which poses a challenge to the utilization of longer text.",
"TR-BERTL inherits the broader view from BERTL to get a better performance with a faster inference.",
"Moreover, the inference acceleration effect of TR-BERTL is relatively better than TR-BERT within 512 tokens, which is coincident to the above complexity analysis section.",
"With a longer sequence, TR-BERT can achieve extra speedup , because it significantly saves the time of the Self-Attention module, which demonstrates that TR-BERT can be further applied to process much longer tokens with limited computation.",
"To investigate the characteristics of the selected tokens, we conduct a detailed case study on various datasets.",
"As shown in Table 4, TR-BERT chooses to abandon the function word, such as the, and, with , in the first token reduction module as the first module is placed at the bottom layer of BERT.",
"The second token reduction module is placed at the middle layer of BERT, and we could observe that it is used to retaining task-specific tokens.",
"In the first example about question answering, the second token reduction module maintains the whole question and the question-related tokens from the context for further propagating messages.",
"In the second and third examples about movie review sentimental classification, the second token reduction module chooses to select sentimental words, such as great, excited, disappointed to determine whether the given sequence is positive or negative.",
"Although we train the token reduction module without direct human annotations, TR-BERT can remain the meaningful tokens in the bottom layer and select the higher layer's task-relevant tokens.",
"It demonstrates that the pruned network's ground-truth probability is an effective signal to facilitate the reinforcement learning for token selection.",
"Researchers have made various attempts to accelerate the inference of PLMs, such as quantization (Shen et al., 2020; Zhang et al., 2020a), attention head pruning (Michel et al., 2019; Hou et al., 2020), dimension reduction (Sun et al., 2020; Chen et al., 2020), and layer reduction (Sanh et al., 2019; Sun et al., 2019b; Jiao et al., 2019).",
"In current studies, one of the mainstream methods is to dynamically select the layer number of Transformer layers to make a on-demand lighter model (Fan et al., 2020; Xin et al., 2020; Liu et al., 2020).",
"However, these methods operate at the whole text and they cannot perform pruning operations in a smaller granularity, such as the token-level granularity.",
"To consider the deficiencies of layer-level pruning methods, researchers decide to seek solutions from a more meticulous perspective by developing methods to extend or accelerate the self-attention mechanism of the Transformer.",
"For example, Sparse Trasformer (Child et al., 2019), LongFormer (Beltagy et al., 2020) and Big Bird (Zaheer et al., 2020) employ the sparse attention to allow model to handle long sequences.",
"However, these methods only reduce the CUDA memory but cannot be not faster than the full attention.",
"Besides, researchers also explore the feasibility of reducing the number of involved tokens.",
"For example, Funnel-Transformer (Dai et al., 2020) reduces the sequence length with pooling for less computation, and finally up-samples it to the full-length representation.",
"Universal Transformer (Dehghani et al., 2019) builds a self-attentive recurrent sequence model, where each token uses a dynamic halting layer.",
"And DynSAN (Zhuang and Wang, 2019) applies a gate mechanism to measure the importance of tokens for selection.",
"Spurred by these attempts and positive results, we introduce TR-BERT in this study, which can creatively prune the network at the token level.",
"To be specific, our work aims to accelerate the Transformer by deleting tokens gradually as the layer gets deeper.",
"Compared with these models, TR-BERT is easy to adapt to the current PLMs models without a significant amount of pretraining and is flexible to adjust the model speed according to different performance requirements.",
"The main idea of TR-BERT is to select essential elements and infuse more computation on them, which is widely adopted in various NLP tasks.",
"ID-LSTM (Zhang et al., 2018) selects important and task-relevant words to build sentence representation for text classification.",
"SR-MRS (Nie et al., 2019) retrieves the question-related sentences to reduce the size of reading materials for question answering.",
"TR-BERT can be viewed as a unified framework on the Transformer for the important element selection, which can be easy to be applied in wide-range tasks.",
"In this paper, we propose a novel method for accelerating BERT inference, called TR-BERT, which prunes BERT at token-level granularity.",
"Specifi-cally, TR-BERT utilizes reinforcement learning to learn a token selection policy, which is able to select general meaningful tokens in the bottom layers and select task-relevant tokens in the top layers.",
"Experiments on eleven NLP tasks demonstrate the effectiveness of TR-BERT as it accelerates BERT inference by 2 5 times for various performance demand.",
"Besides, TR-BERT achieves a better quality and speed trade-off on long-text tasks, which shows its potential to process large amounts of information in the real-world applications.",
"In the future, we would like to attempting to apply TR-BERT in the pre-training process of PLMs.",
"Through the automatically learned token reduction module, it is possible to reveal how BERT stores syntactic and semantic information in various tokens and different layers.",
"And it's also worth speeding up the time-consuming pre-training process.",
"This research is mainly supported by Science & Tech Innovation 2030 Major Project \"New Generation AI\" (Grant no. 2020AAA0106500) as well as supported in part by a grant from the Institute for Guo Qiang, Tsinghua University."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other"
] |
[
"Masked language model and autoregressive language model are two types of language models.",
"While pretrained masked language models such as BERT (Devlin et al., 2019) overwhelm the line of natural language understanding (NLU) tasks, autoregressive language models such as GPT (Radford et al., 2018) are especially capable in natural language generation (NLG).",
"In this paper, we propose a probabilistic masking scheme for the masked language model, which we call probabilistically masked language model (PMLM).",
"We implement a specific PMLM with a uniform prior distribution on the masking ratio named u-PMLM.",
"We prove that u-PMLM is equivalent to an autoregressive permutated language model.",
"One main advantage of the model is that it supports text generation in arbitrary order with surprisingly good quality, which could potentially enable new applications over traditional unidirectional generation.",
"Besides, the pretrained u-PMLM also outperforms BERT on a set of downstream NLU tasks.",
"Large-scale pretrained language models (Raffel et al., 2019; Wang et al., 2019; Lan et al., 2019; Liu et al., 2019; Jiao et al., 2019) have drawn lots of research attention as these models have brought significant improvements to many NLU and NLG tasks.",
"As a major category of pretrained language models, masked language model (MLM) (Devlin et al., 2019; Joshi et al., 2019) is trained using a de-noising autoencoding objective.",
"In a typical MLM, some tokens in a sentence are replaced by a special token [MASK] .",
"The training objective is to predict the original tokens that are masked in the sentence.",
"As the first large-scale pretrained masked language model, BERT chooses to mask 15% of the tokens in sentences randomly.",
"Following BERT, various The wolf has an extraordinary speed , and it can often jump from a spot quick enough to escape a spot already occupied by an adult wolf .",
"Unlike the brown and black bear , where it is easily distracted by wolves , the gray fox does not run over a wolf , and is often driven mad .",
"Having jumps with high speed that breaks the wolf ' s legs before it is run over , a grey wolf could defend itself against an adult of other species as the best predator at any time .",
"The black bear may kill packs of four lazy , though the gray fox can inflict significant wounds on a dog .",
"While the pretrained masked language models achieve state-of-the-art performances in a line of downstream NLU tasks, researchers pay more attention to autoregressive language model when it comes to text generation.",
"Unlike predicting the masked tokens, the autoregressive language model learns a sequential generative process of text sequences.",
"Hence it naturally performs better for natural language generation.",
"For example, GPT-2 (Radford et al., 2019) as well as Transformer-XL (Dai et al., 2019), is able to generate fluent and coherent paragraphs of text that highly resembles human writings.",
"In this paper, we propose a probabilistically masked language model (PMLM) to bridge the gap between masked and autoregressive language mod-Predictions : Hidden States: Transformer Layers: Inputs: Figure 2: The structures of autoregressive language model (left) and masked language model (right).",
"els.",
"The basic idea behind the connection of two categories of models is similar to MADE (Germain et al., 2015).",
"PMLM is a masked language model with a probabilistic masking scheme, which de-fines the way sequences are masked by following a probabilistic distribution.",
"While the existing work proposes masking strategies aiming at improving the NLU abilities, PMLM addresses the generation capability in particular.",
"Besides, as a masked language model, PMLM maintains its strong ability in natural language understanding.",
"In addition to the traditional unidirectional (e.g., left-to-right) generation, a unique ability for PMLM is to autoregressively generate sequences in arbitrary order , and the generated sequences are still of high quality.",
"In contrast to traditional left-to-right generation, arbitrarily ordered text generation has two main characteristics.",
"First, the next token to be predicted could be in any position that is masked.",
"Second, the next token to be predicted depends on all the previous observed/generated tokens.",
"Arbitrarily ordered generation enables more interesting applications than unidirectional generation.",
"For example, Figure 1 shows an example of cloze test , where the prompted text The quick brown fox jumps over the lazy dog is distributed across a paragraph with a predefined length, and the task is to predict all the surrounding words and complete the paragraph.",
"This is actually very challenging for conventional generation models since when predicting each word, the fluency and coherence of text are hard to be guaranteed given the contextual constraints on both sides.",
"More applications may include acrostic poetry generation, news generation based on given facts, machine translation with lexical constraints, etc.",
"We employ a simple uniform distribution of the masking ratio and name the model as u-PMLM.",
"We prove that u-PMLM actually learns an autoregressive language model on random permutations of training sequences.",
"The experiments show that the quality of text generated by u-PMLM in arbitrary order is as good as that generated by GPT in sequential order.",
"Besides, u-PMLM outperforms BERT significantly on the GLUE benchmark for natural language understanding.",
"Transformer (Vaswani et al., 2017) is the backbone model for many pretrained language models.",
"Transformer is composed of a stack of multi-head self-attention and token-wise feed-forward layers.",
"At each layer, the hidden state of each token is updated based on the historical hidden states computed in the lower layer.",
"Let X = { x 1 , x 2 , ..., x N } denote the sequence of tokens, where N is the length of the sequence.",
"Fed with X as input, the final output of the Transformer, denoted as H = { h 1 , h 2 , ..., h N } , captures the contextual representation of the tokens in the sequence.",
"In autoregressive language model, the sequence generation process is modeled as a Markov chain, where the token to be predicted depends on all the previous tokens.",
"The training objective can be formulated as: L alm ( X ) = N (cid:88) n =1 log p ( x n | x 1 , ..., x n 1 ; ) , (1) where denotes the parameters of the model.",
"Figure",
"2(a) shows the diagram of autoregressive LM.",
"In the model, the n -th token can only attend on the tokens at positions less than n .",
"The autoregressive model is usually trained in the way of teacher-forcing , i.e., always using the ground-truth tokens as inputs and outputs in training.",
"Pretrained autoregressive models such as GPT (Radford et al., 2018, 2019) are especially capable of generating fluent and coherent text that highly resembles human-written text.",
"However, unidirectional attention brings two limitations.",
"Firstly, autoregressive model as in Figure",
"2(a) can only generate text from left to right; Secondly, unidirectional attention blocks the contextual information from the right side of the current token, affecting the completeness of the contextual representation.",
"To obtain complete representations of the tokens in a sequence, researchers resort to bidirectional attention as shown in Figure",
"2(b).",
"Specifically, the training instances are created by replacing a subset of tokens in the input X with a special token [MASK] , and the objective is to predict the masked tokens.",
"Such model is called masked language model (MLM).",
"Let = { 1 , 2 , ..., K } denote the indexes of the masked tokens in the sentence X , where K is the number of masked tokens.",
"Let X denote the set of masked tokens in X , and X denote the set of observed (unmasked) tokens.",
"The objective of MLM is: L mlm ( X | X ) = 1 KK (cid:88) k =1 log p ( x k | X ; ) .",
"The assumption in Equation 2 is that the probability of predicting a masked token is independent of each other.",
"BERT (Devlin et al., 2019) is a typical masked language model.",
"Due to the incorporation of bidirectional attention, masked language model can capture the contextual information on both sides.",
"Consequently, it usually achieves better performances when finetuned in downstream NLU tasks than the conventional autoregressive models.",
"However, the masking scheme and the independence assumption also affect its performance on text generation compared to autoregressive models (Wang and Cho, 2019).",
"Different masking schemes have been proposed for pretraining the masked language model.",
"The most straightforward masking scheme is to randomly mask tokens in sentences in a fixed ratio, e.g., 15% in BERT.",
"Following BERT, various models have proposed modifying the masking scheme to improve its NLU capability.",
"ERNIE (Sun et al., 2019) proposes the entity-level masking and phrase-level masking, where the words composing an entity or phrase are masked as a whole.",
"SpanBERT (Joshi et al., 2019) proposes to mask a continuous random span of text rather than random tokens.",
"These masking strategies have shown to be effective for certain classes of NLU tasks.",
"In contrast to the existing work, we propose a probabilistic masking scheme that tries to improve the text generation ability of the masked language model.",
"Probabilistically masked language mode (PMLM) is a natural generalization of the MLM with a probabilistic masking ratio.",
"It assumes that the masking ratio is drawn from a probabilistic distribution.",
"Therefore, each training instance is associated with a different masking ratio sampled from the given distribution.",
"To give a formal definition of the PMLM, we need to elaborate the training objective defined in Equation",
"2. Let M = { m 1 , m 2 , ..., m N } denote a sequence of binary variables indicating which token in X = { x 1 , x 2 , ..., x N } is masked.",
"m n = 1 indicates x n is masked, and m n = 0 otherwise.",
"Noted that since = { 1 , 2 , ..., K } denotes the indexes of masked tokens, m k = 1 holds for any k .",
"Considering M as latent variables, the expected log-likelihood function of observing X conditioning on X over all possible M is: L pmlm ( X | X ; ) = EM | X [log p ( X | X )] = (cid:88) M [log p ( X | X ; )] p ( M | X ) (3) The term log p ( X | X ; ) is identical to the objective function in Equation 2 for a deterministic mask M .",
"In the vanilla MLM, it is assumed that M are i.i.d. for each position and independent to X , namely, p ( M | X ) = p ( M ) = r K (1 r ) N K , (4) where r is the masking ratio.",
"PMLM, however, we assume r is a random variable drawn from a prior distribution p ( r ) .",
"Therefore, the distribution p ( M ) becomes: p ( M ) = M = (cid:90) p ( M | r ) p ( r ) dr = (cid:90) r K (1 r ) N K p ( r ) dr (5) With above derivations, we can formulate the expected log-likelihood function of PMLM as: L pmlm ( X | X ; ) = (cid:88) M [log p ( X | X ; )] M = (cid:88) M MKK (cid:88) k =1 log p ( x k | X ; ) (6) Equation 6 is optimized by sampling M according to the prior distribution over the training set.",
"By controlling the prior distribution, we can cover a wider range of sequence prediction tasks in training, which can potentially enhance the representation power of the pretrained model.",
"For instance, in the left-to-right autoregressive model, the masking ratio is uniformly distributed across different positions, which makes the model learn to generate the next token given the previous context of different lengths.",
"This inspires us to try the uniform prior on masking ratio for PMLM.",
"u-PMLM is an implementation of PMLM with continuous uniform distribution on the masking ratio: (cid:40)",
"Like most pretrained language models, the backbone",
"backbone model for u-PMLM is Transformer as well.",
"We prove that u-PMLM is equivalent to the autoregressive permutated language model (APLM) by recombination of the factorized log-likelihood function, which is basically the autoregressive language model trained on all possible permutations of the training instances: L aplm ( X ) = E (cid:34) N (cid:88) t =1 log p ( x t | x 1 , . . . , x t 1 ; ) (cid:35) , (8) where denote random permutations.",
"The detail derivation is included in the Appendix A. Ordinary autoregressive model can be regarded as a special case of the permutated model.",
"Therefore, we can expect that the u-PMLM is able to work as the autoregressive model in sequential prediction.",
"Moreover, since it can handle any permutation of the sequence, it should have the ability to generate sequences in arbitrary word order.",
"Algorithm 1 depicts the algorithm to autoregressively generate a sequence in random order with u-PMLM.",
"The process starts with a sequence containing full of the special token [MASK] .",
"Then the model iteratively replaces a [MASK] token in a random position with a predicted token, until all the tokens are predicted.",
"An example showing the states of the sequence during the generation process is presented in Table",
"1. The generation order could be arbitrary, which is much more flexible than the traditional unidirectional generation.",
"On the other hand, our model can not automatically determine a best generation order, which could be a interesting problem for future research.",
"Positional Embedding Most pretrained masked language models have employed absolute positional embedding to incorporate the positional information of the input tokens.",
"We train two variants for u-PMLM, one with absolute positional embedding and the other with relative positional embedding (Shaw et al., 2018).",
"The experiments show that NLG ability is not sensitive to relative or absolute positional embedding, while NLU ability is improved with relative positional embeddings.",
"Transformer, they are slightly different at inference time.",
"For u-PMLM, since we use the bidirectional Transformer, each time a token is generated, the hidden states of all the tokens need an update.",
"For GPT, since the unidirectional Transformer is employed, the latter generated token does not affect the hidden states of previous tokens.",
"This can result in different computational complexity.",
"However, since a typical Graphics Processing Unit (GPU) computes matrices in parallel, the actual difference in inference time is not that significant.",
"We report the comparison of time consumption in the experimental section.",
"Model Size : The size of our pretrained u-PMLM is identical to BERT-base, which contains 12 hidden layers and 12 attention heads.",
"The hidden size is 768, and the intermediate size is 3072.",
"The dropout rate is set to 0.1.",
"Training Data We employ the commonly adopted training data, namely BookCorpus and Wikipedia to train our u-PMLM model.",
"We obtain 4.1 Gb for the BookCorpus dataset and 11.9 GB for the Wikipedia dataset after data cleaning.",
"We further employ the same vocabulary and tokenization techniques as BERT for converting the text sequences to ID sequences.",
"The vocabulary contains 28,996 cased tokens.",
"We set the maximum sequence length to 128.",
"Training Platform We train u-PMLM using Horovod framework with 56 NVIDIA V100 (32GB) GPUs.",
"To speed up the training process, we employ mix-precision training technique.",
"The batch size is set to 150 for every single GPU, thus the total batch size is 8400.",
"The optimizer is Lamb Optimizer (You et al., 2019), which is more suitable for large batch size than Adam Optimizer.",
"We train u-PMLM for 600K steps, taking roughly 135 hours in total.",
"We evaluate both the natural language generation ability and natural language understanding ability of u-PMLM trained in the settings described in Section 3.4.",
"We train the BERT model and GPT model as the comparative models in the experiments.",
"BERT and GPT are representative models for masked language model and autoregressive language model, respectively.",
"To make fair comparisons, we train both models from scratch using the same settings described in Section 3.4, including the same training platform, model size, training data, vocabulary, and training steps.",
"Note that since BERT adopts absolute positional embedding, the variant for u-PMLM with absolute positional embedding is trained for a fair comparison with BERT.",
"Throughout the experimental section, u-PMLM-R and u-PMLM-A are short for the variants with relative and absolute positional embeddings, respectively.",
"Perplexity Evaluation Perplexity (PPL) measures the quality of a language model, where the task is to predict the next word or character in a document.",
"Typically, the predicting order follows Model PPL(sequential) PPL(random) BERT 23.12 25.54 GPT 21.23 N/A u-PMLM-R 19.58 21.51 u-PMLM-A 19.32 21.30 Table 2: Perplexity on Wikitext103.",
"the generation order.",
"However, as bidirectional u-PMLM and BERT supports text generation in arbitrary order.",
"Hence we also evaluate the perplexity when predicting words in arbitrary order.",
"We evaluate the perplexity using two datasets for evaluating perplexity.",
"The first dataset, Wikitext103, is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia.",
"The second dataset, One-Billion Words, consists of 829 million tokens derived from a news-commentary site.",
"Both datasets are widely adopted for evaluating language models.",
"However, there are significant differences between these two datasets in terms of the length of sequences.",
"The Wikitext103 dataset is more similar to the pretraining datasets, containing long articles.",
"On the other hand, the One-Billion Words dataset contains only single sentences, roughly half of which contain less than 24 tokens.",
"We have ensured that all the three models have the same context length, the same vocabulary, as well as the same tokenization method, which would affect the perplexity values.",
"For Wikitext103 dataset, the context length is set to 128, and each context containing multiple coherent sentences.",
"For the One-Billion Words dataset, context length is set to 50.",
"Short sentences are appended with [PAD] to reach length 50.",
"Actually, the context for nearly all the sentences is shorter than 50.",
"Both datasets provide training and test sets.",
"We first finetune the model using the training set before evaluating perplexity on the test set.",
"For each model, the algorithm for the finetune phase is the same as that for the pretraining phase.",
"The evaluation results of perplexity are shown in Table 2 and Table",
"3. Sequential refers to the traditional left-to-right text generation, while for random, each sentence in the test set is assigned a random generation order.",
"Smaller PPL indicates better language model performance.",
"We first investigate the performance on Wikitext103 dataset.",
"We observe that the PPL for u-PMLM is comparable to GPT on Wikitext103 dataset, indicating that the language model learned by u-PMLM is as good as GPT when the context length is suffi-ciently long.",
"In such case, the text generated by u-PMLM is as good as GPT.",
"Moreover, the PPL of u-PMLM for randomly ordered language model is comparable to the left-to-right generation, which implies that u-PMLM has a strong ability for arbitrarily ordered generation.",
"Besides, the results show that there are few differences between relative positional embedding and absolute positional embedding for u-PMLM.",
"On the other hand, although BERT supports generation in arbitrary word order as well, the PPL for BERT is significantly worse than our proposed u-PMLM for both sequential and random settings, demonstrating the effectiveness of the proposed probabilistic masking scheme.",
"We show more cases of text generation in random order for u-PMLM-A and BERT in Appendix B. However, for PPL on One-Billion Words, the performances of u-PMLM and BERT are not satisfactory in comparison with GPT.",
"Generally, PPL for all these models increases on One-Billion Words dataset as the context length becomes much smaller, which also reflects PPL's relationship to context length.",
"The reason might be the large portions of [PAD] in the One-Billion Words dataset, i.e., more than 50% of the context for nearly 50% of the training instances are filled by [PAD] .",
"We suspect that the [PAD] s affect the prediction process for bidirectional models.",
"On the other hand, unidirectional models such as GPT naturally ignore the effect of [PAD] tokens in the tail of context.",
"The results imply that u-PMLM could be further improved in the future to be more robust.",
"Latency As analyzed in Section 4, the time complexity for generation for masked language model is N times of autoregressive language model when computing the hidden states in each Transformer layer.",
"However, when employed for text generation on GPU, the difference might be less significant.",
"We test the latency for generating 100 128-length sentences for GPT and u-PMLM respectively.",
"The computational platform is NVIDIA V100 GPU.",
"Tom is a cat and Jerry is a mouse .",
"It ' s very sad ! .",
"The writers had wanted Tom to have something big to tell it . . . and a fun place to get excited .",
"The writers believed that the little animal and the little black dog at the end of the episode would have attracted more attention from viewers , but it never took place .",
"Tom ' s first television role was that of the boy scout Mr . Krabs in the 1978 NBC Western comedy pilot , The Search for Mr .",
"Krabs .",
"4. The results show that u-PMLM costs roughly 20.1% more time than GPT for generating sentences, which is much less than the theoretical time complexity difference.",
"Comparison With GPT for Generation In the introduction section, we have shown an example showing the application of arbitrarily ordered text generation, where the tokens in the input sentences are distributed across the generated sentences.",
"Indeed, the major difference with GPT is that the input text could be inserted anywhere in the generated text, which makes the generation process more controllable.",
"Meanwhile, the output text contains certain predefined tokens.",
"Figure 3 and Figure 4 shows the generated paragraphs of GPT and u-PMLM, respectively.",
"For GPT, the input text can only be placed in the beginning and the generation process become uncontrollable, resulting in generating sentences with topic drift.",
"In contrast, u-PMLM allows manually placing anchor sentences in the middle or end of the generated text to guide the topic of the generated text.",
"As shown in Figure 4, we place Tom is a cat and Jerry is a mouse . and Tom and Jerry become good friends in the end . at the beginning and end of the paragraph.",
"The middle parts are generated by u-PMLM from left-to-right.",
"Such generation method allows us to better retain the topic of the generated content.",
"Tom is a cat and Jerry is a mouse .",
"However , the two have a common .",
"The first part is a joke about Jerry and Tom fighting in the middle of the episode .",
"The two get on the run from the restaurant , and Tom ' s mother is shocked that they would have to do so .",
"After a few minutes , Jerry arrives and decides to have a fight .",
"The two go to the casino , where Jerry tries to fight them back by using a splint of grease and a bucket of wine in the bar .",
"They reunite at a restaurant dance , and Tom and Jerry become good friends in the end .",
"Two widely adopted tasks, GLUE (Wang et al., 2018) and SQUAD 2.0 (Rajpurkar et al., 2018), are employed for evaluating u-PMLM.",
"We have ensured that the evaluation for u-PMLM is influenced by as less model-irrelevant factors as possibles.",
"For example, we do not tune the hyper-parameters and just follow the settings of BERT, including warming-up steps, learning rate, etc.",
"In addition, since BERT employs absolute positional embed-dings, the variant with absolute positional em-beddings, u-PMLM-A, is intentionally trained for fairly evaluating the probabilistic masking scheme.",
"The results are shown in Table 5 and Table 6.",
"u-PMLM-A general performs better than BERT, demonstrating that the probabilistic masking scheme is more effective than the fixed masking scheme.",
"The reason could be that the probabilistic masking scheme covers more a wider range of masking patterns, which benefits pretraining for a masked language model.",
"Moreover, u-PMLM-R performs better than u-PMLM-A consistently.",
"The only difference between these two models is the way to handle positional embedding.",
"Relative positional embedding emphasizes more on the relative positions between two tokens, which could be a better option to capture contextual representation.",
"Recall that relative and absolute positional embedding do not make many differences regarding generation ability if the dataset is proper.",
"Hence we conclude u-PMLM-R is a better model than u-PMLM-A considering both NLU and NLG tasks.",
"In addition, u-PMLM-R*, finetuned with a commonly used technique by sharing data from multiple tasks, is the state-of-the-art base model (with 110M parameters) trained on the BookCorpus and Wikipedia datasets on GLUE leaderboard on the date of paper submission.",
"1 Comparison with XLNet We also compare our proposed model with XLNet-base, which adopts relative positional embedding.",
"As will be discussed in Section 5, XLNet is the most relevant model to u-PMLM.",
"We are not able to train an XLNet using the same settings except that we make sure both u-PMLM-R and XLNet-base are of the same model size and are both trained using the same datasets.",
"The comparison results shown in Table 7 demonstrate that the performance of our proposed u-PMLM-R is comparable to XLNet.",
"Conventionally, text is commonly generated autoregressively in the left-to-right direction.",
"Recently, some research works have proposed several models for non-autoregressive text generation (Welleck et al., 2019; Gu et al., 2019).",
"Stern et al. (2019) proposes insertion Transformer, where text are generated in an iterative and partially autoregressive manner based on insertion operations.",
"Ma et al. (2019) design a latent variable based method to generate all the tokens in one pass.",
"Ghazvinine-1 http://gluebenchmark.com/leaderboard/ jad et al. (2019) and Wang and Cho (2019) employ masked language model for refinement-based non-autoregressive text generation, when a subset of tokens in a sequence are refined iteratively.",
"Later, Mansimov et al. (2019) propose a generalized framework of sequence generation accommodating autoregressive, semi-autoregressive, and refinement-based non-autoregressive model.",
"Strictly speaking, our proposed arbitrarily ordered autoregressive text generation is a special case of this generalized framework.",
"We are the first work to address such kind of text generation, which enables a lot of new applications over tradition text generation.",
"UNILM (Dong et al., 2019) and MASS (Song et al., 2019) are another two works that combine masked language model and autoregressive language model.",
"However, UNILM only combines the training objective of GPT and BERT.",
"MASS employs mask mechanism to train sequence to sequence language model.",
"Both models do not address arbitrarily ordered text generation.",
"XLNet (Yang et al., 2019) is the most relevant pretrained language model to u-PMLM.",
"Both of them can be treated as an autoregressive permutated language model.",
"However, XLNet is trained by permutating only a small fraction of the sequences, which does not fully address the generation problem.",
"Though, we suppose that the training method for XLNet is feasible to train a model for arbitrarily ordered text generation as well.",
"The main difference between these two models is that XLNet employs unidirectional Transformer, while u-PMLM is based on bidirectional Transformer.",
"Regarding the training algorithm, XLNet shuffles the attention matrix and introduce two-stream self-attention, which is a bit complex and memory consuming.",
"On the other hand, PMLM takes the simple training objective of masked language model and approximates permutated language model.",
"We have proposed a probabilistically masked language model for autoregressive generation in arbitrary word order.",
"The experiments show that the text generated in arbitrary order has comparable quality with GPT.",
"Besides, the proposed probabilistic masking scheme also improves the NLU capability of a masked language model."
] | [
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"objective",
"abstain",
"abstain"
] |
[
"Recent years have seen a flourishing of neural keyphrase generation (KPG) works, including the release of several large-scale datasets and a host of new models to tackle them.",
"Model performance on KPG tasks has increased significantly with evolving deep learning research.",
"However, there lacks a comprehensive comparison among different model designs, and a thorough investigation on related factors that may affect a KPG system's generalization performance.",
"In this empirical study, we aim to fill this gap by providing extensive experimental results and analyzing the most crucial factors impacting the generalizability of KPG models.",
"We hope this study can help clarify some of the uncertainties surrounding the KPG task and facilitate future research on this topic.",
"Keyphrases are phrases that summarize and highlight important information in a piece of text.",
"Keyphrase generation (KPG) is the task of automatically predicting such keyphrases given the source text.",
"The task can be (and has often been) easily misunderstood and trivialized as yet another natural language generation task like summarization and translation, failing to recognize one key aspect that distinguishes KPG: the multiplicity of generation targets; for each input sequence, a KPG system is expected to output multiple keyphrases, each a mini-sequence of multiple word tokens.",
"Despite this unique nature, KPG has been essentially brute-forced into the sequence-to-sequence (Seq2Seq) (Sutskever et al., 2014) framework in the existing literature (Meng et al., 2017; Chen et al., 2018; Ye and Wang, 2018; Chen et al., 2019b; Yuan et al., 2020; Chan et al., 2019; Zhao and Zhang, 2019; Chen et al.,",
"2019a).The community has approached the unique challenges with much ingenuity in problem formulation, model design, and evaluation.",
"For example, multiple target phrases have been reformulated by either splitting into one phrase per data point or joining into a single sequence with delimiters (Figure 1), both allowing straightforward applications of existing neural techniques such as Seq2Seq.",
"In accordance with the tremendous success and demonstrated effectiveness of neural approaches, steady progress has been made in the past few years at least empirically across various domains, including sub-areas where it was previously shown to be rather difficult (e.g., in generating keyphrases that are not present in the source text).",
"Meanwhile, with the myriad of KPG's unique challenges comes an ever-growing collection of studies that, albeit novel and practical, may quickly proliferate and overwhelm.",
"We are therefore motivated to present this study as to the best of our knowledge the first systematic investigation on such challenges as well as the effect of interplay among their solutions.",
"We hope this study can serve as a practical guide to help researchers to gain a more holistic view on the task, and to profit from the empirical results of our investigations on a variety of topics in KPG including model design, evaluation, and hyper-parameter selection.",
"The rest of the paper is organized as follows.",
"We first enumerate specific challenges in KPG due to the multiplicity of its target, and describe general setups for the experiments.",
"We subsequently present experimental results and discussions to answer three main questions: 1. How well do KPG models generalize to various testing distributions?",
"2. Does the order of target keyphrases matter while training One2Seq ?",
"3. Are larger training data helpful?",
"How to better make use of them?",
"as summarization and translation.",
"In this section, we start from providing background knowledge of the KPG problem setup.",
"Then we enumerate the unique aspects in KPG model designing and training that we focus on in this work.",
"Problem Definition Formally, the task of keyphrase generation (KPG) is to generate a set of keyphrases { p 1 , . . . , p n } given a source text t (a sequence of words).",
"Semantically, these phrases summarize and highlight important information contained in t , while syntactically, each keyphrase may consist of multiple words.",
"A keyphrase is defined as present if it is a sub-string of the source text, or as absent otherwise.",
"Training Paradigms To tackle the unique challenge of generating multiple targets, existing neural KPG approaches can be categorized under one of two training paradigms: One2One (Meng et al., 2017) or One2Seq (Yuan et al., 2020), both based on the Seq2Seq framework.",
"Their main difference lies in how target keyphrase multiplicity is handled in constructing data points (Figure 1).",
"Specifically, with multiple target phrases { p 1 , . . . , p n } , One2One takes one phrase at a time and pairs it with the source text t to form n data points ( t, p i ) i =1: n .",
"During training, a model learns a one-to-many mapping from t to p i 's, i.e., the same source string usually has multiple corresponding target strings.",
"In contrast, One2Seq concatenates all ground-truth keyphrases p i into a single string: P = <bos> p 1 <sep> <sep> p n <eos> (i.e., prefixed with <bos> , joint with <sep> , and suf-fixed with <eos> ), thus forming a single data point ( t, P ) .",
"A system is then trained to predict the concatenated sequence P given t .",
"By default, we construct P follow the ordering strategy proposed in (Yuan et al., 2020).",
"Specifically, we sort present phrases by their first occurrences in source text, and append absent keyphrases at the end.",
"This ordering is denoted as PRES-ABS in 4.",
"Architecture In this paper, we adopt the architecture used in both Meng et al. (2017) and Yuan et al. (2020), using RNN to denote it.",
"RNN is a GRU-based Seq2Seq model (Cho et al., 2014) with a copy mechanism (Gu et al., 2016) and a coverage mechanism (See et al., 2017).",
"We also consider a more recent architecture, Transformer (Vaswani et al., 2017), which is widely used in encoder-decoder language generation literature (Gehrmann et al., 2018).",
"We replace both the encoder GRU and decoder GRU in RNN by Transformer blocks, and denote this architecture variant as TRANS .",
"Both the RNN and TRANS models can be trained with either the One2One or One2Seq paradigm.",
"In recent years, a host of auxiliary designs and mechanisms have been proposed and developed based on either One2One or One2Seq (see 6).",
"In this study, however, we focus only on the vanilla version of them and we show that given a set of carefully chosen architectures and training strategies, base models can achieve comparable, if not better performance than state-of-the-art methods.",
"We assume that KPG systems derived from either One2One or One2Seq model would be affected by these factors of model designing in similar ways.",
"Decoding Strategies KPG is distinct from other NLG tasks since it expects a set of multi-word phrases (rather than a single sequence) as model predictions.",
"Depending on the preference of po-Dataset Present ( F 1 @ O ) Present ( F 1 @10 ) Absent ( R@50 ) One2One One2Seq One2One One2Seq One2One One2Seq RNN TRANS RNN TRANS RNN TRANS RNN TRANS RNN TRANS RNN TRANS D 0 KP20 K 35.3 37.4 31.2 36.2 27.9 28.9 26.1 29.0 13.1 22.1 3.2 15.0 KRAPIVIN 35.5 33.0 33.5 36.4 27.0 26.4 26.9 28.1 13.7 23.8 3.3 16.6 D 0 Average 35.4 35.2 32.3 36.3 27.4 27.7 26.5 28.5 13.4 23.0 3.2 15.8 D 1 INSPEC 33.7 32.6 38.8 36.9 32.5 30.8 38.7 36.6 8.2 9.2 3.7 6.7 NUS 43.4 41.1 39.2 42.3 35.9 36.1 36.6 37.3 11.2 18.9 2.9 12.5 SEMEVAL 35.2 35.1 36.2 34.8 34.6 33.0 35.0 34.2 6.1 18.9 1.7 12.5 D 1 Average 37.4 36.3 38.1 38.0 34.4 33.3 36.7 36.0 8.5 12.7 2.8 9.2 D 2 DUC 13.4 7.8 15.0 11.0 13.7 8.4 16.0 11.4 0.0 0.2 0.0 0.0 All Average 32.8 31.2 32.3 32.9 28.6 27.3 29.9 29.4 8.7 14.0 2.5 9.8 Table 1: Testing scores across different model architectures, training paradigms, and datasets.",
"tential downstream tasks, a KPG system can utilize different decoding strategies.",
"For applications that favor high recall (e.g., generating indexing terms for retrieval systems), a common practice is to utilize beam search and take predictions from all beams 1 .",
"This is applicable in both One2One and One2Seq -based models to proliferate the number of predicted phrases at inference time.",
"In this work, we use a beam width of 200 and 50 for One2One and One2Seq , respectively.",
"On the contrary, some other applications favor high precision and small number of predictions (e.g., KG construction), a One2Seq -based model is capable of decoding greedily, thanks to its nature of generating multiple keyphrases in a sequential manner.",
"As an example, we illustrate the two decoding strategies in Figure 1. Specifically, a One2One model typically collects output keyphrases from all beams and use the top k phrases as the model output ( k = 5 in the example).",
"In One2Seq , either beam search or greedy decoding can be applied.",
"For beam search, we use both the order of phrases within a beam and the rankings of beams to rank the outputs.",
"In the shown example, top 5 beam search outputs are obtained from the 2 beams with highest rankings.",
"As for greedy decoding, the decoder uses a beam size of 1, and takes all phrases from the single beam as outputs.",
"In this way, the One2Seq model can determine the number of phrases to output by itself conditioned on t .",
"Evaluation Due to the multiplicity of targets in KPG task, the evaluation protocols are distinct from typical NLG tasks.",
"A spectrum of evaluation metrics have been used to evaluate KPG systems, including metrics that truncate model outputs at a fixed number such as F 1 @5 and F 1 @10 (Meng et al., 2017); metrics that evaluate a model's ability of generating variant number of phrases such as 1 This is in contrast to only taking the single top beam as in typical NLG tasks.",
"F 1 @ O and F 1 @ M (Yuan et al., 2020); metrics that evaluate absent keyphrases such as Recall@50 ( R@50 ).",
"Detailed definitions of the metrics are provided in Appendix A. Due to space limit, we mainly discuss F 1 @ O , F 1 @10 and R@50 in the main content, complete results with all common metrics are included in Appendix E. We save model checkpoints for every 5,000 training steps and report test performance using checkpoints that produce the best F 1 @ O or R@50 on the KP20 K validation set.",
"Datasets A collection of datasets in the domain of scientific publication ( KP20 K , INSPEC , KRAPIVIN , NUS , and SEMEVAL ) and news articles ( DUC ) have been widely used to evaluate KPG task.",
"Following previous work, we train models using the training set of KP20 K since its size is sufficient to support the training of deep neural networks.",
"Evaluation is performed on KP20 K 's test set as well as all other datasets without fine-tuning.",
"Details of the datasets are shown in Appendix B. 3 Generalizability In this section, we show and analyze the generalization performance of KPG systems from 2 dimensions: model architecture and training paradigm.",
"Specifically, we compare the two model architectures (i.e., RNN and TRANS ) as described in 2.",
"For each model architecture, we train the KPG model using either of the training paradigms (i.e., One2One or One2Seq ) also as described in 2.",
"To better understand model variants' generalization properties, we categorize the 6 testing sets into 3 classes according to their distribution similarity with the training data ( KP20 K ), as shown in Table 1. Concretely, KP20 K and KRAPIVIN are in-distribution test sets (denoted as D 0 ), since they both contain scientific paper abstracts paired with keyphrases provided by their authors.",
"INSPEC , NUS and SEMEVAL are out-of-distribution test sets (denoted as D 1 ), they share same type of source text with D 0 , but with additionally labeled keywords by third-party annotators.",
"DUC is a special test set which uses news articles as its source text.",
"Because it shares the least domain knowledge and vocabulary with all the other test sets, we call it out-of-domain test set (denoted as D 2 ).",
"Model Architecture: RNN vs TRANS The first thing to notice is that on present KPG, the models show consistent trends between F 1 @10 and F 1 @ O .",
"We observe that TRANS models significantly outperform RNN models when trained with the One2Seq paradigm on D 0 test sets.",
"However, when test data distribution shift increases, on D 1 test sets, RNN models starts to outperform TRANS ; eventually, when dealing with D 2 test set, RNN outperforms TRANS by a large margin.",
"On models trained with One2One paradigm, we observe a similar trend.",
"On D 0 data, TRANS models achieve comparable F 1 @10 and F 1 @ O scores with RNN , when data distribution shift increases, RNN models produce better results.",
"On the contrary, for absent KPG, TRANS outperforms RNN by a significant margin in all experiment settings.",
"This is especially obvious when models are trained with One2Seq paradigm, where RNN models barely generalize to any of the testing data and produce an average R@50 of 2.5.",
"In the same setting, TRANS models get an average R@50 of 9.8, which is 4 higher than RNN .",
"To further study the different generation behaviors between RNN and TRANS , we investigate the average number of unique predictions generated by either of the models.",
"As shown in Figure 12 in Appendix D, comparing results of order PRES-ABS in sub-figure a/b ( RNN ) with sub-figure c/d ( TRANS ), we observe that TRANS is consistently generating more unique predictions than RNN , in both cases of greedy decoding (4.5 vs 4.2) and beam search (123.3 vs 96.8).",
"We suspect that generating a more diverse set of keyphrases may have a stronger effect on in-distribution test data.",
"The generated outputs during inference are likely to represent the distribution learned from the training data, when the test data share the same (or similar) distribution, a larger set of unique predictions leads to a higher recall which further contributes to their F-scores.",
"In contrast, on test sets which data distribution is far from training distribution, the extra predictions may not be as useful, and even hurts precision.",
"Similarly, because we evaluate",
"ab-(a) Greedy Decoding, RNN d u c i n s p e c k p 20 k k r a p i v i n nu s s e m e v a l a v e r a g e F 1 @ M 8.2 8.3 5.7 9.8 7.3 6.2 8.1 7.7 5.5 22.6 22.3 19.6 21.4 21.8 18.0 24.9 22.2 17.6 28.9 29.3 31.0 27.8 30.4 28.3 31.7 29.0 28.5 28.1 27.9 30.1 27.2 30.4 26.8 31.2 28.9 27.1 31.5 31.0 31.9 28.4 31.6 26.9 35.2 31.0 29.4 28.2 27.8 25.8 25.0 24.5 22.2 30.3 26.7 21.0 24.6 24.4 24.0 23.3 24.3 21.4 26.9 24.2 21.5 A l p h a A l p h a -R e v S-> L L-> SO r i O r i -R e v P r e s -A b s A b s -P r e s R a n d o m",
"sent KPG by the models' recall, TRANS models produce more unique predictions can always outperform RNN models.",
"2 Training Paradigm: One2One vs One2Seq We observe that on present KPG tasks, models trained with the One2Seq paradigm outperforms One2One in most settings, this is particularly clear on D 1 and D 2 test sets.",
"We believe this is potentially due to the unique design of the One2Seq training paradigm where at every generation step, the model conditions its decision making on all previously generated tokens (phrases).",
"Compared to the One2One paradigm where multiple phrases can only be generated independently by beam search in parallel, the One2Seq paradigm can model the dependencies among tokens and the dependencies among phrases more explicitly.",
"However, on absent KPG, One2One consistently outperforms One2Seq .",
"Furthermore, only when trained with One2One paradigm, an RNN based model can achieve R@50 scores close to TRANS -based models.",
"This may because a One2Seq model tends to produce more duplicated predictions during beam search inference.",
"By design, every beam is a string that contains multiple phrases that concatenated by the delimiter <sep> , there is no guarantee that the phrase will not appear in multiple beams.",
"In the example shown in Figure 1, topic tracking is such a duplicate prediction that appears in multiple beams.",
"In fact, the proportion of duplicates in One2Seq predictions 2 Our TRANS and RNN models follow Vaswani et al. (2017) and Meng et al. (2017)'s hyper-parameter settings respectively.",
"RNN is significantly lighter than TRANS .",
"We conduct experiments with a much larger RNN but only observe marginal performance boost against Meng et al. (2017)'s setting.",
"is more than 90%.",
"This is in contrast with beam search on One2One models, where each beam only contains a single keyphrase thus has a much lower probability of generating duplication.",
"3 4 Does Order Matter in One2Seq ?",
"In the One2One paradigm (as shown in Figure 1), each data example is split to multiple equally weighted data pairs, thus it generates phrases without any prior on the order.",
"In contrast, One2Seq training has the unique capability of generating a varying number of keyphrases in a single sequence.",
"This inductive bias enables a model to learn dependencies among keyphrases, and also to implicitly estimate the number of target phrases conditioned on the source text.",
"However, the One2Seq approach introduces a new complication.",
"During training, the Seq2Seq decoder takes the concatenation of multiple target keyphrases as target.",
"As pointed out by Vinyals et al. (2016), order matters in sequence modeling tasks; yet the ordering among the target keyphrases has not been fully investigated and its effect to the models' performance remains unclear.",
"Several studies have noted this problem (Ye and Wang, 2018; Yuan et al., 2020) without further exploration.",
"RANDOM : Randomly shuffle the target phrases.",
"Because of the set generation nature of KPG, we expect randomly shuffled target sequences help to learn an order-invariant decoder.",
"ORI : Keep phrases in their original order in the data (e.g., provided by the authors of source texts).",
"This was used by Ye and Wang (2018).",
"S->L : Phrases sorted by lengths (number of tokens, from short to long).",
"L->S : Reversed order of S->L .",
"ORI-REV : Reversed order of ORI .",
"ALPHA : Sort phrases by alphabetical order.",
"ALPHA-REV : Reversed order of ALPHA .",
"PRES-ABS : Sort present phrases by their first occurrences in source text.",
"Absent phrases are shuffled and appended to the end of the present phrase sequence.",
"This was used by (Yuan et al., 2020).",
"Greedy Decoding In Figure 2, we show the RNN and TRANS model's F 1 @ M on present KPG task, equipped with greedy decoding.",
"In this setting, the model simply chooses the token with the highest probability at every step, and terminates either upon generating the <eos> token or reaching the maximum target length limit (40).",
"This means the model predicts phrases solely relying on its innate distribution learned from the training data, and thus this performance could somewhat reflect to which degree the model fits the training distribution and understands the task.",
"Through this set of experiments, we first observe that each model demonstrates consistent performance across all six test datasets, indicating that ordering strategies play critical roles in training One2Seq models when greedy decoding is applied.",
"When using the RNN architecture, RANDOM consistently yields lower F 1 @ M than other ordering strategies on all datasets.",
"This suggests that a consistent order of the keyphrases is beneficial.",
"However, TRANS models show a better resistance against randomly shuffled keyphrases and produce average tier performance with the RANDOM ordering.",
"Meanwhile, we observe that PRES-ABS outperforms other ordering strategies by significant margins.",
"A possible explanation is that with this order (of occurrences in the source text), the current target phrase is always to the right of the previous one, which can serve as an effective prior for the attention mechanism throughout the One2Seq decoding process.",
"We observe similar trends in greedy decoding models' F 1 @ O and F 1 @10 , due to space limit, we refer readers to Figure 9, 10 in Appendix D. Beam Search Next, we show results obtained from the same set of models equipped with beam search (beam width is 50) in Figure 3 (a/b).",
"Compared with greedy decoding (Figure 10, Appendix D), we can clearly observe the overall F 1 @10 scores have positive correlation with the beam width (greedy decoding is a special case where beam width equals to 1).",
"We observe that compared to the greedy decoding case, the pattern among different ordering strategies appears to be less clear, with the scores distributed more evenly across different settings (concretely, the absolute difference between max average score and min average score is lower).",
"We suspect that the uniformity among different ordering strategies with beam search may be due to the limitation of the evaluation metric F 1 @10 .",
"The metric F 1 @10 truncates a model's predictions to 10 top-ranked keyphrases.",
"By investigation, we find that during greedy decoding, the number of predictions acts as a dominant factor, this number varies greatly among different ordering.",
"With greedy decoding, PRES-ABS can generally predict more phrases than the others, which explains its performance advantage (Figure 13 (a/c), Appendix D).",
"However, as the beam width increases, all models can predict more than 10 phrases (Fig-ure 13 (b/d), Appendix D).",
"In this case, the F 1 @10 is contributed more by a model' ability of generating more high quality keyphrases within its top-10 outputs, rather than the amount of predictions.",
"Therefore, the performance gap among ordering strategies is gradually narrowed in beam search.",
"For instance, we observe that the F 1 @10 difference between PRES-ABS and S->L produced by RNN is 3.5/2.0/1.0/0.2 when beam width is 1/10/25/50.",
"To validate our assumption, we further investigate the same set of models' performance on F 1 @ O , which strictly truncates the generated keyphrase list by the number of ground-truth keyphrases O (where in most cases O < 10 ).",
"Under this harsher criterion, a model is required to generate more high quality keyphrases within its topO outputs.",
"From Figure 3 (c/d), we observe that the scores are less uniformly distributed, this indicates a larger difference between different order settings.",
"Among all orders, ORI produces best average F 1 @ O with RNN , whereas ALPHA-REV and ORI-REV produce best average F 1 @ O with TRANS .",
"In our curated list of order settings, there are 3 pairs of orderings with reversed relationship (i.e., S->L vs L->S , ALPHA vs ALPHA-REV , ORI vs ORI-REV ).",
"Interestingly, we observe that when beam search is applied, these orderings often show a non-negligible score difference with their counterparts.",
"This also suggests that order matters since specific model architecture and training paradigm often has its own preference on the phrase ordering.",
"It is also worth mentioning that when we manually check the output sequences in test set produced by ALPHA ordering, we notice that the model is actually able to retain alphabetical order among the predicted keyphrases, hinting that a Seq2Seq model might be capable of learning simple morphological dependencies even without access to any character-level representations.",
"Ordering in Absent KPG We report the performance of the same set of models on the absent portion of data in Figure 11, Appendix D. Although achieving relatively low R@50 in most settings, scores produced by various orderings show clear distinctions, normalized heat maps suggest that the rankings among different orderings tend to be consistent across all testing datasets.",
"In general, PRES-ABS produces better absent keyphrases across different model architectures.",
"Due to the Figure 4: Comparing models trained solely with KP20 K against with additional MAGKP data.",
"space limit, we encourage readers to check out Appendix D, which provides an exhaustive set of heat maps including all experiment settings and metrics discussed in this section.",
"In this section, we further explore the possibility of improving KPG performance by scaling up the training data.",
"Data size has been shown as one of the most effective factors for training language models (Raffel et al., 2019; Ott et al., 2018) but it has yet been discussed in the context of KPG.",
"MagKP Dataset We construct a new dataset, namely MAGKP , on the basis of Microsoft Academic Graph (Sinha et al., 2015).",
"We filter the original MAG v1 dataset (166 million papers, multiple domains) and only keep papers in Computer Science and with at least one keyphrase.",
"This results in 2.7 million data points ( 5 larger than KP20 K ).",
"This dataset remains noisy despite the stringent fil-tering criteria, this is because 1) the data is crawled from the web and 2) some keywords are labeled by automatic systems rather than humans.",
"This noisy nature brings many interesting observations.",
"General Observations The first thing we try is to train a KPG model with both KP20 K and MAGKP .",
"During training, the two dataset are fed to the model in an alternate manner, we denote this data mixing strategy as ALT.",
"In Figure 4, we compare models' performance when trained on both KP20 K and MAGKP against solely on KP20 K .",
"We observe the extra MAGKP data brings consistent improvement across most model architecture and training paradigm variants.",
"This suggests that Figure 5: A histogram showing the distribution of #(kp per document) on KP20 K , MAGKP and its subsets.",
"model KPG models discussed in this work can benefit from additional training data.",
"Among all the settings, F 1 @ O of the TRANS + One2Seq is boosted by 3 points on present KPG, the resulting score outperforms other variants by a significant margin and even surpass a host of state-of-the-art models (see comparison in Appendix E).",
"Again, the same setting obtains a 2.3 boost of R@50 score on the absent KPG task, makes TRANS + One2Seq the setting that benefits the most from extra data.",
"In contrast, the extra MAGKP data provide only marginal improvement to RNN -based models.",
"On present KPG, RNN + One2Seq even has an F 1 @ O drop when trained with more data.",
"As mentioned in 3, the RNN model is significantly lighter than TRANS .",
"To investigate if an RNN with more parameters can benefit more from MAGKP , we conduct experiments which use a GRU with much larger hidden size (dubbed BIGRNN ).",
"Results (in Appendix E) suggest otherwise, extra training data leads to negative effect on One2One and only marginal gain on One2Seq .",
"We thus believe the architecture difference between TRANS and RNN is the potential cause, for instance, the built-in self-attention mechanism may help TRANS models learning from noisy data.",
"Learning with Noisy Data To further investigate the performance boost brought by the MAGKP dataset on TRANS + One2Seq , we are curious to know which portion of the noisy data helped the most.",
"As a naturally way to cluster the MAGKP data, we define the noisiness by the number of keyphrases per data point.",
"As shown in Figure 5, the distribution of MAGKP (black border) covers a Figure 6: TRANS + One2Seq trained with KP20 K and different subsets of MAGKP , using four data mixing strategy.",
"much wider spectrum on the x-axis compared to KP20 K (red).",
"Because keyphrase labels are provided by human authors, a majority of its keyphrase numbers lie in the range of [3, 6]; however, only less than 20% of the MAGKP data overlaps with this number distribution.",
"We thus break MAGKP down into a set of smaller subset: 1) MAGKP-LN is a considerably L ess N oisy subset that contains data points that have 3~6 phrases.",
"2) MAGKP-Nlarge is the N oisy subset in which all data points have more than 10 keyphrases.",
"3) MAGKP-Nsmall is a randomly sampled subset of MAGKP-Nlarge with the same size as MAGKP-LN .",
"We also define a set of data mixing strategies to compare against ALT: ONLY : models are trained solely on a single set (or subset) of data; MX: KP20 K and MAGKP (or its subset) are split into shards (10k each) and they are randomly sampled during training; FT: models are pre-trained on MAGKP (or its subset) and fine-tuned on KP20 K .",
"In Figure 6, we observe that none of the MAGKP subsets can match KP20 K 's performance in the ONLY setting.",
"Because MAGKP-LN and MAGKP-Nsmall share similar data size with KP20 K , this suggest the distributional shift between MAGKP and the 6 testing sets is significant.",
"In the MX setting where KP20 K is mixed with noisy data, we observe a notable performance boost compared to ONLY (yet still lower than ALT), however, we do not see clear patterns among the 4 MAGKP subsets in this setting.",
"In the FT setting, we observe a surge in scores across all MAGKP subsets.",
"In present KPG, both MAGKP and MAGKP-Nlarge outperform the score achieved in the ALT setting; similarly, in absent KPG, MAGKP , MAGKP-Nlarge and MAGKP-Nsmall exceeds the ALT score.",
"This is to our surprise that the subsets considered as noisy provide a greater performance boost, while they perform poorly if O NLY trained on these subsets.",
"To sum up, during our investigation on augmenting KP20 K with the noisy MAGKP data, we obtain the best performance from a TRANS + One2Seq model that pre-trained on MAGKP and then fine-tuned on KP20 K , and this performance has outperformed current state-or-the-art models.",
"We conjecture that the performance gain may come from data diversity, because MAGKP contains a much wider distribution of data compared to the author keyword distribution as in KP20 K .",
"This inspires us to develop data augmentation techniques to exploit the diversity in unlabeled data.",
"Traditional Keyphrase Extraction Keyphrase extraction has been studied extensively for decades.",
"A common approach is to formulate it as a two-step process.",
"Specifically, a system first heuristically selects a set of candidate phrases from the text using some pre-defined features (Witten et al., 1999; Liu et al., 2011; Wang et al., 2016; Yang et al., 2017).",
"Subsequently, a ranker is used to select the top ranked candidates following various criteria.",
"The ranker can be bagged decision trees (Medelyan et al., 2009; Lopez and Romary, 2010), Multi-Layer Perceptron, Support Vector Machine (Lopez and Romary, 2010) or PageRank (Mi-halcea and Tarau, 2004; Le et al., 2016; Wan and Xiao, 2008).",
"Compared to the newly developed data driven approaches with deep neural networks, the above approaches suffer from poor performance and the need of dataset-specific heuristic design.",
"Neural Keyphrase Extraction On neural keyphrase extraction task, Zhang et al. (2016); Luan et al. (2017); Gollapalli et al. (2017) use sequence labeling approach; Subramanian et al. (2018) use pointer networks to select spans from source text; Sun et al. (2019) leverage graph neural networks.",
"Despite improved over tradition approaches, the above methods do not have the capability of predicting absent keyphrases.",
"Meng et al. (2017) first propose the CopyRNN model, which both generates words from vocabulary and points to words from the source text overcoming the barrier of predicting absent keyphrases.",
"Following this idea, Chen et al. (2018); Zhao and Zhang (2019) leverage the attention mechanism to help reducing duplication and improving coverage.",
"Ye and Wang (2018) propose a semi-supervised training strategy.",
"Yuan et al. (2020) propose One2Seq , which enables a model to generate variable number of keyphrases.",
"Chen et al. (2019b); Ye and Wang (2018); Wang et al. (2019) propose to leverage extra structure information (e.g., title, topic) to guide the generation.",
"Chan et al. (2019) propose an RL model, Swami-nathan et al. (2020) propose using GAN for KPG.",
"Chen et al. (2019a) retrieve similar documents from training data to help producing more accurate keyphrases.",
"Chen et al. (2020) introduce hierarchical decoding and exclusion mechanism to prevent from generating duplication.",
"ano and Bo-jar (2019) also propose to utilize more data, but their goal is to bridge KPG with summarization.",
"We present an empirical study discussing neural KPG models from various aspects.",
"Through extensive experiments and analysis, we answer the three questions (1).",
"Results suggest that given a carefully chosen architecture and training strategy, a base model can perform comparable with fancy SOTA models.",
"Further augmented with (noisy) data in the correct way, a base model can outperform SOTA models (Appendix E).",
"We strive to provide a guideline on how to choose such architectures and training strategies, which hopefully can be proven valuable and helpful to the community.",
"We conclude our discussion with the following takeaways: 1. One2Seq excels at present KPG, while One2One performs better on absent KPG.",
"See Section 3. 2. For present KPG, TRANS performs better on in-distribution data, when distribution or domain shift increase, RNN can outperform TRANS .",
"See Section 3. 3. On absent KPG, TRANS is the clear winner.",
"See Section 3. 4. For One2Seq , target ordering is important in greedy decoding (with PRES-ABS being an overall good choice).",
"See Section 4. 5. The effect of target ordering tends to diminish when beam search is performed.",
"See Section 4. 6.",
"Large and noisy data can benefit KPG.",
"Empirically, a decent way to leverage them is to pre-train on extra data then fine-tune on small in-domain data.",
"See Section 5. 7. Copy mechanism helps present prediction while worsening absent performance.",
"See Appendix C.1.",
"8. Larger beam width is beneficial, especially for absent KPG.",
"However, on present KPG tasks, the benefit is diminished past a certain point and thus computational efficiency needs to be carefully considered.",
"See Appendix C.2.",
"RM was supported by the Amazon Research Awards for the project Transferable, Controllable, Applicable Keyphrase Generation.",
"This research was partially supported by the University of Pittsburgh Center for Research Computing through the resources provided.",
"The authors thank the anonymous NAACL reviewers for their helpful feedback and suggestions."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Recently, various response generation models for two-party conversations have achieved impressive improvements, but less effort has been paid to multi-party conversations (MPCs) which are more practical and complicated.",
"Compared with a two-party conversation where a dialogue context is a sequence of utterances, building a response generation model for MPCs is more challenging, since there exist complicated context structures and the generated responses heavily rely on both interlocutors (i.e., speaker and addressee) and history utterances.",
"To address these challenges, we present HeterMPC, a heterogeneous graph-based neural network for response generation in MPCs which models the semantics of utterances and interlocutors simultaneously with two types of nodes in a graph.",
"Besides, we also design six types of meta relations with node-edge-type-dependent parameters to characterize the heterogeneous interactions within the graph.",
"Through multi-hop updating, HeterMPC can adequately utilize the structural knowledge of conversations for response generation.",
"Experimental results on the Ubuntu Internet Relay Chat (IRC) channel benchmark show that HeterMPC outperforms various baseline models for response generation in MPCs.",
"Enabling dialogue systems to converse naturally with humans is a challenging yet intriguing problem of artificial intelligence and has attracted increasing attention due to its promising potentials and alluring commercial values (Kepuska and Bohouta, 2018; Berdasco et al., 2019; Zhou et al., 2020).",
"A large number of researchers have focused on building dialogue generation models with various neural networks.",
"At first, researchers mostly Work done during the internship at Microsoft.",
"(b) Graphical information flow in multi-party conversation Figure 1: Illustration of a graphical information flow in an MPC.",
"Pink rectangles denote utterances and blue circles denote interlocutors.",
"Each solid line represents the replied-by \" relationship between two utterances. Each dashed line indicates the speaker of an utterance. focused on dialogues between two participants (Shang et al., 2015; Serban et al., 2016; Wen et al., 2017; Young et al., 2018). Recently, researchers have paid more attention to a more practical and challenging scenario involving more than two participants, which is well known as multi-party conversations (MPCs) (Ouchi and Tsuboi, 2016; Zhang et al., 2018; Le et al., 2019; Hu et al., 2019b; Wang et al., 2020b; Gu et al., 2021). Utterances in a two-party conversation are posted one by one between two interlocutors, constituting a sequential information flow. Different from that, utterances in an MPC can be spoken by anyone and address anyone else in this conversation, which constitutes a graphical information flow as shown in Figure 1. Although sequence-to-sequence (Seq2Seq) models (Sutskever et al., 2014; Serban et al., 2016) are effective at modeling sequential dialogues, they fall short of modeling graph-structured ones. To overcome this drawback, Hu et al. (2019b) first proposed a graph-structured network (GSN) to encode utterances based on the graph topology rather than the sequence of their appearances. The graph established in GSN was homogeneous, where nodes represented only utterances. However, interlocutors are also important components of MPCs. There exist complicated interactions between interlocutors, and between an utterance 5086 and an interlocutor. Furthermore, when passing messages over a graph, a bidirectional information flow algorithm was designed for GSN. Since both the forward and backward information flows employed the same model structure and parameters, this algorithm cannot distinguish the reply \" or replied-by \" relations between two connected utterance nodes. Also, information flows along both directions are independently propagated, so that a graph node cannot be jointly updated at a single propagation step. On account of above issues, we propose a heterogeneous graph-based neural network for response generation in MPCs, named HeterMPC. First, a heterogeneous graph is designed which employs two types of nodes to represent utterances and interlocutors respectively. Different from previous methods that built a homogeneous graph modeling only utterances, utterances and interlocutors are modeled simultaneously in HeterMPC, so that the complicated interactions between interlocutors, between utterances, and between an interlocutor and an utterance can be explicitly described. In order to characterize the heterogeneous attention over each ( source, edge, target ) triple, model parameters dependent on both types of nodes and edges are introduced when calculating attention weights and passing messages. Specifically, we introduce six types of meta relations for modeling different edges including reply and replied-by between two utterances, speak and spoken-by between an utterance and a speaker, and address and addressed-by between an utterance and an addressee. With these node-edge-type-dependent structures and parameters, HeterMPC can better utilize the structural knowledge of conversations for node representation and response generation than conventional homogeneous graphs. Finally, Transformer is employed as the backbone of HeterMPC and its model parameters can be initialized with PLMs to take advantage of the recent breakthrough on pre-training. 
We evaluate HeterMPC on the Ubuntu Internet Relay Chat (IRC) channel benchmark released by Hu et al. (2019b). Experimental results show that HeterMPC outperforms GSN (Hu et al., 2019b), GPT-2 (Radford et al., 2019), BERT (Devlin et al., 2019) and BART (Lewis et al., 2020) by significant margins in terms of both automated and human evaluation metrics, achieving a new state-of-the-art performance for response generation in MPCs. In summary, our contributions in this paper are three-fold: 1) To the best of our knowledge, this paper is the first exploration of using heterogeneous graphs for modeling conversations; 2) A Transformer-based heterogeneous graph architecture is introduced for response generation in MPCs, in which two types of nodes, six types of meta relations, and node-edge-type-dependent parameters are employed to characterize the heterogeneous properties of MPCs; 3) Experimental results show that our proposed model achieves a new state-of-the-art performance of response generation in MPCs on the Ubuntu IRC benchmark. 2 Related Work Multi-Party Conversation Existing methods on building dialogue systems can be generally categorized into generation-based (Shang et al., 2015; Serban et al., 2016; Wen et al., 2017; Young et al., 2018; Zhang et al., 2020) or retrieval-based approaches (Lowe et al., 2015; Wu et al., 2017; Zhou et al., 2018; Tao et al., 2019a,b; Gu et al., 2019, 2020). In this paper, we study the task of response generation in MPCs, where in addition to utterances, interlocutors are also important components who play the roles of speakers or addressees. Previous methods have explored retrieval-based approaches for MPCs. For example, Ouchi and Tsuboi (2016) proposed the dynamic model which updated speaker embeddings with conversation streams. Zhang et al. (2018) proposed speaker interaction RNN which updated speaker embeddings role-sensitively. Wang et al. (2020b) proposed to track the dynamic topic in a conversation. Gu et al. (2021) proposed jointly learning who says what to whom\" in a unified framework by designing self-supervised tasks during pre-training.",
"On the other hand, Hu et al. (2019b) explored generation-based approaches by proposing a graph-structured network, the core of which was an utterance-level graph-structured encoder.",
"Heterogeneous Graph Neural Network Early studies on graph neural networks (GNNs) focused on homogeneous graphs where a whole graph is composed of a single type of nodes.",
"However, graphs in real-world applications usually come with multiple types of nodes, namely heterogeneous information networks (HINs) or heterogeneous graphs (Sun and Han, 2012).",
"Recently, researchers have attempted to extend GNNs to model heterogeneity.",
"For example, Zhang et al. 5087 (2019) adopted different RNNs for different types of nodes to integrate multi-modal features.",
"Wang et al. (2019) extended graph attention networks by maintaining different weights for different meta-path-defined edges.",
"Hu et al. (2020) proposed heterogeneous graph Transformer (HGT) to model heterogeneity by maintaining dedicated representations for different types of nodes and edges.",
"In addition, heterogeneous graphs have also been applied to many NLP tasks, such as multi-hop reading comprehension (Tu et al., 2019), text classification (Hu et al., 2019a) and document summarization (Wang et al., 2020a).",
"Previous studies have verified the superiority of modeling MPCs with homogeneous graphs considering only utterances.",
"We claim that it is indeed necessary to model a complex information flow in MPCs shown in Figure 1 with a heterogeneous graph, since a homogeneous one cannot explicitly model the relationships of multiple utterances spoken by or addressing an interlocutor.",
"Nowadays, HINs have been widely used in many NLP tasks.",
"To the best of our knowledge, this paper makes the first attempt to build a heterogeneous graph-based neural network considering utterances and interlocutors simultaneously for response generation in MPCs.",
"In addition, we introduce many task-specific modelings for MPCs such as graph construction and node updating which will be elaborated in the model section.",
"The task of response generation in MPCs is to generate an appropriate response r given the conversation history, the speaker of a response, and which utterance the response is going to reply to, which can be formulated as:",
"Here, G is a heterogeneous graph containing both history conversation and the response to be generated.",
"The speaker and addressee of the response are known and its contents are masked.",
"The response tokens are generated in an autoregressive way.",
"r k and r <k stand for the k -th token and the first ( k 1) tokens of response r respectively.",
"| r | is the length of r .",
"We will introduce how to construct the graph and how to model the probability in Eq.",
"(1) given the built graph in the next section.",
"HeterMPC adopts an encoder-decoder architecture consisting of stacked encoder and decoder layers for graph-to-sequence learning (Yao et al., 2020).",
"The graph encoder is designed to capture conversation structures and output the representations of all nodes in a graph that are fed to the decoder for response generation.",
"A heterogeneous graph is constructed to explicitly model the complicated interactions between interlocutors, between utterances, and between an interlocutor and an utterance in an MPC.",
"This graph models utterances and interlocutors by considering them as two types of nodes under a unified framework.",
"Given an MPC instance composed of M utterances and I interlocutors, a heterogeneous graph G ( V , E ) is constructed.",
"Specifically, V is a set of M + I nodes.",
"Each node denotes either an utterance or an interlocutor.",
"E = { e p,q } M + I p,q =1 is a set of directed edges.",
"Each edge e p,q describes the connection from node p to node q .",
"Inspired by Sun et al. (2011, 2012), we introduce six types of meta relations { reply , replied-by , speak , spoken-by , address , addressed-by } to describe the directed edge between two graph nodes as illustrated in Figure 2. Specifically, if an utterance represented by node n replies another utterance represented by node m , the edge e n,m = reply and the reversed edge e m,n = replied-by .",
"If an utterance represented by node m is spoken by an interlocutor represented by node i , e i,m = speak and e m,i = spoken-by .",
"If an utterance represented by node n addresses an interlocutor represented by 5088 Transformer L 1 X L 2 X Transformer Transformer ATT r e p l y WMSG r e p l y WATT r e p li e d b y W MSG r e p li e d b y W ATT s p e a k WMSG s p e a k WATT add r e ss e d b y W MSG add r e ss e d b y W FFNUTR Transformer QUTRWKUTRWVUTRWKUTRWVUTRWKITRWVITRWKITRWVITRW Target node Source nodes Transformer Transformer ATT s po ke n b y W MSG s po ke n b y W ATT add r e ss WMSG add r e ss WKUTRWVUTRWKUTRWVUTRWQITRWFFNITR Source nodes Target node",
"node i , e n,i = address and e i,n = addressed-by .",
"In other cases, e p,q = NULL indicating that there is no connection between these two nodes.",
"Note that it is necessary to distinguish the bidirectional edges between every two nodes that indicate the active and passive tense information respectively.",
"In HeterMPC, each node is represented as a vector.",
"These vectors are first initialized individually without considering graph edges.",
"Utterances When encoding utterances, a [CLS] token is inserted at the start of each utterance, denoting the utterance-level representation.",
"Besides, a [SEP] token is also inserted at the end of each utterance (Devlin et al., 2019).",
"Then each utterance is encoded individually by stacked Transformer encoder layers through the self-attention mechanism to derive the contextualized utterance representations.",
"1 The output of a Transformer encoder layer is used as the input of the next layer.",
"Readers can refer to Vaswani et al. (2017) for details of Transformer.",
"Formally, the calculation for an utterance at the l -th Transformer layer is denoted as: H l +1 m = TransformerEncoder( H lm ) , (2) 1 In our experiments, BERT or BART was selected to initialize the utterance encoder layers of HeterMPC.",
"Then, the built HeterMPC models were compared with the baseline models directly finetuning BERT or BART, respectively.",
"It is worth noting that the utterance encoder layers of HeterMPC can also be initialized by other types of PLMs, and the comparison across PLMs is not the focus of this paper.",
"where m { 1 , ..., M } , l { 0 , ..., L 1 1 } , L 1 denotes the number of Transformer layers for initialization, H lm R k m d , k m denotes the length of an utterance and d denotes the dimension of embedding vectors.",
"Interlocutors Different from an utterance composed of a sequence of tokens, an interlocutor is directly represented with an embedding vector.",
"Interlocutors in a conversation are indexed according to their speaking order and the embedding vector for each interlocutor is derived by looking up an order-based interlocutor embedding table (Gu et al., 2020) that is updated during end-to-end learning.",
"The first interlocutors in all conversation sessions share the same embedding vector in the interlocutor embedding table, so do all the second interlocutors.",
"2 Thus, this order-based embedding table can be shared across the training, validation and testing sets, and there is no need to estimate an embedding vector for each specific interlocutor in the dataset.",
"As shown in Figure 3, the initialized node representations are updated by feeding them into the built graph for absorbing context information (Kipf and Welling, 2017; Velickovic et al., 2018; Yun",
"2 In our experiments, the maximum interlocutor number was set to 10 and an embedding table sized 10*768 was learned during training. We did study initializing the embedding vector of an interlocutor node by averaging the representations of all utterance nodes it speaks, but no further improvement can be achieved.",
"et al., 2019).",
"We calculate heterogeneous attention weights between connected nodes and pass messages over the graph in a node-edge-type-dependent manner, inspired by introducing parameters to maximize feature distribution differences for modeling heterogeneity (Schlichtkrull et al., 2018; Wang et al., 2019; Zhang et al., 2019; Hu et al., 2020).",
"After collecting the information from all source nodes to a target node, a node-type-dependent feed-forward network (FFN) followed by a residual connection (He et al., 2016) is employed to aggregate the information.",
"Then, in order to let each token in an utterance have access to the information from other utterances, an additional Transformer layer is placed for utterance nodes specifically.",
"L 2 denotes the number of iterations for updating both utterance and interlocutor nodes.",
"Since the representations of two types of nodes are initialized in different ways, node-type-dependent linear transformations are first applied to node representations before attention calculation so that the two types of nodes share similar feature distributions (Wang et al., 2019; Hu et al., 2020).",
"Meanwhile, each of the six relation types is assigned a separate linear projection so that the semantic relationship between two connected nodes can be accurately described when calculating attention weights.",
"The forward and backward information flows between them can also be distinguished.",
"Formally, let the triple ( s, e, t ) denote an edge e connecting a source node s to a target node t .",
"The representations of the source and target nodes at the l -th iteration 3 are denoted as h ls and h lt , serving as a key ( K ) vector and a query ( Q ) vector of attention calculation respectively.",
"Then, the heterogeneous attention weight w l ( s, e, t ) before normalization for this triple is calculated as: k l ( s ) = h ls W K ( s ) + b K ( s ) , (3) q l ( t ) = h lt W Q ( t ) + b Q ( t ) , (4) w l ( s, e, t ) = k l ( s ) W ATTe s,t q l ( t ) T e s,t d .",
"(5) Here, ( s ) , ( t ) {UTR, ITR} distinguish utterance ( UTR ) and interlocutor ( ITR ) nodes.",
"Eqs.",
"(3) and (4) are node-type-dependent linear transformations.",
"Eq.",
"(5) contains an edge-type-dependent linear projection W ATTe s,t where e s,t is an adaptive 3 For an utterance, the representation for the [CLS] token is extracted as the utterance-level representation.",
"When passing the message of a source node that serves as a value ( V ) vector to a target node, node-edge-type-dependent parameters are also introduced considering the heterogeneous properties of nodes and edges.",
"Mathematically: v l ( s ) = h ls W V ( s ) + b V ( s ) , (6) v l ( s ) = v l ( s ) W MSGe s,t , (7) where v l ( s ) is the passed message and all W R d d and b R d are parameters to be learnt.",
"For a target node, the messages passed from all its connected source nodes need to be aggregated.",
"A softmax function is applied to normalize the attention weights and then the messages from all source codes are summarized as: h lt = (cid:88) s S ( t ) softmax( w l ( s, e, t )) v l ( s ) , (8) where S ( t ) denotes the set of source nodes for the target node t .",
"Then the summarized message h lt is aggregated with the original node representation h lt using a node-type-dependent FFN followed by a residual connection (He et al., 2016) as: h l +1 t = FFN ( t ) ( h lt ) + h lt , (9) where the output h l +1 t is used as the input of the next iteration of node updating.",
"One iteration can be viewed as a single-step information propagation along edges.",
"When stacking L 2 iterations, a node can attend to other nodes up to L 2 hops away.",
"A specific consideration on utterance nodes is that the tokens except [CLS] in an utterance have no access to other utterances during the node updating process introduced above.",
"To overcome this disadvantage and derive more contextualized utterance representations, an additional Transformer layer (Vaswani et al., 2017) is further placed for utterance nodes as shown in Figure 3. In detail, at the l -th iteration, the representations of an utterance node before and after node updating, i.e., h lt and h l +1 t , are concatenated and then compressed by a linear transformation as: h l +1 t = [ h lt ; h l +1 t ] W com + b com , (10) 5090 Masked Self Attention Add & Norm Cross Attention Add & Norm Feed Forward Add & Norm Linear Softmax Output Probabilities Graph Encoder L 3 x Input: Context Output: Response Node Representations Figure 4: The decoder architecture of HeterMPC.",
"where W com R 2 d d and b com R d are parameters.",
"Then, h l +1 t replaces the representation of [CLS] (i.e., h lt ) in the sequence representations of the whole utterance.",
"Finally, the updated sequence representations are fed into the additional Transformer layer for another round of intra-utterance self-attention, so that the context information learnt by the [CLS] representation can be shared with other tokens in the utterance.",
"The decoder is composed of a stack of identical layers as shown in Figure 4. We follow the standard implementation of Transformer decoder to generate responses.",
"In each decoder layer, a masked self-attention operation is first performed where each token cannot attend to future tokens to avoid information leakage.",
"Furthermore, a cross-attention operation over the node representations of the graph encoder output is performed to incorporate graph information for decoding.",
"It is notable that a residual connection along with layer normalization is followed by each attention operation.",
"We evaluated our proposed method on the Ubuntu IRC benchmark used in Hu et al. (2019b).",
"The data processing script provided by Hu et al. (2019b) was employed to derive the dataset.",
"4 In this dataset, both speaker and addressee labels were included for each utterance in a session.",
"When testing, the 4 We contacted the authors of Hu et al. (2019b) to obtain the data processing script.",
"As they claimed, it was an updated version which was a little different from that used in their paper.",
"Thus, we re-implemented all baselines on this updated dataset to ensure fair comparison.",
"speaker and addressee information was both given for response generation, i.e., the system knew who would speak next and which utterance should be responded to following the graph structure.",
"It contained 311,725/5,000/5,000 dialogues in the training/validation/testing sets respectively.",
"We compared our proposed methods with as many MPC models as possible.",
"Considering that there are only a few research papers in this field, several recent advanced models were also adapted to provide sufficient comparisons.",
"Finally, we compared with the following baseline models: (1) RNN-based Seq2Seq (Sutskever et al., 2014) took all utterances except the target utterance to generate as input, which were sorted according to their posting time and concatenated.",
"Thus, structured conversations were converted into sequential ones.",
"Seq2Seq modeling with attention was performed as that in Sutskever et al. (2014); Bahdanau et al. (2015) on the concatenated utterances.",
"(2) Transformer (Vaswani et al., 2017) took the same input utterances as those used for the Seq2Seq model.",
"(3) GPT-2 (Radford et al., 2019) was a uni-directional pre-trained language model.",
"Following its original concatenation operation, all context utterances and the response were concatenated with a special [SEP] token as input for encoding.",
"(4) BERT (Devlin et al., 2019) concatenated all context utterances and the response similarly as those for GPT-2.",
"To adapt BERT for response generation, a special masking mechanism was designed to avoid response information leakage during encoding.",
"Concretely, each token in the context utterances attended to all tokens in the context utterances, while each token in the response cannot attend to future tokens in the utterance.",
"(5) GSN (Hu et al., 2019b) achieved the state-of-the-art performance on MPCs.",
"The core of GSN was an utterance-level graph-structured encoder.",
"(6) BART (Lewis et al., 2020) was a denoising autoencoder using a standard Tranformer-based architecture, trained by corrupting text with an arbitrary noising function and learning to reconstruct the original text.",
"In our experiments, a concatenated context started with <s> and separated with </s> were fed into the encoder, and a response were fed into the decoder.",
"To ensure all experimental results were comparable, we used the same automated and human evaluation",
"metrics as those used in previous work (Hu et al., 2019b).",
"Hu et al. (2019b) used the evaluation package released by Chen et al. (2015) including BLEU-1 to BLEU-4, METEOR and ROUGEL , which was also used in this paper.",
"5 Human evaluation was conducted to measure the quality of the generated responses in terms of three independent aspects: 1) relevance, 2) fluency and 3) informativeness.",
"Each judge was asked to give three binary scores for a response, which were further summed up to derive the final score ranging from 0 to 3. 5.4 Training Details Model parameters were initialized with pre-trained weights of bert-base-uncased released by Wolf et al. (2020).",
"The AdamW method (Loshchilov 5 https://github.com/tylin/coco-caption and Hutter, 2019) was employed for optimization.",
"The learning rate was initialized as 6 .",
"25 e 5 and was decayed linearly down to 0 .",
"The max gradient norm was clipped down to 1 .",
"0 .",
"The batch size was set to 16 with 8 gradient accumulation steps.",
"The maximum utterance length was set to 50 .",
"The number of layers for initializing utterance representations L 1 was set to 9, and the number of layers for heterogeneous graph iteration L 2 was set to 3. L 1 and L 2 were validated on the validation set.",
"The number of decoder layers L 3 was set to 6, achieving the best performance out of {2, 4, 6, 8} on the validation set.",
"The strategy of greedy search was performed for decoding.",
"The maximum length of responses for generation was also set to 50 .",
"All experiments were run on a single GeForce RTX 2080 Ti GPU.",
"The maximum number of epochs was set to 15, taking about 40 hours.",
"The validation set was used to select the best model for testing.",
"All code was implemented in the PyTorch framework 6 and are published to help replicate our results.",
"7 5.5 Evaluation Results In our experiments, BERT and BART were selected to initialize HeterMPC.",
"HeterMPC BERT denoted that the utterance encoder was initialized with BERT and the decoder was randomly initialized.",
"HeterMPC BART denoted the encoder and decoder 6 https://pytorch.org/ 7 https://github.com/lxchtan/HeterMPC 5092 were initialized by those of BART respectively.",
"Automated Evaluation Table 1 presents the evaluation results of HeterMPC BERT , HeterMPC BART and previous methods on the test set.",
"Each model ran four times with identical architectures and different random initializations, and the best out of them was reported.",
"We ran the code released by Hu et al. (2019b) to reproduce the results of GSN for a fair comparison.",
"8 The results show that both HeterMPC BERT and HeterMPC BART outperformed all baselines in terms of all metrics.",
"HeterMPC BERT outperformed GSN by 2.38% BLEU-1 and 0.44% BLEU-4, and outperformed GPT-2 by 2.24% BLEU-1 and 0.48% BLEU-4.",
"HeterMPC BART outperformed GSN by 2.03% BLEU-1 and 0.52% BLEU-4, and outperformed GPT-2 by 1.89% BLEU-1 and 0.56% BLEU-4.",
"Furthermore, HeterMPC BERT outperformed BERT by 1.71% BLEU-1 and 0.52% BLEU-4, and HeterMPC BART outperformed BART by 1.01% BLEU-1 and 0.54% BLEU-4, illustrating the importance of modeling MPC structures.",
"To further verify the effectiveness of our proposed methods, ablation tests were conducted as shown in Table 1. First, all nodes or edges were considered equivalently by employing the same linear transformations in Eqs.",
"(3) to (9) for all node or edge types without distinguishing them.",
"The drop in performance illustrates the effectiveness of the node-edge-type-dependent parameters.",
"On the other hand, interlocutor nodes were removed out of a graph and only the meta relations of reply and replied-by were left.",
"The drop in performance illustrates the importance of modeling interactions between utterances and interlocutors, and the effectiveness of the heterogeneous architecture.",
"Human Evaluation Table 2 presents the human evaluation results on a randomly sampled test set.",
"200 samples were evaluated and the order of evaluation systems was shuffled.",
"Three graduate students were asked to score from 0 to 3 (3 for the best) and the average scores were reported.",
"The Fleiss's kappa value (Fleiss, 1971) for each model was also reported, indicating moderate inter-judge agreement during evaluation.",
"It can be seen that HeterMPC BERT and HeterMPC BART achieved higher subjective quality scores than the baselines.",
"Their kappa values were also higher than the BERT and BART baselines, respectively.",
"The impact of the number of iterations (L2).",
"Figure 5 illustrates how the performance of HeterMPC BERT changed with respect to different numbers of iterations ( L 2 ) on the test set.",
"It can be seen that the performance of HeterMPC BERT was significantly improved as L 2 increased at the beginning, which shows the effectiveness of incorporating the contextual information between nodes.",
"Then, the performance was stable and dropped slightly.",
"The reason might be that models begin to overfit due to a larger set of parameters.",
"The impact of conversation length.",
"Figure 6 illustrates how the performance of HeterMPC BERT changed according to the test samples with different session lengths.",
"As the session length increased, the performance of HeterMPC BERT dropped less than that of BERT, showing the superiority of our method in dealing with longer conversations.",
"Case Study. Case studies were conducted by randomly sampling two MPC instances, as shown in Table 3. Given the conversation graph of the first case, the response to generate addresses I.2, so the information relevant to I.2 should be collected. We can see that gparted in the first utterance is two hops away from I.2 (the first utterance is replied to by the second utterance, which is spoken by I.2), while this word in the fourth utterance and install gparted in the third utterance are both one hop away from I.2 (these two utterances directly address I.2).",
"The responses generated by HeterMPC BERT and HeterMPC BART both contain these keywords, showing that they capture the conversation graph information accurately and generate human-like responses.",
"However, due to the lack of the interlocutor information and the conversation structure, GSN generated an irrelevant response.",
"BERT generated a response which seems to reply to the third utterance.",
"Although BART captured gparted , it failed to handle the action install .",
"In the second case, we can see that the responses generated by GSN, BERT and BART are generic and uninformative, while HeterMPC BERT and HeterMPC BART can still generate a suitable response.",
"Due to the complicated interactions between utterances and interlocutors, the conversation flow might be dominated by unnecessary information, which shows the importance of making models aware of the conversation structure.",
"Robustness.",
"The addressee labels are important for constructing a graph used in HeterMPC.",
"This kind of label is commonly available in real life, such as A@B labels in group chats, Twitter, Reddit and various forums, which denote speaker A talking to addressee B. However, addressee labels are missing for some utterances in existing MPC datasets, since a speaker may forget to specify an addressee.",
"HeterMPC is robust to this, since utterances without addressee labels can be assigned a general addressee label, To all interlocutors.",
"We leave evaluation on other datasets to future work.",
"We present HeterMPC to model complicated interactions between utterances and interlocutors in MPCs with a heterogeneous graph.",
"Two types of graph nodes and six types of edges are designed.",
"Node-edge-type-dependent parameters are introduced for better utilizing the structural knowledge of conversations during node updating.",
"Results show that HeterMPC outperforms baselines by significant margins, achieving a new state-of-the-art performance for response generation in MPCs on the Ubuntu IRC benchmark.",
"In the future, we will explore better ways of maximizing feature distribution differences to model heterogeneity.",
"We thank anonymous reviewers for their valuable comments."
] | [
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"other"
] |
[
"Capturing interactions among event arguments is an essential step towards robust event argument extraction (EAE).",
"However, existing efforts in this direction suffer from two limitations: 1) The argument role type information of contextual entities is mainly utilized as training signals, ignoring the potential merits of directly adopting it as semantically rich input features; 2) The argument-level sequential semantics, which implies the overall distribution pattern of argument roles over an event mention, is not well characterized.",
"To tackle the above two bottlenecks, we formalize EAE as a Seq2Seq-like learning problem for the first time, where a sentence with a specific event trigger is mapped to a sequence of event argument roles.",
"A neural architecture with a novel Bi-directional Entity-level Recurrent Decoder (BERD) is proposed to generate argument roles by incorporating contextual entities' argument role predictions, like a word-by-word text generation process, thereby distinguishing implicit argument distribution patterns within an event more accurately.",
"Event argument extraction (EAE), which aims to identify the entities serving as event arguments and classify the roles they play in an event, is a key step towards event extraction (EE).",
"For example, given that the word fired triggers an Attack event in the sentence In Baghdad, a cameraman died when an American tank fired on the Palestine Hotel, EAE needs to identify that Baghdad, cameraman, American tank, and Palestine Hotel are arguments with Place, Target, Instrument, and Target as roles respectively.",
"In recent years, EAE has achieved significant progress (Chen et al., 2015; Nguyen et al., 2016; Sha et al., 2018; Yang et al., 2019; Wang et al., 2019b; Zhang et al., 2020; Du and Cardie, 2020).",
"Many efforts have been devoted to improving EAE by better characterizing argument interaction, categorized into two paradigms.",
"The first one, named inter-event argument interaction in this paper, concentrates on mining information of the target entity (candidate argument) in the context of other event instances (Yu et al., 2011; Nguyen et al., 2016), e.g., the evidence that a Victim argument for the Die event is often the Target argument for the Attack event in the same sentence.",
"The second one is intra-event argument interaction , which exploits the relationship of the target entity with others in the same event instance (Yu et al., 2011; Sha et al., 2016, 2018).",
"We focus on the second paradigm in this paper.",
"Despite their promising results, existing methods on capturing intra-event argument interaction suffer from two bottlenecks.",
"(1) The argument role type information of contextual entities is underutilized.",
"As two representative explorations, dBRNN (Sha et al., 2018) uses an intermediate tensor layer to capture latent interactions between candidate arguments, while RBPB (Sha et al., 2016) estimates whether two candidate arguments belong to one event or not, with the estimates serving as constraints on a Beam-Search-based prediction algorithm.",
"Generally, these works use the argument role type information of contextual entities as auxiliary supervision signals for training to refine input representation.",
"However, one intuitive observation is that the argument role types can be utilized straightforwardly as semantically rich input features , like how we use entity type information.",
"To verify this intuition, we conduct an experiment on ACE 2005 English corpus, in which CNN (Nguyen and Grishman, 2015) is utilized as a baseline.",
"For an entity, we incorporate the ground-truth roles of its contextual arguments into the baseline model's input representation, obtaining model CNN(w. role type).",
"As expected, CNN(w. role type) outperforms CNN significantly, as shown in Table 1.",
"The challenge of the method lies in knowing the ground-truth roles of contextual entities in the inference (or testing) phase.",
"That is one possible reason why existing works do not investigate in this direction.",
"Here we can simply use predicted argument roles to approximate corresponding ground truth for inference.",
"We believe that the noise brought by prediction is tolerable, considering the stimulating effect of using argument roles directly as input.",
"(2) The distribution pattern of multiple argument roles within an event is not well characterized.",
"For events with many entities, distinguishing the overall appearance patterns of argument roles is essential to make accurate role predictions.",
"In dBRNN (Sha et al., 2018), however, there is no specific design involving constraints or interaction among multiple prediction results, though the argument representation fed into the final classifier is enriched with synthesized information (the tensor layer) from other arguments.",
"RBPB (Sha et al., 2016) explicitly leverages simple correlations inside each argument pair, ignoring more complex interactions in the whole argument sequence.",
"Therefore, we need a more reliable way to learn the sequential semantics of argument roles in an event.",
"To address the above two challenges, we formalize EAE as a Seq2Seq-like learning problem (Bahdanau et al., 2014) of mapping a sentence with a specific event trigger to a sequence of event argument roles.",
"To fully utilize both left- and right-side argument role information, inspired by the bidirectional decoder for machine translation (Zhang et al., 2018), we propose a neural architecture with a novel Bi-directional Entity-level Recurrent Decoder (BERD) to generate event argument roles entity by entity.",
"The predicted argument role of an entity is fed into the decoding module for the next or previous entity recurrently, like a text generation process. (In the experiments we skip the event detection phase and directly assume all the triggers are correctly recognized.)",
"In this way, BERD can identify candidate arguments in a way that is more consistent with the implicit distribution pattern of multiple argument roles within a sentence, similar to text generation models that learn to generate word sequences following certain grammatical rules or text styles.",
"The contributions of this paper are:",
"1. We formalize the task of event argument extraction as a Seq2Seq-like learning problem for the first time, where a sentence with a specific event trigger is mapped to a sequence of event argument roles.",
"2. We propose a novel architecture with a Bidirectional Entity-level Recurrent Decoder (BERD) that is capable of leveraging the argument role predictions of left- and right-side contextual entities and distinguishing argument roles' overall distribution pattern.",
"3. Extensive experimental results show that our proposed method outperforms several competitive baselines on the widely-used ACE 2005 dataset.",
"BERD's superiority is more significant given more entities in a sentence.",
"Most previous works formalize EAE as either a word-level sequence labeling problem (Nguyen et al., 2016; Zeng et al., 2016; Yang et al., 2019) or an entity-oriented classic classification problem (Chen et al., 2015; Wang et al., 2019b).",
"We formalize EAE as a Seq2Seq-like learning problem as follows.",
"Let S = {w_1, ..., w_n} be a sentence, where n is the sentence length and w_i is the i-th token.",
"Also, let E = {e_1, ..., e_k} be the entity mentions in the sentence, where k is the number of entities.",
"Given that an event triggered by t ∈ S is detected in the ED stage, EAE needs to map the sentence with the event to a sequence of argument roles R = {y_1, ..., y_k}, where y_i denotes the argument role that entity e_i plays in the event.",
"We employ an encoder-decoder architecture for the problem defined above, which is similar to most Seq2Seq models in machine translation (Vaswani et al., 2017; Zhang et al., 2018), automatic text summarization (Song et al., 2019; Shi et al., 2021), and speech recognition (Tüske et al., 2019; Hannun et al., 2019) from a high-level perspective.",
"In particular, as Figure 1 shows, our architecture consists of an encoder that converts the sentence S with a specific event trigger into intermediate vec-torized representation and a decoder that generates a sequence of argument roles entity by entity.",
"The decoder is an entity-level recurrent network whose number of decoding steps is fixed, the same as the entity number in the corresponding sentence.",
"On each decoding step, we feed the prediction results of the previously-processed entity into the recurrent unit to make prediction for the current entity.",
"Since the predicted results of both left- and right-side entities can be potentially valuable information, we further incorporate a bidirectional decoding mechanism that effectively integrates a forward decoding process and a backward decoding process.",
"Given the sentence S = (w_1, ..., w_n) containing a trigger t ∈ S and k candidate arguments E = {e_1, ..., e_k}, an encoder is adopted to encode the word sequence into a sequence of continuous representations $H = (h_1, \ldots, h_n) = F(S)$ (1), where F(·) is the neural network encoding the sentence.",
"In this paper, we select BERT (Devlin et al., 2019) as the encoder.",
"Since the representation H does not contain event type information, which is essential for predicting argument roles, we append a special phrase denoting the event type of t to each input sequence, such as # ATTACK #.",
"Different from traditional token-level Seq2Seq models, we use a bi-directional entity-level recurrent decoder (BERD) with a classifier to generate a sequence of argument roles entity by entity.",
"BERD consists of a forward and backward recurrent decoder, which exploit the same recurrent unit architecture as follows.",
"The recurrent unit is designed to explicitly utilize two kinds of information: (1) the instance information, which contains the sentence, event, and candidate argument (denoted by S, t, e); and (2) the contextual argument information, which consists of argument roles of other entities (denoted by A). (The trigger word can be detected by any event detection model, which is not the scope of this paper.)",
"The recurrent unit exploits two corresponding feature extractors as follows: Instance Feature Extractor.",
"Given the representation H generated by encoder, dynamic multi-pooling (Chen et al., 2015) is then applied to extract max values of three split parts, which are decided by the event trigger and the candidate argument.",
"The three hidden embeddings are aggregated into an instance feature representation x as follows: $[x_{1,p_t}]_i = \max\{[h_1]_i, \ldots, [h_{p_t}]_i\}$, $[x_{p_t+1,p_e}]_i = \max\{[h_{p_t+1}]_i, \ldots, [h_{p_e}]_i\}$, $[x_{p_e+1,n}]_i = \max\{[h_{p_e+1}]_i, \ldots, [h_n]_i\}$, $x = [x_{1,p_t}; x_{p_t+1,p_e}; x_{p_e+1,n}]$ (2), where $[\cdot]_i$ is the i-th value of a vector, and p_t, p_e are the positions of trigger t and candidate argument e.",
"Argument Feature Extractor.",
"To incorporate previously-generated arguments, we exploit a CNN network to encode the instance with argument information as follows.",
"Input.",
"Different from Chen et al. (2015), where the input embedding of each word consists of its word embedding, position embedding, and event type embedding, we append the embedding of argument roles to the input embedding for each word by looking up the vector A, which records the argument role for each token in S.",
"In A, tokens of previously-predicted arguments are assigned the generated labels, tokens of the candidate entity e are assigned a special label To-Predict, and the other tokens are assigned the label N/A.",
"Convolution.",
"The convolution layer is applied to encode the word sequence into hidden embeddings: $(h^a_1, \ldots, h^a_n) = \mathrm{CNN}(w_1, \ldots, t, \ldots, e, \ldots, w_n)$ (3), where the superscript a denotes argument.",
"(Equation 2 assumes that the entity mention lies after the trigger; if the entity mention lies before the trigger, we switch p_t and p_e in the equation to obtain the right split.)",
"The hidden embeddings are then pooled to obtain the argument feature x^a, which is concatenated with the instance feature x as the input feature representation for the argument role classifier, which estimates the role that e plays in the event as follows: $p = f(W[x; x^a] + b)$, $o = \mathrm{Softmax}(p)$ (5), where W and b are weight parameters.",
"o is the probability distribution over the role label space.",
"For the sake of simplicity, in the rest of the paper we use Unit(S, t, e, A) to represent the calculation of the probability distribution o by the recurrent unit with S, t, e, A as inputs.",
"Given the sentence S with k candidate arguments E = {e_1, ..., e_k}, the forward decoder exploits the above recurrent unit and generates the argument role sequence in a left-to-right manner.",
"The conditional probability of the argument role sequence is calculated as follows: $P(R \mid E, S, t) = \prod_{i=1}^{k} p(y_i \mid e_i; R_{<i}, S, t)$ (6), where $R_{<i}$ denotes the role sequence $\{y_1, \ldots, y_{i-1}\}$ for $\{e_1, \ldots, e_{i-1}\}$.",
"For the i-th entity e_i, the recurrent unit generates a prediction as follows: $y_i = \mathrm{Unit}(S, t, e_i, A_i)$ (7), where y_i denotes the probability distribution over the label space for e_i and A_i denotes the contextual argument information at the i-th decoding step, which contains the previously-predicted argument roles $R_{<i}$.",
"Then we update A_{i+1} by labeling e_i as g(y_i) for the next step i+1, where g(y_i) denotes the label with the highest probability under the distribution y_i.",
"The argument feature extracted by the recurrent units of the forward decoder is denoted as $\overrightarrow{x}^a_i$.",
"The backward decoder is similar to the forward decoder, except that it performs decoding in a right-to-left way as follows:",
"where $R_{>i}$ denotes the role sequence $\{y_{i+1}, \ldots, y_k\}$ for $\{e_{i+1}, \ldots, e_k\}$.",
"The probability distribution over the label space for the i-th entity e_i is calculated as follows: $y_i = \mathrm{Unit}(S, t, e_i, A_i)$ (9), where A_i denotes the contextual argument information at the i-th decoding step, which contains the previously-predicted argument roles $R_{>i}$.",
"We update A_{i-1} by labeling e_i as g(y_i) for the next step i−1. The argument feature extracted by the recurrent units of the backward decoder is denoted as $\overleftarrow{x}^a_i$.",
"To utilize both left- and right-side argument information, a classifier is then adopted to combine the argument features of both decoders and make the final prediction for each entity e_i as follows:",
"where y_i denotes the final probability distribution for e_i.",
"W_c and b_c are weight parameters.",
"As seen, the forward decoder and backward decoder in BERD mainly play two important roles.",
"The first one is to yield intermediate argument features for the final classifier, and the second one is to make the initial predictions fed into the argument feature extractor.",
"Since the initial predictions of the two decoders are crucial to generate accurate argument features, we need to optimize their own classifier in addition to the final classifier.",
"We use $\overrightarrow{p}(y_i \mid e_i)$ and $\overleftarrow{p}(y_i \mid e_i)$ to represent the probability of e_i playing role y_i estimated by the forward and backward decoder respectively.",
"$p(y_i \mid e_i)$ denotes the final estimated probability of e_i playing role y_i, computed by Equation 10.",
"The optimization objective function is defined as follows: $J(\theta) = \sum_{S \in D} \sum_{t \in S} \sum_{e_i \in E_S} \big[ \lambda \log p(y_i \mid e_i; R_{\neq i}, S, t) + \alpha \log \overrightarrow{p}(y_i \mid e_i) + \beta \log \overleftarrow{p}(y_i \mid e_i) \big]$ (11), where D denotes the training set and t ∈ S denotes the trigger word detected by the previous event detection model in sentence S.",
"E_S represents the entity mentions in S.",
"λ, α and β are the weights for the losses of the final classifier, forward decoder and backward decoder respectively.",
"During training, we apply the teacher forcing mechanism, where gold argument information is fed into BERD's recurrent units, enabling parallel computation and greatly accelerating the training process.",
"Once the model is trained, we first use the forward decoder with a greedy search to sequentially generate a sequence of argument roles in a left-to-right manner.",
"Then, the backward decoder performs decoding in the same way but a right-to-left manner.",
"Finally, the classifier combines both left- and right-side argument features and makes a prediction for each entity as Equation 10 shows.",
"Following most works on EAE (Nguyen et al., 2016; Sha et al., 2018; Yang et al., 2019; Du and Cardie, 2020), we evaluate our models on the most widely-used ACE 2005 dataset, which contains 599 documents annotated with 33 event subtypes and 35 argument roles.",
"We use the same test set containing 40 newswire documents, a development set containing 30 randomly selected documents and training set with the remaining 529 documents.",
"We notice that Wang et al. (2019b) used the TAC KBP dataset, which we cannot access online or acquire from them due to copyright.",
"We believe experimenting with settings consistent with most related works (e.g., 27 out of 37 top papers used only the ACE 2005 dataset in the last four years) should yield convincing empirical results.",
"We adopt BERT (Devlin et al., 2019) as encoder and the proposed bi-directional entity-level recurrent decoder as decoder for the experiment.",
"The hyperparameters used in the experiment are listed below.",
"BERT.",
"The hyperparameters of BERT are the same as those of the BERT-BASE model (https://github.com/google-research/bert).",
"We use a dropout probability of 0.1 on all layers.",
"Argument Feature Extractor.",
"Dimensions of word embedding, position embedding, event type embedding and argument role embedding for each token are 100, 5, 5, 10 respectively.",
"We utilize 300 convolution kernels with size 3. The GloVe embeddings (Pennington et al., 2014) are utilized for the initialization of word embeddings (https://nlp.stanford.edu/projects/glove/).",
"Training.",
"Adam with a learning rate of 6e-05, β1 = 0.9, β2 = 0.999, L2 weight decay of 0.01 and learning rate warmup proportion of 0.1 is used for optimization.",
"We set the training epochs and batch size to 40 and 30 respectively.",
"Besides, we exploit a dropout with rate 0.5 on the concatenated feature representations.",
"The loss weights λ, α and β are set to 1.0, 0.5 and 0.5 respectively.",
"We compare our method against the following four baselines.",
"The first two are state-of-the-art models that predict arguments separately without considering argument interaction.",
"We also implement two variants of DMBERT utilizing the latest inter-event and intra-event argument interaction method, named BERT(Inter) and BERT(Intra) respectively.",
"1. DMBERT, which adopts BERT as the encoder and generates a representation for each entity mention based on the dynamic multi-pooling operation (Wang et al., 2019a).",
"The candidate arguments are predicted separately.",
"2. HMEAE, which leverages the concept hierarchy of argument roles and utilizes hierarchical modular attention for event argument extraction (Wang et al., 2019b).",
"3. BERT(Inter) which enhances DMBERT with inter-event argument interaction adopted by Nguyen et al. (2016).",
"The memory matrices are introduced to store dependencies among event triggers and argument roles.",
"4. BERT(Intra) which incorporates intra-event argument interaction adopted by Sha et al. (2018) into DMBERT.",
"The tensor layer and self-matching attention matrix with the same settings are applied in the experiment.",
"Following previous work (Wang et al., 2019b), we use a pipelined approach for event extraction and implement DMBERT as event detection model.",
"The same event detection model is used for all the baselines to ensure a fair comparison.",
"Note that Nguyen et al. (2016) use the last word to represent the entity mention, which may lead to insufficient semantic information and inaccurate evaluation, considering that entity mentions may consist of multiple words and overlap with each other.",
"We sum hidden embedding of all words when collecting lexical features for each entity mention.",
"(Sha et al. (2018) do not provide these details.)",
"The performance of BERD and the baselines is shown in Table 2 (statistically significant with p < 0.05), from which we have several main observations.",
"(1) Compared with the latest best-performed baseline HMEAE, our method BERD achieves an absolute improvement of 1.0 F1, clearly achieving competitive performance.",
"(2) Incorporation of argument interactions brings significant improvements over vanilla DMBERT.",
"For example, BERT(Intra) gains a 1.5 F1 improvement compared with DMBERT, which has the same architecture except for argument interaction.",
"(3) Intra-event argument interaction brings more benefit than inter-event interaction (57.8 for BERT(Inter) vs. 58.7 for BERT(Intra) vs. 60.3 for BERD).",
"(4) Compared with BERT(Inter) and BERT(Intra), our proposed BERD achieves the most significant improvements.",
"We attribute the solid enhancement to BERD's novel seq2seq-like architecture that effectively exploits the argument roles of contextual entities.",
"To further investigate how our method improves performance, we conduct comparison and analysis on effect of entity numbers.",
"Specifically, we first divide the event instances of test set into some subsets based on the number of entities in an event.",
"Since events with a specific number of entities may be too few, results on subsets covering ranges of entity numbers yield more robust and convincing conclusions.",
"To make the number of events in all subsets as balanced as possible, we finally get a division into four subsets, whose entity numbers are in the ranges [1,3], [4,6], [7,9], and [10,∞), and whose event quantities account for 28.4%, 28.2%, 25.9%, and 17.5%, respectively.",
"The performance of all models on the four subsets is shown in Figure 2, from which we can observe a general trend that BERD outperforms other baselines more significantly if more entities appear in an event.",
"[Figure 2: Comparison on four subsets with different ranges of entity numbers (F1-score of DMBERT, HMEAE, BERT(Inter), BERT(Intra) and BERD on Subset-1 to Subset-4).]",
"More entities usually mean more complex contextual information for a candidate argument, which will lead to performance degradation.",
"BERD alleviates degradation better because of its capability of capturing argument role information of contextual entities.",
"We notice that BERT(Intra) also outperforms DMBERT significantly on Subset-4 , which demonstrates the effectiveness of intra-event argument interaction.",
"Note that the performance on Subset-1 is worse than that on Subset-2 , looking like an outlier.",
"The reason lies in that the performance of the first-stage event detection model on Subset-1 is much poorer (e.g., 32.8 of F1 score for events with one entity).",
"Though performance improvement can be easily observed, it is nontrivial to quantitatively verify how BERD captures the distribution pattern of multiple argument roles within an event.",
"In this section, we partly investigate this problem by exploring the effect of overlapping entities.",
"Since there is usually only one entity serving as argument roles in multiple overlapping entities, we believe sophisticated EAE models should identify this pattern.",
"Therefore, we divide the test set into two subsets ( Subset-O and Subset-N ) based on whether an event contains overlapping entity mentions and check all models' performance on these two subsets.",
"Table 3 shows the results, from which we can find that all baselines perform worse on Subset-O .",
"It is a natural result since multiple overlapping entities usually have similar representations, making the pattern mentioned above challenging to capture.",
"BERD performs well on both Subset-O and Subset-N, and its superiority over the baselines on Subset-O is more significant. [Table 3: Comparison on sentences with and without overlapping entities (Subset-O vs. Subset-N): DMBERT 56.4/59.4, HMEAE 58.8/59.6, BERT(Inter) 57.3/58.8, BERT(Intra) 58.5/59.2, BERD 60.5/60.1.]",
"We attribute it to BERD's capability of distinguishing argument distribution patterns.",
"To further investigate the effectiveness of the bidirectional decoding process, we exclude the backward decoder or forward decoder from BERD and obtain two models with only a unidirectional decoder, whose performance is shown in the lines -w/ Forward Decoder and -w/ Backward Decoder in Table 4. From the results, we can observe that: (1) When decoding with only the forward or backward decoder, the performance decreases by 1.6 and 1.3 in terms of F1 respectively.",
"The results clearly demonstrate the superiority of the bidirectional decoding mechanism. (2) Though the two model variants suffer performance degradation, they still outperform DMBERT significantly, once again verifying that exploiting contextual argument information, even in only one direction, is beneficial to EAE.",
"Considering that the number of model parameters decreases when excluding the forward/backward decoder, we build another two model variants with two decoders of the same direction (denoted by -w/ Forward Decoder x2 and -w/ Backward Decoder x2), whose parameter counts are exactly equal to BERD's.",
"Table 4 shows that the two enlarged single-direction models have similar performance with their original versions.",
"We can conclude that the improvement comes from the complementarity of the two decoders with different directions, rather than from the increase in model parameters.",
"Besides, we exclude the recurrent mechanism by preventing argument role predictions of contextual entities from being fed into the decoding module, obtaining another model variant named -w/o Recurrent Mechanism.",
"The performance degradation clearly shows the value of the recurrent decoding process incorporating argument role information.",
"To promote understanding of our method, we demonstrate three concrete examples in Figure 3. Sentence S1 contains a Transport event triggered by sailing.",
"DMBERT and BERT(Intra) assign the Destination role to the candidate arguments the perilous Strait of Gibraltar, the southern mainland and the Canary Islands out in the Atlantic, the first two of which are mislabeled.",
"It's an unusual pattern that a Transport event contains multiple destinations.",
"DMBERT and BERT(Intra) fail to recognize such patterns, showing that they cannot well capture this type of correlation among prediction results.",
"Our BERD, however, leverages previous predictions to generate argument roles entity by entity within a sentence, successfully avoiding this unusual pattern.",
"S2 contains a Transport event triggered by visited, and 4 nested entities exist in the phrase Ankara police chief Ercument Yilmaz.",
"Since these nested entities share the same sentence context, it is not strange that DMBERT wrongly predicts such entities as the same argument role Artifact .",
"Thanks to the bidirectional entity-level recurrent decoder, our method can recognize the distribution pattern of arguments better and hence correctly identifies these nested entities as false instances.",
"In this case, BERD reduces 3 false-positive predictions compared with DMBERT, confirming the results and analysis of Table 3. As a qualitative error analysis, the last example S3 demonstrates that incorporating previous predictions may also lead to an error propagation problem.",
"S3 contains a Marry event triggered by marry.",
"The entity home is mislabeled with the Time-Within role by BERD, and this wrong prediction is then used as argument features when identifying the entity later in the afternoon, whose role is Time-Within.",
"As analyzed in the first case, BERD tends to avoid repetitive roles in a sentence, leading to this entity being predicted incorrectly. [Figure 3: Case study. S1: Tens of thousands of destitute Africans try to enter Spain illegally each year by crossing the perilous Strait of Gibraltar to reach the southern mainland or by sailing northwest to the Canary Islands out in the Atlantic. S2: Ankara police chief Ercument Yilmaz visited the site of the morning blast but refused to say if a bomb had caused the explosion. S3: Prison authorities have given the nod for Anwar to be taken home later in the afternoon to marry his eldest daughter, Nurul Izzah, to engineer Raja Ahmad Sharir Iskandar in a traditional Malay ceremony, he said. Each candidate argument is listed with the compared models' predicted roles, with incorrect predictions marked.]",
"Having covered research on EAE in Section 1, we mainly introduce the related work that inspired our technical design in the following.",
"Though our recurrent decoder is entity-level, our bidirectional decoding mechanism is inspired by some bidirectional decoders in token-level Seq2Seq models, e.g., of machine translation (Zhou et al., 2019), speech recognition (Chen et al., 2020) and scene text recognition (Gao et al., 2019).",
"We formalize the task of EAE as a Seq2Seq-like learning problem instead of a classic classification problem or sequence labeling problem.",
"We have found that there are also some works performing classification or sequence labeling in a Seq2Seq manner in other fields.",
"For example, Yang et al. (2018) formulates the multi-label classification task as a sequence generation problem to capture the correlations between labels.",
"Daza and Frank (2018) explores an encoder-decoder model for semantic role labeling.",
"We are the first to employ a Seq2Seq-like architecture to solve the EAE task.",
"We have presented BERD, a neural architecture with a Bidirectional Entity-level Recurrent Decoder that achieves competitive performance on the task of event argument extraction (EAE).",
"One main characteristic that distinguishes our techniques from previous works is that we formalize EAE as a Seq2Seq-like learning problem instead of a classic classification or sequence labeling problem.",
"The novel bidirectional decoding mechanism enables BERD to utilize both left- and right-side argument predictions effectively, generating a sequence of argument roles that better follows the overall distribution patterns over a sentence.",
"As pioneer research that introduces the Seq2Seq-like architecture into the EAE task, BERD also faces some open questions.",
"For example, since we use gold argument roles as prediction results during training, how to alleviate the exposure bias problem is worth investigating.",
"We are also interested in incorporating our techniques into more sophisticated models that jointly extract triggers and arguments.",
"We thank anonymous reviewers for valuable comments.",
"This research was supported by the National Key Research And Development Program of China (No.2019YFB1405802) and the central government guided local science and technology development fund projects (science and technology innovation base projects) No. 206Z0302G."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"other",
"objective",
"result",
"objective",
"objective",
"abstain",
"objective",
"method",
"other",
"other"
] |
[
"Cross-Lingual Word Embeddings (CLWEs) encode words from two or more languages in a shared high-dimensional space in which vectors representing words with similar meaning (regardless of language) are closely located.",
"Existing methods for building high-quality CLWEs learn mappings that minimise the ℓ2 norm loss function.",
"However, this optimisation objective has been demonstrated to be sensitive to outliers.",
"Based on the more robust Manhattan norm (aka. ℓ1 norm) goodness-of-fit criterion, this paper proposes a simple post-processing step to improve CLWEs.",
"An advantage of this approach is that it is fully agnostic to the training process of the original CLWEs and can therefore be applied widely.",
"Extensive experiments are performed involving ten diverse languages and embeddings trained on different corpora.",
"Evaluation results based on bilingual lexicon induction and cross-lingual transfer for natural language inference tasks show that the ℓ1 refinement substantially outperforms four state-of-the-art baselines in both supervised and unsupervised settings.",
"It is therefore recommended that this strategy be adopted as a standard for CLWE methods.",
"Cross-Lingual Word Embedding (CLWE) techniques have recently received significant attention as an effective means to support Natural Language Processing applications for low-resource languages, e.g., machine translation (Artetxe et al., 2018b) and transfer learning (Peng et al., 2021).",
"The most successful CLWE models are the so-called projection-based methods, which learn mappings between monolingual word vectors with very little, or even zero, cross-lingual supervision (Lample et al., 2018; Artetxe et al., 2018a; Glavaš et al., 2019).",
"Mainstream projection-based CLWE models typically identify orthogonal mappings by minimising the topological dissimilarity between source and target embeddings based on ℓ2 loss (aka. Frobenius loss or squared error) (Glavaš et al., 2019; Ruder et al., 2019).",
"This learning strategy has two advantages.",
"First, adding the orthogonality constraint to the mapping function has been demonstrated to significantly enhance the quality of CLWEs (Xing et al., 2015).",
"Second, the existence of a closed-form solution to the ℓ2 optima (Schönemann, 1966) greatly simplifies the computation required (Artetxe et al., 2016; Smith et al., 2017).",
"Despite its popularity, work in various application domains has noted that ℓ2 loss is not robust to noise and outliers.",
"It is widely known in computer vision that ℓ2-loss-based solutions can severely exaggerate noise, leading to inaccurate estimates (Aanæs et al., 2002; De La Torre and Black, 2003).",
"In data mining, Principal Component Analysis (PCA) using ℓ2 loss has been shown to be sensitive to the presence of outliers in the input data, degrading the quality of the feature space produced (Kwak, 2008).",
"Previous studies have demonstrated that the processes used to construct monolingual and cross-lingual embeddings may introduce noise (e.g. via reconstruction error (Allen and Hospedales, 2019) and structural variance (Ruder et al., 2019)), making the presence of outliers more likely.",
"Empirical analysis of CLWEs also demonstrates that more distant word pairs (which are more likely to be outliers) have more influence on the behaviour of ℓ2 loss than closer pairs.",
"This raises the question of the appropriateness of ℓ2 loss functions for CLWEs.",
"Compared to the conventional ℓ2 loss, ℓ1 loss (aka. Manhattan distance) has been mathematically demonstrated to be less affected by outliers (Rousseeuw and Leroy, 1987) and empirically proven useful in computer vision and data mining (Aanæs et al., 2002; De La Torre and Black, 2003; Kwak, 2008).",
"Motivated by this insight, our paper proposes a simple yet effective postprocessing technique to improve the quality of CLWEs: adjust the alignment of any cross-lingual vector space to minimise the ℓ1 loss without violating the orthogonality constraint.",
"Specifically, given existing CLWEs, we bidirectionally retrieve bilingual vectors and optimise their Manhattan distance using a numerical solver.",
"The approach can be applied to any CLWEs, making the post-hoc refinement technique generic and applicable to a wide range of scenarios.",
"We believe this to be the first application of ℓ1 loss to the CLWE problem.",
"To demonstrate the effectiveness of our method, we select four state-of-the-art baselines and conduct comprehensive evaluations in both supervised and unsupervised settings.",
"Our experiments involve ten languages from diverse branches/families and embeddings trained on corpora of different domains.",
"In addition to the standard Bilingual Lexicon Induction (BLI) benchmark, we also investigate a downstream task, namely cross-lingual transfer for Natural Language Inference (NLI).",
"In all setups tested, our algorithm significantly improves the performance of strong baselines.",
"Finally, we provide an intuitive visualisation illustrating why ℓ1 loss is more robust than its ℓ2 counterpart when refining CLWEs (see Fig. 1).",
"Our code is available at https://github.com/Pzoom522/L1-Refinement.",
"Our contribution is three-fold: (1) we propose a robust refinement technique based on the ℓ1 norm training objective, which can effectively enhance CLWEs; (2) our approach is generic and can be directly coupled with both supervised and unsupervised CLWE models; (3) our ℓ1 refinement algorithm achieves state-of-the-art performance for both BLI and cross-lingual transfer for NLI tasks.",
"CLWE methods.",
"One approach to generating CLWEs is to train shared semantic representations using multilingual texts aligned at sentence or document level (Vulić and Korhonen, 2016; Upadhyay et al., 2016).",
"Although this research direction has been well studied, the parallel setup requirement for model training is expensive, and hence impractical for low-resource languages.",
"Recent years have seen an increase in interest in projection-based methods, which train CLWEs by finding mappings between pretrained word vectors of different languages (Mikolov et al., 2013; Lample et al., 2018; Peng et al., 2020).",
"Since the input embeddings can be generated independently using monolingual corpora only, projection-based methods reduce the supervision required for training and offer a viable solution for low-resource scenarios.",
"Xing et al. (2015) showed that the precision of the learned CLWEs can be improved by constraining the mapping function to be orthogonal, which is formalised as the so-called ℓ2 Orthogonal Procrustes Analysis (OPA): $\operatorname{argmin}_{M \in \mathcal{O}} \|AM - B\|_2$ (1), where M is the CLWE mapping, $\mathcal{O}$ denotes the orthogonal manifold (aka. the Stiefel manifold (Chu and Trendafilov, 2001)), and A and B are matrices composed using vectors from the source and target embedding spaces.",
"While Xing et al. (2015) exploited an approximate and relatively slow gradient-based solver, more recent approaches such as Artetxe et al. (2016) and Smith et al. (2017) introduced an exact closed-form solution for Eq. (1).",
"Originally proposed by Schönemann (1966), it utilises Singular Value Decomposition (SVD): $M^\star = UV^\top$, with $U \Sigma V^\top = \mathrm{SVD}(A^\top B)$ (2), where $M^\star$ denotes the ℓ2-optimal mapping matrix.",
"The efficiency and effectiveness of Eq. (2) have led to its application within many other approaches, e.g., Ruder et al. (2018), Joulin et al. (2018) and Glavaš et al. (2019).",
"In particular, PROC-B (Glavaš et al., 2019), a supervised CLWE framework that simply applies multiple iterations of ℓ2 OPA, has been demonstrated to produce very competitive performance on various benchmark tasks including BLI as well as cross-lingual transfer for NLI and information retrieval.",
"While the aforementioned approaches still require some weak supervision (i.e., seed dictionaries), there have also been some successful attempts to train CLWEs in a completely unsupervised fashion.",
"For instance, Lample et al. (2018) proposed a system called MUSE , which bootstraps CLWEs without any bilingual signal through adversarial learning.",
"VECMAP (Artetxe et al., 2018a) applied a self-learning strategy to iteratively compute the optimal mapping and then retrieve a bilingual dictionary.",
"Comparing MUSE and VECMAP, the latter tends to be more robust, as its similarity-matrix-based heuristic initialisation is more stable in most cases (Glavaš et al., 2019; Ruder et al., 2019).",
"Very recently, some studies bootstrapped unsupervised CLWEs by jointly training word embeddings on concatenated corpora of different languages and achieved good performance (Wang et al., 2020).",
"The ℓ2 refinement algorithm.",
"CLWE models often apply ℓ2 refinement, a post-processing step shown to improve the quality of the initial alignment (see Ruder et al. (2019) for a survey).",
"Given existing CLWEs {X_{LA}, X_{LB}} for languages LA and LB, one can bidirectionally use approaches such as the classic nearest-neighbour algorithm, the inverted softmax (Smith et al., 2017) and the cross-domain similarity local scaling (CSLS) (Lample et al., 2018) to retrieve two bilingual dictionaries D_{LA↦LB} and D_{LB↦LA}.",
"Note that word pairs in D_{LA↦LB} ∩ D_{LB↦LA} are highly reliable, as they form mutual translations.",
"Next, one can compose bilingual embedding matrices A and B by aligning word vectors (rows) using the above word pairs.",
"Finally, a new orthogonal mapping is learned to fit A and B based on least-square regressions, i.e., by performing the ℓ2 OPA described in Eq. (1).",
"Early applications of ℓ2 refinement applied a single iteration, e.g., Vulić and Korhonen (2016).",
"Due to the wide adoption of the closed-form ℓ2 OPA solution (cf. Eq. (2)), recent methods perform multiple iterations.",
"The iterative ℓ2 refinement strategy is an important component of approaches that bootstrap from small or null training lexicons (Artetxe et al., 2018a).",
"However, a single step of refinement is often sufficient to create suitable CLWEs (Lample et al., 2018; Glavaš et al., 2019).",
"A common characteristic of CLWE methods that apply the orthogonality constraint is that they optimise using ℓ2 loss (see Section 2).",
"However, outliers have a disproportionate influence under ℓ2, since the penalty increases quadratically, and this can be particularly problematic with noisy data since the solution can shift towards them (Rousseeuw and Leroy, 1987).",
"The noise and outliers present in real-world word embeddings may affect the performance of ℓ2-loss-based CLWEs.",
"The ℓ1 norm cost function is more robust than ℓ2 loss as it is less affected by outliers (Rousseeuw and Leroy, 1987).",
"Therefore, we propose a refinement algorithm for improving the quality of CLWEs based on ℓ1 loss.",
"This novel method, which we refer to as ℓ1 refinement, is generic and can be applied post-hoc to improve the output of existing CLWE models.",
"To our knowledge, the use of alternatives to ℓ2-loss-based optimisation has never been explored by the CLWE community.",
"$\operatorname{argmin}_{M \in \mathcal{O}} \|AM - B\|_1 = \operatorname{argmin}_{M \in \mathcal{O}} \operatorname{tr}[(AM - B)^\top \operatorname{sgn}(AM - B)]$ (3),",
"where tr(·) returns the matrix trace, sgn(·) is the signum function, and $M \in \mathcal{O}$ denotes that M is subject to the orthogonality constraint.",
"Compared to ℓ2 OPA, which has a closed-form solution, solving Eq. (3) is much more challenging due to the discontinuity of sgn(·).",
"This issue can be addressed by replacing sgn(·) with tanh(β(·)), a smoothing function parameterised by β, such that $\operatorname{argmin}_{M \in \mathcal{O}} \operatorname{tr}[(AM - B)^\top \tanh(\beta(AM - B))]$ (4).",
"Larger values of β lead to closer approximations to sgn(·) but reduce the smoothing effect.",
"This approach has been used in many applications, such as the activation function of long short-term memory networks (Hochreiter and Schmidhuber, 1997).",
"However, in practice, we find that Eq. (4) remains unsolvable in our case with standard gradient-based frameworks, for two reasons.",
"First, β has to be sufficiently large in order to achieve a good approximation of sgn(·).",
"Otherwise, relatively small residuals will be down-weighted during fitting and the objective will become biased towards outliers, just like ℓ2 loss.",
"However, satisfying this requirement (i.e., a large β) causes the activation function tanh(β(·)) to become easily saturated, resulting in an optimisation process that becomes trapped during the early stages.",
"In other words, the optimisation can only reach an unsatisfactory local optimum.",
"Second, the orthogonality constraint (i.e., $M \in \mathcal{O}$) also makes the optimisation more problematic for these methods.",
"We address these challenges by adopting the approach proposed by Trendafilov (2003).",
"This method explicitly encourages the solver to only explore the desired manifold $\mathcal{O}$, thereby reducing the ℓ1 solver's search space and the difficulty of the optimisation problem.",
"We begin by calculating the gradient ∇ w.r.t. the objective in Eq. (4) through matrix differentiation: $\nabla = A^\top (\tanh(Z) + Z \odot \cosh^{-2}(Z))$ (5), where $Z = \beta(AM - B)$ and ⊙ is the Hadamard product.",
"Next, to find the steepest descent direction while ensuring that any M produced is orthogonal, we project ∇ onto $\mathcal{O}$ (see Chu and Trendafilov (2001) for derivation details), yielding $\pi_{\mathcal{O}}(\nabla) := \tfrac{1}{2} M (M^\top \nabla - \nabla^\top M) + (I - MM^\top)\nabla$ (6).",
"Here I is an identity matrix with the shape of M.",
"With Eq. (6) defining the optimisation flow, our ℓ1 loss minimisation problem reduces to an integration problem: $M^\star = M_0 + \int \pi_{\mathcal{O}}(\nabla)\, \mathrm{d}t$ (7), where $M_0$ is a proper initial solution of Eq. (3) (e.g., the ℓ2-optimal mapping obtained via Eq. (2)).",
"Empirically, unlike the aforementioned standard gradient-based methods, by following the established policy of Eq. (6), the optimisation process of Eq. (7) will not violate the orthogonality restriction or get trapped during the early stages.",
"However, this ℓ1 OPA solver requires an extremely small step size to generate reliable solutions (Trendafilov, 2003), making it computationally expensive.",
"Therefore, it is impractical to perform ℓ1 refinement in an iterative fashion like ℓ2 refinement without significant computational resources.",
"Previous work has demonstrated that applying ℓ1-loss-based algorithms from a good initial state can speed up the optimisation.",
"For instance, Kwak (2008) found that feature spaces created by ℓ2 PCA were severely affected by noise.",
"Replacing the cost function with ℓ1 loss significantly reduced this problem, but required expensive linear programming.",
"To reduce the convergence time, Brooks and Jot (2013) exploited the first principal component from the (cid:96) 2 solution as an initial guess.",
"Similarly, when reconstructing corrupted pixel matrices, ℓ2-loss-based results are far from satisfactory; ℓ1 norm estimators can improve the quality, but are too slow to handle large-scale datasets (Aanæs et al., 2002).",
"However, taking the ℓ2 optima as the starting point allowed less biased reconstructions to be learned in an acceptable time (De La Torre and Black, 2003).",
"Inspired by these works, we make use of ℓ1 refinement to carry out post-hoc enhancement of existing CLWEs.",
"Our full pipeline is described in Algorithm 1 (see Section 4.3 for implemented configurations). (On average it takes 3 hours, and up to 12 hours, to perform Eq. (7) on an Intel Core i9-9900K CPU; in comparison, the time required to solve Eq. (2) in each training loop is less than 1 second, and the iterative ℓ2-norm-based training takes 1 to 5 hours in total.)",
"In common with ℓ2 refinement (cf. Section 2), steps 1-4 bootstrap a synthetic dictionary D and compose bilingual word vector matrices A and B which have reliable row-wise correspondence.",
"Taking them as the starting state, in step 5 an identity matrix naturally serves as our initial solution M 0 .",
"During the execution of Eq. (7), we record the ℓ1 loss per iteration and check whether either of the following two stopping criteria has been satisfied: (1) the updated ℓ1 loss exceeds that of the previous iteration; (2) the on-the-fly M has non-negligibly departed from the orthogonal manifold, which is indicated by the maximum value of the disparity matrix: $\max(|M^\top M - I|) > \epsilon$ (8), where ε is a sufficiently small threshold.",
"The resulting $M^\star$ can be used to adjust the word vectors of LA and output the refined CLWEs.",
"A significant advantage of our algorithm is its generality: it is fully independent of the method used for creating the original CLWEs and can therefore be used to enhance a wide range of models, both in supervised and unsupervised settings.",
"In order to demonstrate the generality of our proposed method, we conduct experiments using two groups of monolingual word embeddings trained on very different corpora:",
"Wiki-Embs (Grave et al., 2018): embeddings developed using Wikipedia dumps for a range of ten diverse languages: two Germanic (English | EN , German | DE ), two Slavic (Croatian | HR , Russian | RU ), three Romance (French | FR , Italian | IT , Spanish | ES ) and three non-Indo-European (Finnish | FI from the Uralic family, Turkish | TR from the Turkic family and Chinese | ZH from the Sino-Tibetan family).",
"the WaCKy Crawl of { EN , DE , IT } , the Common Crawl of FI , and the WMT News Crawl of ES .",
"News-Embs are considered to be more challenging for building good quality CLWEs due to the heterogeneous nature of the data, while a considerable portion of the multilingual training corpora for Wiki-Embs are roughly parallel.",
"Following previous studies (Lample et al., 2018; Artetxe et al., 2018a; Zhou et al., 2019; Glava et al., 2019), only the first 200K vocabulary entries are preserved.",
"Glava et al. (2019) provided a systematic evaluation for projection-based CLWE models, demonstrating that three methods (i.e., MUSE , VECMAP , and PROC-B) achieve the most competitive performance.",
"A recent algorithm (JA) by Wang et al. (2020) also reported state-of-the-art results.",
"For comprehensive comparison, we therefore use all these four methods as the main baselines for both supervised and unsupervised settings: MUSE (Lample et al., 2018): an unsupervised CLWE model based on adversarial learning and iterative (cid:96) 2 refinement; VECMAP (Artetxe et al., 2018a): a robust unsupervised framework using a self-learning strategy; PROC-B (Glava et al., 2019): a simple but effective supervised approach to creating CLWEs; JA-MUSE and JA-RCSLS (Wang et al., 2020): a recently proposed Joint-Align ( JA ) Framework, which first initialises CLWEs using joint embedding training, followed by vocabularies reallocation.",
"It then utilises off-the-shelf CLWE methods to improve the alignment in both unsupervised ( JAMUSE ) and supervised ( JA-RCSLS ) settings.",
"In the original implementations, MUSE , PROCB and JA were only trained on Wiki-Embs while VECMAP additionally used News-Embs.",
"Although all baselines reported performance for BLI, they used various versions of evaluation sets, hence previous results are not directly comparable with the ones reposted here.",
"More concretely, the test-sets for MUSE /JA and VECMAP are two different batches of EN -centric dictionaries, while the testset for PROC-B also supports nonEN translations.",
"The CSLS scheme with a neighbourhood size of 10 (CSLS-10) is adopted to build synthetic dictionaries via the input CLWEs.",
"A variable-coefficient ordinary differential equation (VODE) solver 3 was implemented for the system described in Eq.",
"(7).",
"Suggested by Trendafilov (2003), we set the maximum order at 15, the smoothness coefficient in Eq.",
"(5) at 1e8, the threshold (cid:15) in Eq.",
"(8) at 1e-5, and performed the integration with a fixed time interval of 1e-6.",
"An early-stopping design was adopted to ensure computation completed in a reasonable time: in addition to the two default stopping criteria in 3, integration is terminated if (cid:82) d t reaches 5e-3 ( d t is the differentiation term in Eq.",
"(7)).",
"In terms of the tolerance of the VODE solver, we set the absolute tolerance at 1e-7 and the relative tolerance at 1e-5, following the established approach of Kulikov (2013).",
"These tolerance settings show good generality empirically and were used for all tested language pairs, datasets, and models in our experiments.",
"We evaluate the effectiveness of the proposed (cid:96) 1 refinement technique on two benchmarks: Bilingual Lexicon Induction (BLI), the de facto standard for measuring the quality of CLWEs, and a downstream natural language inference task based on cross-lingual transfer.",
"In addition to comparison against state-of-the-art CLWE models, we also report the performance of the single-iteration (cid:96) 2 refinement method which follows steps 1-4 of Algorithm 1 then minimises (cid:96) 2 loss in the final step.",
"To reduce randomness, we executed each model in each setup three times and the average accuracy (ACC, aka. precision at rank 1) is reported.",
"Following Glava et al. (2019), by comparing scores achieved before and after (cid:96) 1 refinement, statistical significance is indicated via the p -value of two-tailed t-tests with Bonferroni correction (Dror et al., 2018) (note that p -values are not recorded for Tab. 2b given the small number of runs).",
"Refining unsupervised baselines.",
"Tab.",
"1a follows the main setup of Lample et al. (2018), who tested six language pairs using Wiki-Embs 4 .",
"After (cid:96) 1 refinement, MUSE(cid:96) 1 , JA-MUSE(cid:96) 1 , and VECMAP(cid:96) 1 all significantly ( p < 0 . 01 ) outperform their corresponding base algorithms, with an average 1.1% performance gain over MUSE , 3 http://www.netlib.org/ode/vode.f 4 Note that we are unable to report the result of English to Esperanto as the corresponding dictionary is missing, see https://git.io/en-eo-dict-issue .",
"(b) News-Embs (setup of Artetxe et al. (2018a)).",
"1.1% over JA-MUSE , and 0.5% over VECMAP .",
"To put these improvements in context, Heyman et al. (2019) reported an improvement of 0.4% for VECMAP on same dataset and language pairs.",
"Our method tends to work better on the more distant language pairs.",
"For instance, for the distant pairs EN { RU , ZH } , the increments achieved by MUSE(cid:96) 1 are 1.6% and 1.3%, respectively; whereas for the close pairs EN { DE , ES , FR } the average gain is a maximum of 0.9%.",
"A similar trend can be observed for JA-MUSE(cid:96) 1 and VECMAP(cid:96) 1 .",
"(As the VECMAP algorithm always collapses for EN ZH , no result is reported for this language pair).",
"Another set of experiments were conducted to evaluate the robustness of our algorithm following the main setup of Artetxe et al. (2018a), who tested four language pairs based on the more homogeneous News-Embs.",
"Tab.",
"1b shows that JAMUSE(cid:96) 1 and VECMAP(cid:96) 1 consistently improves the original VECMAP with an average gain of 1.2% and 1.0% ( p <0.01).",
"Obtaining such substantial improvements over the state-of-the-art is nontrivial.",
"For example, even a very recent weakly supervised method by Wang et al. (2019) is inferior to VECMAP by 1.0% average ACC.",
"On the other hand, MUSE fails to produce any analysable result as it always collapses on the more challenging News-Embs.",
"Improvement with (cid:96) 1 refinement is also larger when language pairs are more distant, e.g., for VECMAP(cid:96) 1 the ACC gain on EN-FI is 1.8%, more than double of the gain (0.7%) on the close pairs EN { DE , IT } (cf. Tab. 1a and above).",
"We also conduct an ablation study by reporting the performance of (cid:96) 2 refinement scheme ( { MUSE , JAMUSE , VECMAP } (cid:96) 2 ).",
"This observation is in accordance with that of Lample et al. (2018), who reported that after performing (cid:96) 2 refinement in the first loop, applying further iterations only produces marginal precision gain, if any.",
"Overall, the (cid:96) 1 refinement consistently and significantly improve the CLWEs produced by base algorithms, regardless of the embeddings and setups used, thereby demonstrating the effectiveness and robustness of the proposed algorithm.",
"Refining supervised baselines.",
"To test the gen-eralisability of our method, we also applied it on state-of-the-art supervised CLWE models: PROCB (Glava et al., 2019) and JA-RCSLS (Wang et al., 2020).",
"Following the setup of Glava et al. (2019), we learn mappings using Wiki-Embs and 1K training splits of their dataset.",
"Their evaluation code retrieves bilingual word pairs using the classic nearest-neighbour algorithm and outputs the Mean Reciprocal Rank (MRR).",
"As shown in Tab.",
"2a, both JA-RCSLS(cid:96) 1 and PROCB(cid:96) 1 outperform the baseline algorithms for all Unsupervised DE IT DE TR FI HR FI IT HR RU IT FR TR ITICP 44.7 21.5 20.8 26.3 30.9 62.9 24.3 GWA 44.0 10.1 00.9 17.3 00.1 65.5 14.2 MUSE 49.6 23.7 22.8 32.7 00.0 66.2 30.6 MUSE(cid:96) 2 50.3 23.9 23.1 32.7 34.9 67.1 *30.5* MUSE(cid:96) 1 50.7 26.5 25.4 35.0 37.9 67.6 *33.3* JA-MUSE 50.9 25.6 23.4 34.9 36.9 68.3 34.7 JA-MUSE(cid:96) 2 50.9 25.5 23.4 34.7 36.9 68.4 34.7 JA-MUSE(cid:96) 1 51.5 28.4 26.1 36.0 37.6 68.7 36.1 VECMAP 49.3 25.3 28.0 35.5 37.6 66.7 33.2 VECMAP(cid:96) 2 48.8 25.7 28.5 35.8 38.4 67.0 33.5 VECMAP(cid:96) 1 50.1 28.2 30.3 37.1 40.1 67.6 35.9 Supervised DLV 42.0 16.7 18.4 24.4 26.4 58.5 20.9 RCSLS 45.3 20.1 21.4 27.2 29.1 63.7 24.6 JA-RSCLS 46.6 20.9 22.1 29.0 29.9 65.2 25.3 JA-RSCLS(cid:96) 2 46.4 20.8 22.3 29.0 29.8 65.2 25.3 JA-RSCLS(cid:96) 1 47.3 22.2 23.8 30.1 31.2 65.9 26.6 PROC-B 50.7 25.0 26.3 32.8 34.8 66.5 29.8 PROC-B(cid:96) 2 50.0 24.1 25.6 31.8 34.3 66.4 29.6 PROC-B(cid:96) 1 51.1 25.6 26.9 33.6 35.0 67.4 30.5 Table 3: MRR (%) of BLI for nonEN language pairs.",
"language pairs (with the exception of EN IT where the score of PROC-B is unchanged) with an average improvement of 0.9% and 0.5%, respectively ( p <0.01).",
"JA-RCSLS(cid:96) 1 and PROC-B(cid:96) 1 were also tested using News-Embs with results shown in Tab.",
"2b 5 .",
"(cid:96) 1 refinement achieves an impressive improvement for both close ( EN { DE , IT } ) and distant ( EN FI ) language pairs: average gain of 1.9% and 3.9% respectively and over 5% for EN DE (PROC-B(cid:96) 1 ) in particular.",
"The (cid:96) 2 refinement does not benefit the supervised baseline, similar to the lack of improvement observed in the unsupervised setups.",
"Comparison of unsupervised and supervised settings.",
"This part provides a comparison of the effectiveness of (cid:96) 1 refinement in unsupervised and supervised scenarios.",
"Unlike previous experiments where only alignments involving English were investigated, these tests focus on nonEN setups.",
"Glava et al. (2019)'s dataset is used to construct seven representative pairs which cover every category of etymological combination, i.e., intra-language-branch { HR RU , IT FR } , inter-language-branch { DE IT } , and inter-language-family { DE TR , FI HR , FI IT , TR IT } .",
"The 1K training splits are used as seed lexicons in supervised runs.",
"Apart 5 Note that results for EN ES is not included, as no EN ES dictionary is provided in Glava et al. (2019)'s dataset.",
"from our main baselines, we further report the results of several other competitive CLWE models: Iterative Closest Point Model (ICP, Hoshen and Wolf, 2018), Gromov-Wasserstein Alignment Model (GWA, Alvarez-Melis and Jaakkola, 2018), Discriminative Latent-Variable Model (DLV, Ruder et al., 2018) and Relaxed CSLS Model (RCSLS, Joulin et al., 2018).",
"Results shown in Tab.",
"3 demonstrate that the main baselines (MUSE , JA-MUSE , VECMAP , JA-RCSLS, and PROC-B) outperform these other models by a large margin.",
"For all these main baselines, post applying (cid:96) 1 refinement improves the mapping quality for all language pairs ( p < 0 . 01 ), with average improvements of 1.7%, 1.4%, 1.8%, 1.1%, and 0.8%, respectively.",
"Consistent with findings in the previous experiments, (cid:96) 2 refinement does not enhance performance.",
"Improvement with (cid:96) 1 refinement is higher when language pairs are more distant, e.g., for all inter-language-family pairs such as FI HR and TR IT , even the minimum improvement of MUSE(cid:96) 1 over MUSE is 2.3%.",
"Comparing unsupervised and supervised approaches, it can be observed that MUSE , JA-MUSE and VECMAP achieve higher overall gain with (cid:96) 1 refinement than JA-RCSLS and PROC-B, where JA-MUSE(cid:96) 1 and VECMAP(cid:96) 1 give the best overall performance.",
"One possible explanation to this phenomenon is that there is only a single source of possible noise in unsupervised models (i.e. the embedding topology) but for supervised methods noise can also be introduced via the seed lexicons.",
"Consequently unsupervised approaches drive more benefit from (cid:96) 1 refinement, which reduces the influence of topological outliers in CLWEs.",
"Topological behaviours of (cid:96) 1 and (cid:96) 2 refinements.",
"To validate our assumption that (cid:96) 2 refinement is more sensitive to outliers while its (cid:96) 1 counterpart is more robust, we analyse how each refinement strategy changes the distance between bilingual word vector pairs in the synthetic dictionary D (cf. Algorithm 1) constructed from trained CLWE models.",
"Specifically, for each word vector pair we subtract its post-refinement distance from the original distance (i.e., without applying additional (cid:96) 1 or (cid:96) 2 refinement step).",
"Fig. 1 shows visualisation examples for three algorithms and language pairs, where each bar represents one word pair.",
"It can be observed that (cid:96) 1 refinement effectively reduces the distance for most word pairs, regardless of their original distance (i.e., indicated by bars with negative values in the figures).",
"The conventional (cid:96) 2 refinement strategy, in contrast, exhibits very different behaviour and tends to be overly influenced by word pairs with large distance (i.e. by outliers).",
"The reason for this is that the (cid:96) 2 -norm penalty increases quadratically, causing the solution to put much more weight on optimising distant word pairs (i.e., word pairs on the right end of the X-axis show sharp distance decrements).",
"This observation is in line with Rousseeuw and Leroy (1987) and explains why (cid:96) 1 loss performs substantially stronger than (cid:96) 2 loss in the refinement.",
"Case study.",
"After aligning EN-RU embeddings with unsupervised MUSE , we measured the distance between vectors corresponding to the ground-truth dictionary of Lample et al. (2018) (cf. Fig. 1a).",
"We then detected large outliers by finding vector pairs whose distance falls above Q3 + 1 .",
"5 (Q3 Q1) , where Q1 and Q3 respectively denote the lower and upper quartile based on the popular InterQuartile Range (Hoaglin et al., 1986).",
"We found that many of the outliers correspond to polysemous entries, such as { state (2 noun meanings and 1 verb meaning), (only means status ) } , { type (2 nominal meanings and 1 verb mean-ing), (only means kind ) } , and { film (5 noun meanings), (only means movie ) } .",
"We then Unsupervised EN DE EN FR EN RU EN TRICP 58.0 51.0 57.2 40.0 GWA 42.7 38.3 37.6 35.9 MUSE 61.1 53.6 36.3 35.9 MUSE(cid:96) 2 61.1 53.0 *57.3* *48.9* MUSE(cid:96) 1 63.5 55.3 *58.9* *52.3* JA-MUSE 61.3 55.2 58.1 55.0 JA-MUSE(cid:96) 2 61.2 55.2 57.6 55.1 JA-MUSE(cid:96) 1 62.9 57.9 59.4 57.5 VECMAP 60.4 61.3 58.1 53.4 VECMAP(cid:96) 2 60.3 60.6 57.7 53.5 VECMAP(cid:96) 1 61.5 63.7 60.1 56.4 Supervised RCSLS 37.6 35.7 37.8 38.7 JA-RSCLS 50.2 48.9 51.0 51.7 JA-RSCLS(cid:96) 2 50.4 48.6 50.9 51.5 JA-RSCLS(cid:96) 1 51.3 50.1 53.2 52.6 PROC-B 61.3 54.3 59.3 56.8 PROC-B(cid:96) 2 61.0 54.8 58.9 55.1 PROC-B(cid:96) 1 62.1 54.8 60.7 58.2 Table 4: ACC (%) of NLI.",
"re-perform (cid:96) 2 -based mapping after removing these vector pairs, observing that the accuracy jumps to 45.9% (cf.",
"the original (cid:96) 2 -norm alignment it is 43.8% and after (cid:96) 1 refinement it is 45.6%, cf.",
"Tab.",
"1).",
"This indicates that although all baselines already make use of preprocessing steps including vector normalization, outlier issues still exist and harms the (cid:96) 2 norm CLWEs.",
"However, they can be alleviated by the proposed (cid:96) 1 refinement technique.",
"Finally, we experimented with a downstream NLI task in which the aim is to determine whether a hypothesis is true ( entailment ), false ( contradiction ) or undetermined ( neutral ), given a premise.",
"Higher ACC indicates better encoding of semantics in the tested embeddings.",
"The CLWEs used are those trained with Wiki-Embs for BLI.",
"For MUSE , JA-MUSE and VECMAP , we also obtain CLWEs for EN TR pair with the same configuration.",
"Following Glava et al. (2019), we first train the Enhanced Sequential Inference Model (Chen et al., 2017) based on the large-scale English MultiNLI corpus (Williams et al., 2018) using vectors of language LA ( EN ) from an aligned bilingual embedding space (e.g., EN DE ).",
"Next, we replace the LA vectors with the vectors of language LB (e.g., DE ), and directly test the trained model on the language LB portion of the XNLI corpus (Conneau et al., 2018).",
"by our algorithm yield the highest ACC for all language pairs in both supervised and unsupervised settings.",
"The (cid:96) 2 refinement, on the contrary, is not beneficial overall.",
"Improvements in cross-lingual transfer for NLI exhibit similar trends to those in the BLI experiments, i.e. greater performance gain for unsupervised methods and more distant language pairs, consistent with previous observations (Glava et al., 2019).",
"For instance, MUSE(cid:96) 1 JAMUSE(cid:96) 1 and VECMAP(cid:96) 1 outperform their baselines by at least 2% in ACC on average ( p < 0 . 01 ), whereas the improvements of JA-RSCLS(cid:96) 1 and PROC-B(cid:96) 1 over their corresponding base methods are 2% and 2.1% respectively ( p < 0 . 01 ).",
"For both unsupervised and supervised methods, (cid:96) 1 refinement demonstrates stronger effect for more distant language pairs, e.g., MUSE(cid:96) 1 surpasses MUSE by 1.2% for EN FR , whereas a more impressive 2.7% gain is achieved for EN TR .",
"In summary, in addition to improving BLI performance, our (cid:96) 1 refinement method also produces a significant improvement for a downsteam task (NLI), demonstrating its effectiveness in improving the CLWE quality.",
"This paper proposes a generic post-processing technique to enhance CLWE performance based on optimising (cid:96) 1 loss.",
"This algorithm is motivated by successful applications in other research fields (e.g. computer vision and data mining) which exploit the (cid:96) 1 norm cost function since it has been shown to be more robust to noisy data than the commonly-adopted (cid:96) 2 loss.",
"The approach was evaluated using ten diverse languages and word embeddings from different domains on the popular BLI benchmark, as well as a downstream task of cross-lingual transfer for NLI.",
"Results demonstrated that our algorithm can significantly improve the quality of CLWEs in both supervised and unsupervised setups.",
"It is therefore recommended that this straightforward technique be applied to improve performance of CLWEs.",
"The convergence speed of the optimiser prevented us from performing (cid:96) 1 loss optimisation over multiple iterations.",
"Future work will focus on improving the efficiency of our (cid:96) 1 OPA solver, as well as exploring the application of other robust loss functions within CLWE training strategies.",
"This work provides an effective post-hoc method to improve CLWEs, advancing the state-of-the-art in both supervised and unsupervised settings.",
"Our comprehensive empirical studies demonstrate that the proposed algorithm can facilitate researches in machine translation, cross-lingual transfer learning, etc, which have deep societal impact of bridging cultural gaps across the world.",
"Besides, this paper introduces and solves an optimisation problem based on an under-explored robust cost function, namely (cid:96) 1 loss.",
"We believe it could be of interest for the wider community as outlier is a long-standing issue in many artificial intelligence applications.",
"One caveat with our method, as is the case for all word-embedding-based systems, is that various biases may exist in vector spaces.",
"We suggest this problem should always be looked at critically.",
"In addition, our implemented solver can be computationally expensive, leading to increased electricity consumption and the associated negative environmental repercussions.",
"This work is supported by the award made by the UK Engineering and Physical Sciences Research Council (Grant number: EP/P011829/1) and Baidu, Inc.",
"We would also like to express our sincerest gratitude to Guanyi Chen, Ruizhe Li, Xiao Li, Shun Wang, and the anonymous reviewers for their insightful and helpful comments."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"objective",
"method",
"objective",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"other"
] |
[
"Recently, sentiment analysis has seen remarkable advance with the help of pre-training approaches.",
"However, sentiment knowledge, such as sentiment words and aspect-sentiment pairs, is ignored in the process of pre-training, despite the fact that they are widely used in traditional sentiment analysis approaches.",
"In this paper, we introduce Sentiment Knowledge Enhanced Pre-training (SKEP) in order to learn a unified sentiment representation for multiple sentiment analysis tasks.",
"With the help of automatically-mined knowledge, SKEP conducts sentiment masking and constructs three sentiment knowledge prediction objectives, so as to embed sentiment information at the word, polarity and aspect level into pre-trained sentiment representation.",
"In particular, the prediction of aspect-sentiment pairs is converted into multi-label classification, aiming to capture the dependency between words in a pair.",
"Experiments on three kinds of sentiment tasks show that SKEP significantly outperforms strong pre-training baseline, and achieves new state-of-the-art results on most of the test datasets.",
"We release our code at https://github.com/baidu/Senta .",
"Sentiment analysis refers to the identification of sentiment and opinion contained in the input texts that are often user-generated comments.",
"In practice, sentiment analysis involves a wide range of specific tasks (Liu, 2012), such as sentence-level sentiment classification, aspect-level sentiment classification, opinion extraction and so on.",
"Traditional methods often study these tasks separately and design specific models for each task, based on manually-designed features (Liu, 2012) or deep learning (Zhang et al., 2018).",
"Recently, pre-training methods (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019) have shown their powerfulness in learning general semantic representations, and have remarkably improved most natural language processing (NLP) tasks like sentiment analysis.",
"These methods build unsupervised objectives at word-level, such as masking strategy (Devlin et al., 2019), next-word prediction (Radford et al., 2018) or permutation (Yang et al., 2019).",
"Such word-prediction-based objectives have shown great abilities to capture dependency between words and syntactic structures (Jawahar et al., 2019).",
"However, as the sentiment information of a text is seldom explicitly studied, it is hard to expect such pre-trained general representations to deliver optimal results for sentiment analysis (Tang et al., 2014).",
"Sentiment analysis differs from other NLP tasks in that it deals mainly with user reviews other than news texts.",
"There are many specific sentiment tasks, and these tasks usually depend on different types of sentiment knowledge including sentiment words, word polarity and aspect-sentiment pairs.",
"The importance of these knowledge has been verified by tasks at different level, for instance, sentence-level sentiment classification (Taboada et al., 2011; Shin et al., 2017; Lei et al., 2018), aspect-level sentiment classification (Vo and Zhang, 2015; Zeng et al., 2019), opinion extraction (Li and Lam, 2017; Gui et al., 2017; Fan et al., 2019) and so on.",
"Therefore, we assume that, by integrating these knowledge into the pre-training process, the learned representation would be more sentiment-specific and appropriate for sentiment analysis.",
"In order to learn a unified sentiment representation for multiple sentiment analysis tasks, we propose Sentiment Knowledge Enhanced Pre-training (SKEP), where sentiment knowledge about words, polarity, and aspect-sentiment pairs are included to guide the process of pre-training.",
"The sentiment knowledge is first automatically mined from unlabeled data (Section 3.1).",
"With the knowledge 4068 Transformer Encoder [MASK] came this [MASK] and really [MASK] it I x 3 x 4 x 6 x 7 x 5 x 9 x 10 x 8 product fast appreciated !\"#$% & product came this fast and really appreiated it I & aspect-sentiment pair sentiment word SentimentPrediction x 2 x 1 !\"#$% !",
"mined, sentiment masking (Section 3.2) removes sentiment information from input texts.",
"Then, the pre-training model is trained to recover the sentiment information with three sentiment objectives (Section 3.3).",
"SKEP integrates different types of sentiment knowledge together and provides a unified sentiment representation for various sentiment analysis tasks.",
"This is quite different from traditional sentiment analysis approaches, where different types of sentiment knowledge are often studied separately for specific sentiment tasks.",
"To the best of our knowledge, this is the first work that has tackled sentiment-specific representation during pretraining.",
"Overall, our contributions are as follows: We propose sentiment knowledge enhanced pre-training for sentiment analysis, which provides a unified sentiment representation for multiple sentiment analysis tasks.",
"Three sentiment knowledge prediction objectives are jointly optimized during pre-training so as to embed sentiment words, polarity, aspect-sentiment pairs into the representation.",
"In particular, the pair prediction is converted into multi-label classification to capture the dependency between aspect and sentiment.",
"SKEP significantly outperforms the strong pre-training methods RoBERTa (Liu et al., 2019) on three typical sentiment tasks, and achieves new state-of-the-art results on most of the test datasets.",
"BERT (Devlin et al., 2019) is a self-supervised representation learning approach for pre-training a deep transformer encoder (Vaswani et al., 2017).",
"BERT constructs a self-supervised objective called masked language modeling (MLM) to pre-train the transformer encoder, and relies only on large-size unlabeled data.",
"With the help of pre-trained transformer, downstream tasks have been substantially improved by fine-tuning on task-specific labeled data.",
"We follow the method of BERT to construct masking objectives for pre-training.",
"BERT learns a transformer encoder that can produce a contextual representation for each token of input sequences.",
"In reality, the first token of an input sequence is a special classification token [CLS] .",
"In fine-tuning step, the final hidden state of [CLS] is often used as the overall semantic representation of the input sequence.",
"In order to train the transformer encoder, MLM is proposed.",
"Similar to doing a cloze test, MLM predicts the masked token in a sequence from their placeholder.",
"Specifically, parts of input tokens are randomly sampled and substituted.",
"BERT uniformly selects 15% of input tokens.",
"Of these sampled tokens, 80% are replaced with a special masked token [MASK] , 10% are replaced with a random token, 10% are left unchanged.",
"After the construction of this noisy version, the MLM aims to predict the original tokens in the masked positions using the corresponding final states.",
"Most recently, RoBERTa (Liu et al., 2019) significantly outperforms BERT by robust opti-4069 mization without the change of neural structure, and becomes one of the best pre-training models.",
"RoBERTa also removes the next sentence prediction objective from standard BERT.",
"To verify the effectiveness of our approach, this paper uses RoBERTa as a strong baseline.",
"We propose SKEP, Sentiment Knowledge Enhanced Pre-training, which incorporates sentiment knowledge by self-supervised training.",
"As shown in Figure 1, SKEP contains sentiment masking and sentiment pre-training objectives.",
"Sentiment masking (Section 3.2) recognizes the sentiment information of an input sequence based on automatically-mined sentiment knowledge (Section 3.1), and produces a corrupted version by removing this information.",
"Three sentiment pre-training objectives (Section 3.3) require the transformer to recover the sentiment information for the corrupted version.",
"Formally, sentiment masking constructs a corrupted version (cid:101) X for an input sequence X guided by sentiment knowledge G .",
"x i and (cid:101) x i denote the i -th token of X and (cid:101) X respectively.",
"After masking, a parallel data ( (cid:101) X, X ) is obtained.",
"Thus, the transformer encoder can be trained with sentiment pre-training objectives that are supervised by recovering sentiment information using the final states of encoder (cid:101) x 1 , ..., (cid:101) x n .",
"SKEP mines the sentiment knowledge from unlabeled data.",
"As sentiment knowledge has been the central subject of extensive research, SKEP finds a way to integrate former technique of knowledge mining with pre-training.",
"This paper uses a simple and effective mining method based on Pointwise Mutual Information (PMI) (Turney, 2002).",
"PMI method depends only on a small number of sentiment seed words and the word polarity WP( s ) of each seed word s is given.",
"It first builds a collection of candidate word-pairs where each word-pair contains a seed word, and meet with pre-defined part-of-speech patterns as Turney (2002).",
"Then, the co-occurrence of a word-pair is calculated by PMI as follows: PMI( w 1 , w 2 ) = log p ( w 1 , w 2 ) p ( w 1 ) p ( w 2 ) (1) Here, p ( . ) denotes probability estimated by count.",
"Finally, the polarity of a word is determined by the difference between its PMI scores with all positive seeds and that with all negative seeds.",
"WP( w ) = (cid:88) WP( s )=+ PMI( w, s ) (2) (cid:88) WP( s )= PMI( w, s ) If WP( w ) of a candidate word w is larger than 0 , then w is a positive word, otherwise it is negative.",
"After mining sentiment words, aspect-sentiment pairs are extracted by simple constraints.",
"An aspect-sentiment pair refers to the mention of an aspect and its corresponding sentiment word.",
"Thus, a sentiment word with its nearest noun will be considered as an aspect-sentiment pair.",
"The maximum distance between the aspect word and the sentiment word of a pair is empirically limited to no more than 3 tokens.",
"Consequently, the mined sentiment knowledge G contains a collection of sentiment words with their polarity along with a set of aspect-sentiment pairs.",
"Our research focuses for now the necessity of integrating sentiment knowledge in pre-training by virtue of a relatively common mining method.",
"We believe that a more fine-grained method would further improve the quality of knowledge, and this is something we will be exploring in the nearest future.",
"Sentiment masking aims to construct a corrupted version for each input sequence where sentiment information is masked.",
"Our sentiment masking is directed by sentiment knowledge, which is quite different from previous random word masking.",
"This process contains sentiment detection and hybrid sentiment masking that are as follows.",
"Sentiment Detection with Knowledge Sentiment detection recognizes both sentiment words and aspect-sentiment pairs by matching input sequences with the mined sentiment knowledge G .",
"1. Sentiment Word Detection.",
"The word detection is straightforward.",
"If a word of an input sequence also occurs in the knowledge base G , then this word is seen as a sentiment word.",
"2. Aspect-Sentiment Pair Detection.",
"The detection of an aspect-sentiment pair is similar to 4070 its mining described before.",
"A detected sentiment word and its nearby noun word are considered as an aspect-sentiment pair candidate, and the maximum distance of these two words is limited to 3 .",
"Thus, if such a candidate is also found in mined knowledge G , then it is considered as an aspect-sentiment pair.",
"Hybrid Sentiment Masking Sentiment detection results in three types of tokens for an input sequence: aspect-sentiment pairs, sentiment words and common tokens.",
"The process of masking a sequence runs in following steps:",
"1. Aspect-sentiment Pair Masking.",
"At most 2 aspect-sentiment pairs are randomly selected to mask.",
"All tokens of a pair are replaced by [MASK] simultaneously.",
"This masking provides a way for capturing the combination of an aspect word and a sentiment word.",
"2. Sentiment Word Masking.",
"For those unmasked sentiment words, some of them are randomly selected and all the tokens of them are substituted with [MASK] at the same time.",
"The total number of tokens masked in this step is limited to be less than 10%",
".",
"3. Common Token Masking.",
"If the number of tokens in step 2 is insufficient, say less than 10% , this would be filled during this step with randomly-selected tokens.",
"Here, random token masking is the same as RoBERTa.",
"1 3.3 Sentiment Pre-training Objectives Sentiment masking produces corrupted token sequences (cid:101) X , where their sentiment information is substituted with masked tokens.",
"Three sentiment objectives are defined to tell the transformer encoder to recover the replaced sentiment information.",
"The three objectives, Sentiment Word (SW) prediction L sw , Word Polarity (WP) prediction L wp and Aspect-sentiment Pair (AP) prediction L ap are jointly optimized.",
"Thus, the overall pretraining objective L is: L = L sw + L wp + L ap (3) 1 For each sentence, we would always in total mask 10% of its tokens at step 2 and",
"3. Among these masked tokens, 79.9% are sentiment words (during step 2) and 20.1% are common words (during step 3) in our experiment.",
"Sentiment Word Prediction Sentiment word prediction is to recover the masked tokens of sentiment words using the output vector (cid:101) x i from transformer encoder.",
"(cid:101) x i is fed into an output softmax layer, which produces a normalized probability vector y i over the entire vocabulary.",
"In this way, the sentiment word prediction objective L sw is to maximize the probability of original sentiment word x i as follows: y i = softmax( (cid:101) x i W + b ) (4) L sw = i = n (cid:88) i =1 m i y i log y i (5) Here, W and b are the parameters of the output layer.",
"m i = 1 if i -th position of a sequence is masked sentiment word 2 , otherwise it equals to 0 .",
"y i is the one-hot representation of the original token x i .",
"Regardless of a certain similarity to MLM of BERT, our sentiment word prediction has a different purpose.",
"Instead of predicting randomly masking tokens, this sentiment objective selects those sentiment words for self-supervision.",
"As sentiment words play a key role in sentiment analysis, the representation learned here is expected to be more suitable for sentiment analysis.",
"Word Polarity Prediction Word polarity is crucial for sentiment analysis.",
"For example, traditional lexicon-based model (Turney, 2002) directly utilizes word polarity to classify the sentiment of texts.",
"To incorporate this knowledge into the encoder, an objective called word polarity prediction L wp is further introduced.",
"L wp is similar to L sw .",
"For each masked sentiment token (cid:101) x i , L wp calculated its polarity (positive or negative) using final state (cid:101) x i .",
"Then the polarity of target corresponds to the polarity of the original sentiment word, which can be found from the mined knowledge.",
"Aspect-sentiment Pair Prediction Aspect sentiment pairs reveal more information than sentiment words do.",
"Therefore, in order to capture the dependency between aspect and sentiment, an aspect-sentiment pair objective is proposed.",
"Especially, words in a pair are not mutually exclusive.",
"This is quite different from BERT, which assumes tokens can be independently predicted.",
"2 In sentiment masking, we add common tokens to make up for the deficiency of masked tokens of sentiment words.",
"L sw also calculates these common tokens, while L wp does not includes them.",
"We thus conduct aspect-sentiment pair prediction with multi-label classification.",
"We use the final state of classification token [CLS] , which denotes representation of the entire sequence, to predict pairs.",
"sigmoid activation function is utilized, which allows multiple tokens to occur in the output at the same time.",
"The aspect-sentiment pair objective L ap is denoted as follows: y a = sigmoid( (cid:101) x 1 W ap + b ap ) (6) L ap = a = A (cid:88) a =1 y a log y a (7) Here, x 1 denotes the output vector of [CLS] .",
"A is the number of masked aspect-sentiment pairs in a corrupted sequence.",
"y a is the word probability normalized by sigmoid .",
"y a is the sparse representation of a target aspect-sentiment pair.",
"Each element of y a corresponds to one token of the vocabulary, and equals to 1 if the target aspect-sentiment pair contains the corresponding token.",
"3 As there are multiple elements of y a equals to 1 , the predication here is multi-label classification.",
"4 4 Fine-tuning for Sentiment Analysis We verify the effectiveness of SKEP on three typical sentiment analysis tasks: sentence-level sentiment classification, aspect-level sentiment classification, and opinion role labeling.",
"On top of the pre-trained transformer encoder, an output layer is added to perform task-specific prediction.",
"The neural network is then fine-tuned on task-specific labeled data.",
"Sentence-level Sentiment Classification This task is to classify the sentiment polarity of an input sentence.",
"The final state vector of classification token [CLS] is used as the overall representation of an input sentence.",
"On top of the transformer encoder, a classification layer is added to calculate the sentiment probability based on the overall representation.",
"Aspect-level Sentiment Classification This task aims to analyze fine-grained sentiment for an aspect when given a contextual text.",
"Thus, there are two parts in the input: aspect description and 3 This means that the dimension of y a equals to the vocabulary size of pre-training method, which is 50265 in our experiment.",
"4 It is possible to predict masked pairs with CRF-layer.",
"However, it is more than 10-times slower than multi-label classification, thus could not be used in pre-training.",
"contextual text.",
"These two parts are combined with a separator [SEP] , and fed into the transformer encoder.",
"This task also utilizes the final state of the first token [CLS] for classification.",
"Opinion Role Labeling This task is to detect fine-grained opinion, such as holder and target, from input texts.",
"Following SRL4ORL (Marasovic and Frank, 2018), this task is converted into sequence labeling, which uses BIOS scheme for labeling, and a CRF-layer is added to predict the labels.",
"5 5 Experiment 5.1 Dataset and Evaluation A variety of English sentiment analysis datasets are used in this paper.",
"Table 1 summarizes the statistics of the datasets used in the experiments.",
"These datasets contain three types of tasks: (1) For sentence-level sentiment classification, Standford Sentiment Treebank (SST-2) (Socher et al., 2013) and Amazon-2 (Zhang et al., 2015) are used.",
"In Amazon-2, 400 k of the original training data are reserved for development.",
"The performance is evaluated in terms of accuracy.",
"(2) Aspect-level sentiment classification is evaluated on Semantic Eval 5 All the pretraining models, including our SKEP and baselines use CRF-Layer here, thus their performances are comparable.",
"2014 Task4 (Pontiki et al., 2014).",
"This task contains both restaurant domain and laptop domain, whose accuracy is evaluated separately.",
"(3) For opinion role labeling, MPQA 2.0 dataset (Wiebe et al., 2005; Wilson, 2008) is used.",
"MPQA aims to extract the targets or the holders of the opinions.",
"Here we follow the method of evaluation in SRL4ORL (Marasovic and Frank, 2018), which is released and available online.",
"4-folder cross-validation is performed, and the F-1 scores of both holder and target are reported.",
"To perform sentiment pre-training of SKEP, the training part of Amazon-2 is used, which is the largest dataset among the list in Table",
"1. Notably, the pre-training only uses raw texts without any sentiment annotation.",
"To reduce the dependency on manually-constructed knowledge and provide SKEP with the least supervision, we only use 46 sentiment seed words.",
"Please refers to the appendix for more details about seed words.",
"We use RoBERTa (Liu et al., 2019) as our baseline, which is one of the best pre-training models.",
"models.",
"Both base and large versions of RoBERTa are used.",
"RoBERTa base and RoBERTa large contain 12 and 24 transformer layers respectively.",
"As the pre-training method is quite costly in term of GPU resources, most of the experiments are done on RoBERTa base , and only the main results report the performance on RoBERTa large .",
"For SKEP, the transformer encoder is first initialized with RoBERTa, then is pre-trained on sentiment unlabeled data.",
"An input sequence is truncated to 512 tokens.",
"Learning rate is kept as 5 e 5 , and batch-size is 8192 .",
"The number of epochs is set to",
"3. For the fine-tuning of each dataset, we run 3 times with random seeds for each combination of parameters (Table 2), and choose the medium checkpoint for testing according to the performance on the development set.",
"We compare our SKEP method with the strong pretraining baseline RoBERTa and previous SOTA.",
"The result is shown in Table",
"3. Comparing with RoBERTa, SKEP significantly and consistently improves the performance on both 4073 From Model Sentence Samples Prediction SST-2 RoBERTa altogether , this is (cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58) successful as a film , while at the same time being a most touching reconsideration of the familiar (cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58) masterpiece .",
"base and large settings.",
"Even on RoBERTa large , SKEP achieves an improvement of up to 2 .",
"4 points.",
"According to the task types, SKEP achieves larger improvements on fine-grained tasks, aspect-level classification and opinion role labeling, which are supposed to be more difficult than sentence-level classification.",
"We think this owes to the aspect-sentiment knowledge that is more effective for these tasks.",
"Interestingly, RoBERTa base + SKEP always outperforms RoBERTa large , except on Amazon-2.",
"As the large version of RoBERTa is computationally expensive, the base version of SKEP provides an efficient model for application.",
"Compared with previous SOTA, SKEP achieves new state-of-the-art results on almost all datasets, with a less satisfactory result only on SST-2.",
"Overall, through comparisons of various sentiment tasks, the results strongly verify the necessity of incorporating sentiment knowledge for pretraining methods, and also the effectiveness of our proposed sentiment pre-training method.",
"Effect of Sentiment Knowledge SKEP uses an additional sentiment data for further pre-training and utilizes three objectives to incorporate three types of knowledge.",
"Table 4 compares the contributions of these factors.",
"Further pre-training with random sub-word masking of Amazon, Roberta base obtains some improvements.",
"This proves the value of large-size task-specific unlabeled data.",
"However, the improvement is less evident compared with sentiment word masking.",
"This indicates that the importance of sentiment word knowledge.",
"Further improvements are obtained when word polarity and aspect-sentiment pair objectives are added, confirming the contribution of both types of knowledge.",
"Compare +SW+WP+AP with +Random Token, the improvements are consistently significant in all evaluated data and is up to about 1 .",
"5 points.",
"Overall, from the comparison of objectives, we conclude that sentiment knowledge is helpful, and more diverse knowledge results in better performance.",
"This also encourages us to use more types of knowledge and use better mining methods in the future.",
"Effect of Multi-label Optimization Multi-label classification is proposed to deal with the dependency in an aspect-sentiment pair.",
"To confirm the necessity of capturing the dependency of words in the aspect-sentiment pair, we also compare it with the method where the token is predicted independently, which is denoted by AP-I.",
"AP-I uses softmax for normalization, and independently predicts each word of a pair as the sentiment word prediction.",
"According to the last line that contains AP-I in Table 4, predicting words of a pair independently do not hurt the performance of sentence-level classification.",
"This is reasonable as the sentence-level task mainly relies on sentiment words.",
"In contrast, in aspect-level classification and opinion role labeling, multi-label classification is efficient and yields improvement of up to 0.6 points.",
"This denotes that multi-label classification does capture better dependency between aspect and sentiment, and also the necessity of dealing with such dependency.",
"Comparison of Vector for Aspect-Sentiment Pair Prediction SKEP utilizes the sentence rep-4074 resentation, which is the final state of classification token [CLS] , for aspect-sentiment pair prediction.",
"We call this Sent-Vector methods.",
"Another way is to use the concatenation of the final vectors of the two words in a pair, which we call Pair-Vector.",
"As shown in Table 6, the performances of these two decisions are very close.",
"We suppose this dues to the robustness of the pre-training approach.",
"As using a single vector for prediction is more efficient, we use final state of token [CLS] in SKEP.",
"Attention Visualization Table 5 shows the attention distribution of final layer for the [CLS] token when we adopt our SKEP model to classify the input sentences.",
"On the SST-2 example, despite RoBERTa gives a correct prediction, its attention about sentiment is inaccurate.",
"On the Sem-L case, RoBERTa fails to attend to the word amaz-ing, and produces a wrong prediction.",
"In contrast, SKEP produces correct predictions and appropriate attention of sentiment information in both cases.",
"This indicates that SKEP has better interpretability.",
"Sentiment Analysis with Knowledge Various types of sentiment knowledge, including sentiment words, word polarity, aspect-sentiment pairs, have been proved to be useful for a wide range of sentiment",
"sentiment analysis tasks.",
"Sentiment words with their polarity are widely used for sentiment analysis, including sentence-level sentiment classification (Taboada et al., 2011; Shin et al., 2017; Lei et al., 2018; Barnes et al., 2019), aspect-level sentiment classification (Vo and Zhang, 2015), opinion extraction (Li and Lam, 2017), emotion analysis (Gui et al., 2017; Fan et al., 2019) and so on.",
"Lexicon-based method (Turney, 2002; Taboada et al., 2011) directly utilizes polarity of sentiment words for classification.",
"Traditional feature-based approaches encode sentiment word information in manually-designed features to improve the supervised models (Pang et al., 2008; Agarwal et al., 2011).",
"In contrast, deep learning approaches enhance the embedding representation with the help of sentiment words (Shin et al., 2017), or absorb the sentiment knowledge through linguistic regularization (Qian et al., 2017; Fan et al., 2019).",
"Aspect-sentiment pair knowledge is also useful for aspect-level classification and opinion extraction.",
"Previous works often provide weak supervision by this type of knowledge, either for aspect-level classification (Zeng et al., 2019) or for opinion extraction (Yang et al., 2017; Ding et al., 2017).",
"Although studies of exploiting sentiment knowledge have been made throughout the years, most of them tend to build a specific mechanism for each sentiment analysis task, so different knowledge is adopted to support different tasks.",
"Whereas our method incorporates diverse knowledge in pretraining to provide a unified sentiment representation for sentiment analysis tasks.",
"Pre-training Approaches Pre-training methods have remarkably improved natural language processing, using self-supervised training with large scale unlabeled data.",
"This line of research is dramatically advanced very recently, and various types of methods are proposed, including ELMO (Peters et al., 2018), GPT (Radford et al., 2018), BERT (Devlin et al., 2019), XLNet (Yang et al., 2019) and so on.",
"Among them, BERT pre-trains a bidirectional transformer by randomly masked word prediction, and have shown strong performance gains.",
"RoBERTa (Liu et al., 2019) further improves BERT by robust optimization, and become one of the best pre-training methods.",
"Inspired by BERT, some works propose fine-grained objectives beyond random word masking.",
"SpanBERT (Joshi et al., 2019) masks the span of words at the same time.",
"ERNIE (Sun et al., 2019) proposes to mask entity words.",
"On the other hand, pre-training for specific tasks is also studied.",
"GlossBERT (Huang et al., 2019) exploits gloss knowledge to improve word sense disambiguation.",
"SenseBERT (Levine et al., 2019) uses WordNet super-senses to improve word-in-context tasks.",
"A different ERNIE (Zhang et al., 2019) exploits entity knowledge for entity-linking and relation classification.",
"In this paper, we propose Sentiment Knowledge Enhanced Pre-training for sentiment analysis.",
"Sentiment masking and three sentiment pre-training objectives are designed to incorporate various types of knowledge for pre-training model.",
"Thought conceptually simple, SKEP is empirically highly effective.",
"SKEP significantly outperforms strong pre-training baseline RoBERTa, and achieves new state-of-the-art on most datasets of three typical specific sentiment analysis tasks.",
"Our work verifies the necessity of utilizing sentiment knowledge for pre-training models, and provides a unified senti-4075 ment representation for a wide range of sentiment analysis tasks.",
"Zhen-Zhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.",
"2019.",
"Albert: A lite bert for self-supervised learning of language representations.",
"ArXiv , abs/1909.11942.",
"Zeyang Lei, Yujiu Yang, Min Yang, and Yi Liu.",
"2018.",
"A multi-sentiment-resource enhanced attention network for sentiment classification.",
"In ACL 2018 .",
"Yoav Levine, Barak Lenz, Or Dagan, Dan Padnos, Or Sharir, Shai Shalev-Shwartz, Amnon Shashua, and Yoav Shoham.",
"2019.",
"Sensebert: Driving some sense into bert.",
"Xin Li and Wai Lam.",
"2017.",
"Deep multi-task learning for aspect term extraction with memory interaction.",
"In EMNLP 2017 .",
"Bing Liu.",
"2012.",
"Sentiment analysis and opinion mining.",
"In Synthesis Lectures on Human Language Technologies 5.1 (2012): 1-167.",
"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov.",
"2019.",
"Roberta: A robustly optimized bert pretraining approach.",
"arXiv preprint arXiv:1907.11692 .",
"Ana Marasovi c and Anette Frank.",
"2018.",
"SRL4ORL: Improving opinion role labeling using multi-task learning with semantic role labeling.",
"In NAACL 2018 .",
"Bo Pang, Lillian Lee, et al. 2008.",
"Opinion mining and sentiment analysis.",
"Foundations and Trends R (cid:13) in Information Retrieval , 2(12):1135.",
"Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer.",
"2018.",
"Deep contextualized word representations.",
"arXiv preprint arXiv:1802.05365 .",
"Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar.",
"2014.",
"SemEval-2014 task 4: Aspect based sentiment analysis.",
"In SemEval 2014 .",
"Qiao Qian, Minlie Huang, Jinhao Lei, and Xiaoyan Zhu.",
"2017.",
"Linguistically regularized LSTM for sentiment classification.",
"In ACL 2017 .",
"Alec Radford, Karthik Narasimhan, Time Salimans, and Ilya Sutskever.",
"2018.",
"Improving language understanding with unsupervised learning.",
"Technical report, Technical report, OpenAI.",
"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu.",
"2019.",
"Exploring the limits of transfer learning with a unified text-to-text transformer.",
"Alexander Rietzler, Sebastian Stabinger, Paul Opitz, and Stefan Engl.",
"2019.",
"Adapt or get left behind: Domain adaptation through bert language model finetuning for aspect-target sentiment classification.",
"ArXiv , abs/1908.11860.",
"In the future, we hope to apply SKEP on more sentiment analysis tasks, to further see the generalization of SKEP, and we are also interested in exploiting more types of sentiment knowledge and more fine-grained sentiment mining methods.",
"We thanks Qinfei Li for her valuable comments.",
"We also thank the anonymous reviewers for their insightful comments.",
"This work was supported by the National Key Research and Development Project of China (No. 2018AAA0101900)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"result",
"other",
"other",
"other"
] |
[
"A major proportion of a text summary includes important entities found in the original text.",
"These entities build up the topic of the summary.",
"Moreover, they hold commonsense information once they are linked to a knowledge base.",
"Based on these observations, this paper investigates the usage of linked entities to guide the decoder of a neural text summarizer to generate concise and better summaries.",
"To this end, we leverage on an off-the-shelf entity linking system (ELS) to extract linked entities and propose Entity2Topic (E2T) , a module easily attachable to a sequence-to-sequence model that transforms a list of entities into a vector representation of the topic of the summary.",
"Current available ELS's are still not suf-ficiently effective, possibly introducing unresolved ambiguities and irrelevant entities.",
"We resolve the imperfections of the ELS by",
"(a) encoding entities with selective disambiguation, and",
"(b) pooling entity vectors using firm attention.",
"By applying E2T to a simple sequence-to-sequence model with attention mechanism as base model, we see significant improvements of the performance in the Gigaword (sentence to title) and CNN (long document to multi-sentence highlights) summarization datasets by at least 2 ROUGE points.",
"Text summarization is a task to generate a shorter and concise version of a text while preserving the meaning of the original text.",
"The task can be divided into two subtask based on the approach: extractive and abstractive summarization.",
"Extractive summarization is a task to create summaries by pulling out snippets of text form the original text and combining them to form a summary.",
"Abstractive summarization asks to generate summaries from scratch without the restriction to use Amplayo and Lim are co-first authors with equal contribution.",
"Names are arranged alphabetically.",
"the available words from the original text.",
"Due to the limitations of extractive summarization on incoherent texts and unnatural methodology (Yao et al., 2017), the research trend has shifted towards abstractive summarization.",
"Sequence-to-sequence models (Sutskever et al., 2014) with attention mechanism (Bahdanau et al., 2014) have found great success in generating abstractive summaries, both from a single sentence (Chopra et al., 2016) and from a long document with multiple sentences (Chen et al., 2016).",
"However, when generating summaries, it is necessary to determine the main topic and to sift out unnecessary information that can be omitted.",
"Sequence-to-sequence models have the tendency to include all the information, relevant or not, that are found in the original text.",
"This may result to uncon-cise summaries that concentrates wrongly on irrelevant topics.",
"The problem is especially severe when summarizing longer texts.",
"In this paper, we propose to use entities found in the original text to infer the summary topic, miti-697 gating the aforementioned problem.",
"Specifically, we leverage on linked entities extracted by employing a readily available entity linking system.",
"The importance of using linked entities in summarization is intuitive and can be explained by looking at Figure 1 as an example.",
"First ( O1 in the Fig-ure), aside from auxiliary words to construct a sentence, a summary is mainly composed of linked entities extracted from the original text.",
"Second ( O2 ), we can depict the main topic of the summary as a probability distribution of relevant entities from the list of entities.",
"Finally ( O3 ), we can leverage on entity commonsense learned from a separate large knowledge base such as Wikipedia.",
"To this end, we present a method to effectively apply linked entities in sequence-to-sequence models, called Entity2Topic (E2T) .",
"E2T is a module that can be easily attached to any sequence-to-sequence based summarization model.",
"The module encodes the entities extracted from the original text by an entity linking system (ELS), constructs a vector representing the topic of the summary to be generated, and informs the decoder about the constructed topic vector.",
"Due to the imperfections of current ELS's, the extracted linked entities may be too ambiguous and coarse to be considered relevant to the summary.",
"We solve this issue by using entity encoders with selective disambiguation and by constructing topic vectors using firm attention .",
"We experiment on two datasets, Gigaword and CNN, with varying lengths.",
"We show that applying our module to a sequence-to-sequence model with attention mechanism significantly increases its performance on both datasets.",
"Moreover, when compared with the state-of-the-art models for each dataset, the model obtains a comparable performance on the Gigaword dataset where the texts are short, and outperforms all competing models on the CNN dataset where the texts are longer.",
"Furthermore, we provide analysis on how our model effectively uses the extracted linked entities to produce concise and better summaries.",
"In the next subsections, we present detailed arguments with empirical and previously examined evidences on the observations and possible issues when using linked entities extracted by an entity linking system (ELS) for generating abstractive",
"summaries.",
"For this purpose, we use the development sets of the Gigaword dataset provided in (Rush et al., 2015) and of the CNN dataset provided in (Hermann et al., 2015) as the experimental data for quantitative evidence and refer the readers to Figure 1 as the running example.",
"As discussed in Section 1, we find three observations that show the usefulness of linked entities for abstractive summarization.",
"First, summaries are mainly composed of linked entities extracted from the original text.",
"In the example, it can be seen that the summary contains four words that refer to different entities.",
"In fact, all noun phrases in the summary mention at least one linked entity.",
"In our experimental data, we extract linked entities from the original text and compare them to the noun phrases found in the summary.",
"We report that 77 .",
"1% and 75 .",
"1% of the noun phrases on the Gigaword and CNN datasets, respectively, contain at least one linked entity, which confirms our observation.",
"Second, linked entities can be used to represent the topic of the summary, defined as a multinomial distribution over entities, as graphically shown in the example, where the probabilities refer to the relevance of the entities.",
"Entities have been previously used to represent topics (Newman et al., 2006), as they can be utilized as a controlled vocabulary of the main topics in a document (Hulpus et al., 2013).",
"In the example, we see that the entity Jae Seo is the most relevant because it is the subject of the summary, while the entity South Korean is less relevant because it is less important when constructing the summary.",
"Third, we can make use of the entity commonsense that can be learned as a continuous vector representation from a separate larger corpus (Ni et al., 2016; Yamada et al., 2017).",
"In the example, if we know that the entities Los Angeles Dodgers and New York Mets are American baseball teams and Jae Seo is a baseball player associated with the teams, then we can use this information to generate more coherent summaries.",
"We find that 76 .",
"0% of the extracted linked entities are covered by the pre-trained vectors 1 in our experimental data, proving our third observation.",
"Despite its usefulness, linked entities extracted from ELS's have issues because of low precision rates (Hasibi et al., 2016) and design challenges in training datasets (Ling et al., 2015).",
"These issues can be summarized into two parts: ambiguity and coarseness.",
"First, the extracted entities may be ambiguous.",
"In the example, the entity South Korean is ambiguous because it can refer to both the South Korean person and the South Korean language, among others 2 .",
"In our experimental data, we extract (1) the top 100 entities based on frequency, and (2) the entities extracted from 100 randomly selected texts, and check whether they have disambiguation pages in Wikipedia or not.",
"We discover that 71 .",
"0% of the top 100 entities and 53 .",
"6% of the entities picked at random have disambiguation pages, which shows that most entities are prone to ambiguity problems.",
"Second, the linked entities may also be too common to be considered an entity.",
"This may introduce errors and irrelevance to the summary.",
"In the example, Wednesday is erroneous because it is wrongly linked to the entity Wednesday Night Baseball .",
"Also, swap is irrelevant because although it is linked correctly to the entity Trade (Sports) , it is too common and irrelevant when generating the summaries.",
"In our experimental data, we randomly select 100 data instances and tag the correctness and relevance of extracted entities into one of four labels: A: correct and relevant, B: correct and somewhat relevant, C: correct but irrelevant, and D: incorrect.",
"Results show that 29 .",
"4% , 13 .",
"7% , 30 .",
"0% , and 26 .",
"9% are tagged with A, B, C, and D, respectively, which shows that there is a large amount of incorrect and irrelevant entities.",
"To solve the issues described above, we present Entity2Topic (E2T) , a module that can be easily attached to any sequence-to-sequence based abstractive summarization model.",
"E2T encodes the linked entities extracted from the text and transforms them into a single topic vector.",
"This vector is ultimately concatenated to the decoder hidden state vectors.",
"The module contains two submodules specifically for the issues presented by the en-2 https://en.wikipedia.org/wiki/South_ Korean tity linking systems: the entity encoding submodule with selective disambiguation and the pooling submodule with firm attention.",
"Overall, our full architecture can be illustrated as in Figure 2, which consists of an entity linking system (ELS), a sequence-to-sequence with attention mechanism model, and the E2T module.",
"We note that our proposed module can be easily attached to more sophisticated abstractive summarization models (Zhou et al., 2017; Tan et al., 2017) that are based on the traditional encoder-decoder framework and consequently can produce better results.",
"The code of the base model and the E2T are available online 3 .",
"As our base model, we employ a basic encoder-decoder RNN used in most neural machine translation (Bahdanau et al., 2014) and text summarization (Nallapati et al., 2016) tasks.",
"We employ a two-layer bidirectional GRU (BiGRU) as the recurrent unit of the encoder.",
"The BiGRU consists of a forward and backward GRU, which results to sequences of forward and backward hidden states ( h 1 , h 2 , ..., h n ) and ( h 1 , h 2 , ..., h n ) , respectively: h i = GRU ( x i , h i 1 ) h i = GRU ( x i , h i +1 ) The forward and backward hidden states are concatenated to get the hidden state vectors of the tokens (i.e. h i = [ h i ; h i ] ).",
"The final states of the forward and backward GRU are also concatenated to create the final text representation vector of the encoder s = [ h n ; h 1 ] .",
"These values are calculated per layer, where x t of the second layer is h t of the first layer.",
"The final text representation vectors are projected by a fully connected layer and are passed to the decoder as the initial hidden states s 0 = s .",
"For the decoder, we use a two-layer unidirectional GRU with attention.",
"At each time step t , the previous token y t 1 , the previous hidden state s t 1 , and the previous context vector c t 1 are passed to a GRU to calculate the new hidden state s t , as shown in the equation below.",
"The context vector c t is computed using the additive attention mechanism (Bahdanau et al., 2014), which matches the current decoder state s t and each encoder state h i to get an importance score.",
"The scores are then passed to a softmax and are used to pool the encoder states using weighted sum.",
"The final pooled vector is the context vector, as shown in the equations below.",
"g t,i = v > a tanh ( W a s t 1 + U a h i ) a t,i = exp ( g t,i ) P i exp ( g t,i ) c t = X i a t,i h i Finally, the previous token y t 1 , the current context vector c t , and the current decoder state s t are used to generate the current word y t with a softmax layer over the decoder vocabulary, as shown below.",
"After performing entity linking to the input text using the ELS, we receive a sequential list of linked entities, arranged based on their location in the text.",
"We embed these entities to d -dimensional vectors E = { e 1 , e 2 , ..., e m } where e i R d .",
"Since these entities may still contain ambiguity, it is necessary to resolve them before applying them to the base model.",
"Based on the idea that an ambiguous entity can be disambiguated using its neighboring entities, we introduce two kinds of disambiguating encoders below.",
"Globally disambiguating encoder One way to disambiguate an entity is by using all the other entities, putting more importance to entities that are nearer.",
"For this purpose, we employ an RNN-based model to globally disambiguate the entities.",
"Specifically, we use BiGRU and concatenate the forward and backward hidden state vectors as the new entity vector: h i = GRU ( e i , h i 1 ) h i = GRU ( e i , h i +1 ) e 0 i = [ h i ; h i ] Locally disambiguating encoder Another way to disambiguate an entity is by using only the direct neighbors of the entity, putting no importance value to entities that are far.",
"To do this, we employ a CNN-based model to locally disambiguate the entities.",
"Specifically, we do the convolution operation using filter matrices W f R h d with filter size h to a window of h words.",
"We do this for different sizes of h .",
"This produces new feature vectors c i,h as shown below, where f ( . ) is a non-linear function: c i,h = f ([ e i ( h 1) / 2 ; ... ; e i + h (+1) / 2 ] > W f + b f ) The convolution operation reduces the number of entities differently depending on the filter size h .",
"To prevent loss of information and to produce the same amount of feature vectors c i,h , we pad the entity list dynamically such that when the filter size is h , the number of paddings on each side is ( h 1) / 2 .",
"The filter size h therefore refers to the number of entities used to disambiguate a middle entity.",
"Finally, we concatenate all feature vectors 700 Globally Disambiguating Encoder (RNN) 2 3 4 5 6 7 8 Locally Disambiguating Encoder (CNN) 2 3 4 1 2 3 4 5 = = or Disambiguating Encoder 2 3 4 tanh tanh tanh 1 Figure 3: Entity encoding submodule with selective disambiguation applied to the entity 3 (cid:13) .",
"of different h 's for each i as the new entity vector: e 0 i = [ c i,h 1 ; c i,h 2 ; ... ] The question on which disambiguating encoder is better has been a debate; some argued that using only the local context is appropriate (Lau et al., 2013) while some claimed that additionally using global context also helps (Wang et al., 2015).",
"The RNN-based encoder is good as it smartly makes use of all entities, however it may perform bad when there are many entities as it introduces noise when using a far entity during disambiguation.",
"The CNN-based encoder is good as it minimizes the noise by totally ignoring far entities when disambiguating, however determining the appropriate filter sizes h needs engineering.",
"Overall, we argue that when the input text is short (e.g. a sen-tence), both encoders perform comparably, otherwise when the input text is long (e.g. a document), the CNN-based encoder performs better.",
"Selective disambiguation It is obvious that not all entities need to be disambiguated.",
"When a correctly linked and already adequately disambiguated entity is disambiguated again, it would make the entity very context-specific and might not be suitable for the summarization task.",
"Our entity encoding submodule therefore uses a selective mechanism that decides whether to use the disambiguating encoder or not.",
"This is done by introducing a selective disambiguation gate d .",
"The final entity vector e i is calculated as the linear transformation of e i and e 0 i : e 0 i = encoder ( e i ) d = ( W d e 0 i + b d ) e i = d f ( W x e i + b x )+ (1 d ) f ( W y e 0 i + b y ) The full entity encoding submodule is illustrated in Figure 3. Ultimately, the submodule outputs the disambiguated entity vectors E = { e 1 , e 2 , ..., e m } .",
"The entity vectors E are pooled to create a single topic vector t that represents the topic of the summary.",
"One possible pooling technique is to use soft attention (Xu et al., 2015) on the vectors to determine the importance value of each vector, which can be done by matching each entity vector with the text vector s from the text encoder as the context vector.",
"The entity vectors are then pooled using weighted sum.",
"One problem with soft attention is that it considers all entity vectors when constructing the topic vector.",
"However, not all entities are important and necessary when generating summaries.",
"Moreover, a number of these entities may be erroneous and irrelevant, as reported in Section 2.2.",
"Soft attention gives non-negligible important scores to these entities, thus adds unnecessary noise to the construction of the topic vector.",
"Our pooling submodule instead uses firm attention mechanism to consider only top k entities when constructing the topic vector.",
"This is done in a differentiable way as follows: G = v > a tanh ( W a E + U a s ) K = top k ( G ) P = sparse vector ( K, 0 , ) g 0 i = g i + p i a i = exp ( g 0 i ) P i exp ( g 0 i ) t = X i a i e i where the functions K = top k ( G ) gets the indices of the top k vectors in G and P = sparse vector ( K, 0 , ) creates a sparse vector where the values of K is 0 and otherwise 4 .",
"The sparse vector P is added to the original importance score vector G to create a new importance 4 We use 10 9 to represent .",
"score vector.",
"In this new vector, important scores of non-top k entities are .",
"When softmax is applied, this gives very small, negligible, and close-to-zero values to non-top k entities.",
"The value k depends on the lengths of the input text and summary.",
"Moreover, when k increases towards infin-ity, firm attention becomes soft attention.",
"We decide k empirically (see Section 5).",
"Entity2Topic module extends the base model as follows.",
"The final text representation vector s is used as a context vector when constructing the topic vector t in the pooling submodule.",
"The topic vector t is then concatenated to the decoder hidden state vectors s i , i.e. s 0 i = [ s i ; t ] .",
"The concatenated vector is finally used to create the output vector: o i = W w w i 1 + W c c i + W s s 0 i 4 Related work Due to its recent success, neural network models have been used with competitive results on abstractive summarization.",
"A neural attention model was first applied to the task, easily achieving state-of-the-art performance on multiple datasets (Rush et al., 2015).",
"The model has been extended to instead use recurrent neural network as decoder (Chopra et al., 2016).",
"The model was further extended to use a full RNN encoder-decoder framework and further enhancements through lexical and statistical features (Nallapati et al., 2016).",
"The current state-of-the-art performance is achieved by selectively encoding words as a process of distilling salient information (Zhou et al., 2017).",
"Neural abstractive summarization models have also been explored to summarize longer documents.",
"Word extraction models have been previously explored, performing worse than sentence extraction models (Cheng and Lapata, 2016).",
"Hierarchical attention-based recurrent neural networks have also been applied to the task, owing to the idea that there are multiple sentences in a document (Nallapati et al., 2016).",
"Finally, distraction-based models were proposed to enable models to traverse the text content and grasp the overall meaning (Chen et al., 2016).",
"The current state-of-the-art performance is achieved by a graph-based attentional neural model, considering the key factors of document summarization such as saliency, fluency and novelty (Tan et al., 2017).",
"Previous studies on the summarization tasks have only used entities in the preprocessing stage to anonymize the dataset (Nallapati et al., 2016) and to mitigate out-of-vocabulary problems (Tan et al., 2017).",
"Linked entities for summarization are still not properly explored and we are the first to use linked entities to improve the performance of the summarizer.",
"Datasets We use two widely used summarization datasets with different text lengths.",
"First, we use the Annotated English Gigaword dataset as used in (Rush et al., 2015).",
"This dataset receives the first sentence of a news article as input and use the headline title as the gold standard summary.",
"Since the development dataset is large, we randomly selected 2000 pairs as our development dataset.",
"We use the same held-out test dataset used in (Rush et al., 2015) for comparison.",
"Second, we use the CNN dataset released in (Hermann et al., 2015).",
"This dataset receives the full news article as input and use the human-generated multiple sentence highlight as the gold standard summary.",
"The original dataset has been modified and preprocessed specifically for the document summarization task (Nallapati et al., 2016).",
"In addition to the previously provided datasets, we extract linked entities using Dexter 5 (Ceccarelli et al., 2013), an open source ELS that links text snippets found in a given text to entities contained in Wikipedia.",
"We use the default recommended parameters stated in the website.",
"We summarize the statistics of both datasets in Table 1.",
"Implementation For both datasets, we further reduce the size of the input, output, and entity vocabularies to at most 50K as suggested in (See et al., 2017) and replace less frequent words to 5 http://dexter.isti.cnr.it/ 702 < unk > .",
"We use 300D Glove 6 (Pennington et al., 2014) and 1000D wiki2vec 7 pre-trained vectors to initialize our word and entity vectors.",
"For GRUs, we set the state size to 500.",
"For CNN, we set h = 3 , 4 , 5 with 400 , 300 , 300 feature maps, respectively.",
"For firm attention, k is tuned by calculating the perplexity of the model starting with smaller values (i.e. k = 1 , 2 , 5 , 10 , 20 , ... ) and stopping when the perplexity of the model becomes worse than the previous model.",
"Our preliminary tuning showed that k = 5 for Gigaword dataset and k = 10 for CNN dataset are the best choices.",
"We use dropout (Srivastava et al., 2014) on all non-linear connections with a dropout rate of 0.5.",
"We set the batch sizes of Gigaword and CNN datasets to 80 and 10, respectively.",
"Training is done via stochastic gradient descent over shuf-fled mini-batches with the Adadelta update rule, with l 2 constraint (Hinton et al., 2012) of 3. We perform early stopping using a subset of the given development dataset.",
"We use beam search of size 10 to generate the summary.",
"Baselines For the Gigaword dataset, we compare our models with the following abstractive baselines: ABS+ (Rush et al., 2015) is a fine tuned version of ABS which uses an attentive CNN encoder and an NNLM decoder, Feat2s (Nallap-ati et al., 2016) is an RNN sequence-to-sequence model with lexical and statistical features in the encoder, Luong-NMT (Luong et al., 2015) is a two-layer LSTM encoder-decoder model, RAS-Elman (Chopra et al., 2016) uses an attentive CNN encoder and an Elman RNN decoder, and SEASS (Zhou et al., 2017) uses BiGRU encoders and GRU decoders with selective encoding.",
"For the CNN dataset, we compare our models with the following extractive and abstractive baselines: Lead-3 is a strong baseline that extracts the first three sentences of the document as summary, LexRank extracts texts using LexRank (Erkan and Radev, 2004), Bi-GRU is a non-hierarchical one-layer sequence-to-sequence abstractive baseline, Distraction-M3 (Chen et al., 2016) uses a sequence-to-sequence abstractive model with distraction-based networks, and GBA (Tan et al., 2017) is a graph-based attentional neural abstractive model.",
"All baseline results used beam search and are gathered from previous papers.",
"Also, 6 https://nlp.stanford.edu/projects/ glove/ 7 https://github.com/idio/wiki2vec Model RG-1 RG-2 RG-LBASE : s2s+att 34.14 15.44 32.47 BASE +E2T cnn+sd 37.04 16.66 34.93 BASE +E2T rnn+sd 36.89 16.86 34.74 BASE +E2T cnn 36.56 16.56 34.57 BASE +E2T rnn 36.52 16.21 34.32 BASE +E2T cnn+soft 36.56 16.44 34.58 BASE +E2T rnn+soft 36.38 16.12 34.20 ABS+ 29.78 11.89 26.97 Feat2s 32.67 15.59 30.64 Luong-NMT 33.10 14.45 30.71 RAS-Elman 33.78 15.97 31.15 SEASS 36.15 17.54 33.63 Table 2: Results on the Gigaword dataset using the full-length F1 variants of ROUGE.",
"we compare our final model BASE +E2T with the base model BASE and some variants of our model (without selective disambiguation, using soft at-tention).",
"We report the ROUGE F1 scores for both datasets of all the competing models using ROUGE F1 scores (Lin, 2004).",
"We report the results on the Gigaword and the CNN dataset in Table 2 and Table 3, respectively.",
"In Gigaword dataset where the texts are short, our best model achieves a comparable performance with the current state-of-the-art.",
"In CNN dataset where the texts are longer, our best model outperforms all the previous models.",
"We emphasize that E2T module is easily attachable to better models, and we expect E2T to improve 703 Model 1st 2nd 3rd 4th mean GOLD 0.27 0.34 0.21 0.18 2.38 BASE 0.14 0.15 0.28 0.43 3.00 BASE +E2T rnn 0.12 0.24 0.39 0.25 2.77 BASE +E2T cnn 0.47 0.27 0.12 0.14 1.93 Table 4: Human evaluations on the Gigaword dataset.",
"their performance as well.",
"Overall, E2T achieves a significant improvement over the baseline model BASE , with at least 2 ROUGE-1 points increase in the Gigaword dataset and 6 ROUGE-1 points increase in the CNN dataset.",
"In fact, all variants of E2T gain improvements over the baseline, implying that leveraging on linked entities improves the performance of the summarizer.",
"Among the model variants, the CNN-based encoder with selective disambiguation and firm attention performs the best.",
"Automatic evaluation on the Gigaword dataset shows that the CNN and RNN variants of BASE +E2T have similar performance.",
"To break the tie between both models, we also conduct human evaluation on the Gigaword dataset.",
"We instruct two annotators to read the input sentence and rank the competing summaries from first to last according to their relevance and fluency:",
"(a) the original summary GOLD , and from models",
"(b) BASE ,",
"(c) BASE +E2T cnn , and",
"(d) BASE +E2T rnn .",
"We then compute",
"(i) the proportion of every ranking of each model and",
"(ii) the mean rank of each model.",
"The results are reported in Table 4. The model with the best mean rank is BASE +E2T cnn , followed by GOLD , then by BASE +E2T rnn and BASE , respectively.",
"We also perform ANOVA and post-hoc Tukey tests to show that the CNN variant is significantly ( p < 0 . 01 ) better than the RNN variant and the base model.",
"The RNN variant does not perform as well as the CNN variant, contrary to the automatic ROUGE evaluation above.",
"Interestingly, the CNN variant produces better (but with no significant difference) summaries than the gold summaries.",
"We posit that this is due to the fact that the article title does not correspond to the summary of the first sentence.",
"Selective disambiguation of entities We show the effectiveness of the selective disambiguation gate d in selecting which entities to disambiguate or not.",
"Table 6 shows a total of four different examples of two entities with the highest/lowest d values.",
"In the first example, sentence E1.1 contains the entity United States and is linked with the country entity of the same name, however the correct linked entity should be United States Davis Cup team , and therefore is given a high d value.",
"On the other hand, sentence E1.2 is linked correctly to the country United States , and thus is given a low d",
"value..",
"The second example provides a similar scenario, where sentence E2.1 is linked to the entity Gold but should be linked to the entity Gold medal .",
"Sentence E2.2 is linked correctly to the chemical element.",
"Hence, the former case received a high value d while the latter case received a low d value.",
"Entities as summary topic Finally, we provide one sample for each dataset in Table 5 for case study, comparing our final model that uses firm attention ( BASE cnn+sd ), a variant that uses soft attention ( BASE cnn+soft ), and the baseline model ( BASE ).",
"We also show the attention weights of the firm and soft models.",
"In the Gigaword example, we find three observations.",
"First, the base model generated a less informative summary, not mentioning mexico state and first edition .",
"Second, the soft model produced a factually wrong summary, saying that guadalajara is a mexican state, while actually it is a city.",
"Third, the firm model is able to solve the problem by focusing only on the five most important entities, eliminating possible noise such as Unk and less crucial entities such as Country club .",
"We can also see the effectiveness of the selective disambiguation in this example, where the entity U.S. state is corrected to mean the entity Mexican state which becomes relevant and is therefore selected.",
"In the CNN example, we also find that the baseline model generated a very erroneous summary.",
"We argue that this is because the length of the input text is long and the decoder is not guided as to which topics it should focus on.",
"The soft model generated a much better summary, however it focuses on the wrong topics, specifically on Iran's nuclear program , making the summary less general.",
"A quick read of the original article tells us that the main topic of the article is all about the two political parties arguing over the deal with Iran.",
"However, the entity nuclear appeared a lot in the article, which makes the soft model wrongly focus on the nuclear entity.",
"The firm model produced the more relevant summary, focusing on the po-704 GigawordDatasetExample Original western mexico @ state @ jalisco will host the first edition of the @ UNK dollar @ lorena ochoa invitation @ golf tournament on nov.",
"litical entities (e.g. republicans , democrats ).",
"This is due to the fact that only the k = 10 most important elements are attended to create the summary topic vector.",
"We proposed to leverage on linked entities to improve the performance of sequence-to-sequence models on neural abstractive summarization task.",
"Linked entities are used to guide the decoding process based on the summary topic and commonsense learned from a knowledge base.",
"We introduced Entity2Topic (E2T), a module that is easily attachable to any model using an encoder-decoder framework.",
"E2T applies linked entities into the summarizer by encoding the entities with selective disambiguation and pooling them into one summary topic vector with firm attention mechanism.",
"We showed that by applying E2T to a basic sequence-to-sequence model, we achieve significant improvements over the base model and consequently achieve a comparable performance with more complex summarization models.",
"We would like to thank the three anonymous reviewers for their valuable feedback.",
"This work was supported by Microsoft Research, and Institute for Information communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No.2017-0-01778 , Development of Explainable Humanlevel Deep Machine Learning Inference Framework).",
"S. Hwang is a corresponding author."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"result",
"other",
"other",
"other"
] |
[
"Information extraction suffers from its varying targets, heterogeneous structures, and demand-specific schemas.",
"In this paper, we propose a unified text-to-structure generation framework, namely UIE, which can universally model different IE tasks, adaptively generate targeted structures, and collaboratively learn general IE abilities from different knowledge sources.",
"Specifically, UIE uniformly encodes different extraction structures via a structured extraction language, adaptively generates target extractions via a schema-based prompt mechanism structural schema instructor, and captures the common IE abilities via a large-scale pre-trained text-to-structure model.",
"Experiments show that UIE achieved the state-of-the-art performance on 4 IE tasks, 13 datasets, and on all supervised, low-resource, and few-shot settings for a wide range of entity, relation, event and sentiment extraction tasks and their unifi-cation.",
"These results verified the effectiveness, universality, and transferability of UIE 1 .",
"Information extraction (IE) aims to identify and structure user-specified information from unstructured texts (Andersen et al., 1992; Grishman, 2019).",
"IE tasks are highly diversified due to its varying targets (entity, relation, event, sentiment, etc.), heterogeneous structures (spans, triplets, records, etc.), and demand-specific schemas (Grishman and Sund-heim, 1996; Mitchell et al., 2005; Ji and Grishman, 2011).",
"edge sources for different IE task.",
"These task-specialized solutions greatly hinder the rapid architecture development, effective knowledge sharing, and quick cross-domain adaptation of IE systems.",
"First, it is very complicated to develop dedicated architectures for a large amount of IE tasks/settings/scenarios.",
"Second, learning isolated models severely restricts the knowledge sharing between related tasks and settings.",
"Finally, it is costly and time-consuming to construct data sets and knowledge sources specialized for different IE tasks.",
"Therefore, it will be of great benefit to develop a universal IE architecture that can uniformly model different IE tasks, adaptively predict heterogeneous structures and effectively learn from various resources, which we referred to as Universal IE .",
"Fundamentally, all IE tasks can be modeled as text-to-structure transformations, with different 5755 tasks correspond to different structures.",
"For example, as shown in Figure 1, an entity is a named span structure, an event is a schema-defined record structure.",
"These text-to-structure transformations in IE can be further decomposed into several atomic transformation operations: 1) Spotting , which locates the desirable spans concerning to given specific semantic types (Kripke and Munitz, 1971; Chen and Yuille, 2004).",
"For example, locating span Steve as a Person entity and locating ex-cited as a sentiment expression.",
"2) Associating , which connects spans by assigning them with semantic roles in pre-defined schemas (Onyshkevych, 1994; Milward and Thomas, 2000).",
"For example, associating Steve and Apple by assigning them as the Arg1 and the Arg2 of a Work-for relation.",
"In this way, different IE tasks can be decomposed into a sequence of atomic text-to-structure transformations, and all IE models share the same underlying spotting and associating abilities.",
"For example, entity extraction can be viewed as spotting mention spans of corresponding entity types, while event detection can be reformulated as spotting triggers spans with event types.",
"And the spotting abilities can be shared between these two tasks.",
"Based on the above observations, we propose UIE, a unified text-to-structure generation architecture that can universally model different IE tasks, adaptively generate targeted structures, and collaboratively learn general IE abilities from different knowledge sources.",
"Specifically, to model heterogeneous IE structures, we design a structural extraction language (SEL) that can effectively encode different IE structures into a uniform representation, so that various IE tasks can be universally modeled in the same text-to-structure generation framework.",
"To adaptively generate targeted structures for different IE tasks, we propose structural schema instructor (SSI), a schema-based prompt mechanism which controls what to spot, what to associate, and what to generate in UIE.",
"To learn common IE abilities for UIE, we pre-train UIE on large-scale, heterogeneous datasets mined from easily accessible web sources.",
"The large-scale pre-trained UIE model provides a solid foundation for knowledge sharing and quick adaptation to new IE settings, and significantly boosts the IE performance in all supervised, low-resource, and few-shot settings.",
"extraction and their unification), and supervised, low-resource, and few-shot settings.",
"Experiment results show that UIE achieves significant improvements in all settings.",
"On supervised settings, UIE achieved 1.42% F1 scores improvements over the state-of-the-art, task-specialized architectures on all datasets.",
"On few-shot and low-resource settings, UIE exhibits strong on-demand adaptation ability: it outperforms baselines dramatically by a large margin.",
"These results verified the effectiveness, universality, and transferability of UIE across different IE tasks, settings, and scenarios.",
"The main contributions of this paper are: 1) We propose UIE, a unified text-to-structure generation architecture that can universally model different IE tasks, adaptively generate targeted structures, and collaboratively learn general IE abilities from different knowledge sources.",
"2) We design a unified structure generation network, which encodes heterogeneous IE structures into a uniform representation via a structural extraction language, and controls the UIE model which to spot, which to associate, and which to generate via structural schema instructor mechanism.",
"3) We pre-train a large-scale text-to-structure generation model via a unified pre-training algorithm.",
"To the best of our knowledge, this is the first text-to-structure pre-trained extraction model, which can benefit future IE studies.",
"Information extraction tasks can be formulated as text-to-structure problems, where different IE tasks correspond to different structures.",
"This paper aims to uniformly model the text-to-structure transformations of different IE tasks via a single framework, i.e., different structure transformations will share the same underlying operations and different transformation abilities in a universal model.",
"Formally, given a specific pre-defined schema s and texts x , a universal IE model needs to generate a structure that contains the desirable structural information in the text x indicated by the schema s .",
"Generally, there are two main challenges here.",
"Firstly, due to the diversity of IE tasks, there are many different target structures to extract, e.g., entity, relation, event, etc.",
"Secondly, IE tasks are often demand-specific which are defined using different schemas, therefore we need to adaptively control the extraction process.",
"(person: Steve (work for: Apple) )(start-position: became (employee: Steve) (employer: Apple) (time: 1997) )(organization: Apple) (time: 1997) )",
"(b) The SEL representation of the extraction structure of Steve became CEO of Apple in 1997., where the relation structure is marked blue, the event structure is marked red, and the rest are entities.",
"In this section, we describe how to jointly formulate, learn, and conduct various IE tasks in a unified text-to-structure generation architecture, named UIE .",
"Specifically, we first design structured extraction language (SEL) to uniformly encode heterogeneous extraction structures, i.e., encode entity, relation, event into a unified representation.",
"Then we describe structural schema instructor (SSI), a schema-based prompt mechanism that controls the UIE model which to spot, which to associate, and which to generate for different extraction settings.",
"The details are as follows.",
"This section describes how to encode heterogeneous IE structures into a uniform representation.",
"Based on the above discussions, IE structure generation can be decomposed into two atomic operations:",
"1. Spotting indicates locating target information pieces from the sentence, e.g., the entity and the trigger word in the event.",
"2. Associating indicates connecting different information pieces based on the desirable associations, e.g., the relation between entity pair or the role between event and its argument.",
"Then different IE structures can be represented as a combination of atomic structure generation operations.",
"extraction language (SEL), which encodes different IE structures via the spotting-associating structure.",
"As shown in Figure 2a, each SEL expression contains three types of semantic units: 1) SPOTNAME represents there is a specific information piece with the type of spot name existing in the source text; 2) ASSONAME indicates there exists a specific information piece in the source text that is with the AssoName association to its upper-level Spotted information in the structure; 3) INFOSPAN represents the text span corresponding to the specific spotting or associating information piece in the source text.",
"Furthermore, : in the SEL indicates the mapping from InfoSpan to its spotting or associating names, and the two structure indicators ( and ) are used to form the hierarchical structure between the extracted information.",
"Using SEL, Figure 2b shows how to represent entity, relation, and event structures.",
"There are three entities and each entity is represented as a spotting structure such as person:Steve, organiza-tion:Apple, and time:1997; one relation which is represented as an association structure between Steve and Apple with association name work for; and one event which is represented as an association structure, where the trigger is a spotting structure start-position:became, and its arguments are associated with the trigger: Steve as employee, Apple as employer, 1997 as time.",
"We can see that, SEL have the advantages that: 1) uniformly encodes varying IE structures, therefore different IE tasks can be modeled as the same text-to-structure generation process; 2) efficiently represents all extraction results of a sentence in the same structure, thus can perform joint extraction naturally; 3) the output structure of generation is very compact, which greatly reduce the complexity of decoding.",
"For example, the two different tasks entity recognition and event detection can be revisited using the same (SpotName: InfoSpan) grammar.",
"While both relation extraction and event extraction can be formulated using the grammar (SpotName: InfoSpan (AssoName: InfoSpan), ...), even they are with totally different binary entity-relation-entity and N-ary event-arguments structures.",
"Such a unified structured extraction language enables UIE to learn from and adapt to different IE tasks without designing task-specialized architectures, because these IE tasks are all universally formulated as the transformation from texts to SEL representations.",
"Using SEL, UIE can uniformly generate different IE structures.",
"However, because different IE tasks have different schemas, one challenge here is how to adaptively control which information we want to generate during extraction.",
"For example, given a sentence Steve became CEO of Apple in 1997., an entity recognition system will generate ((person: Steve) (organization: Apple) (Time: 1997)), and an event extraction system will generate (start position: became (employee: Steve) (employer: Apple)).",
"To this end, we propose structural schema instructor (SSI), a schema-based prompt mechanism that controls which kinds of information need to be spotted and associated.",
"Figure 3 shows the overall framework of UIE.",
"Formally, UIE takes the given structural schema instructor ( s ) and the text sequence ( x ) as input, and generates the linearized SEL ( y ) which contains the extracted information from x based on schema s : y = UIE ( s x ) (1) where x = [ x 1 , ..., x | x | ] is the text sequence, s = [ s 1 , ..., s | s | ] is the structural schema instructor, and y = [ y 1 , ..., y | y | ] is a SEL sequence that can be easily converted into the extracted information record.",
"To describe the extraction target of a task, the structural schema instructor constructs a schema-based prompt and uses it as a prefix during generation.",
"Specifically, corresponding to the spotting-association structure, the structural schema instructor contains three types of token segments: 1) SPOTNAME : the targeted spotting name in the specific information extraction task, such as person in the NER task; 2) ASSONAME : the targeted association name, such as work for in the relation extraction task; 3) Special Symbols ([spot], [asso], [text]) which are added before each SPOTNAME , ASSONAME , and input text sequence.",
"All tokens in SSI are concatenated and put before the original text sequences.",
"As shown in Figure 3, the entire input for UIE is in the form of: s x = [ s 1 , s 2 , ..., s | s | , x 1 , x 2 , ..., x | x | ] = [ [spot] , ... [spot] ..., [asso] , ..., [asso] ..., [text] , x 1 , x 2 , ..., x | x | ] (2) For example, the SSI [spot] person [spot] company [asso] work for [text] indicates extracting records of the relation schema the person works for the company from the sentence.",
"Given the SSI s , UIE first encodes the text x , then generates the target record y in linearized SEL using an encoder-decoder-style architecture.",
"We found that the schema-based prompt can: 1) effectively guide the SEL generation of UIE, so that the general IE ability can be transferred to new IE tasks; 2) adaptively control which to spot, which to associate, and which to generate, so that semantic knowledge across different labels and tasks can be better shared.",
"Given SSI s and text x as input, UIE extracts targeted information by generating a linearized SEL.",
"We formulate this text-to-SEL generation process using an encoder-decoder-style architecture.",
"Given the raw text sequence x and the schema instructor s , UIE first compute the hidden representation H = [ s 1 , ..., s | s | , x 1 , ..., x | x | ] of each token: H = Encoder ( s 1 , ..., s | s | , x 1 , ..., x | x | ) (3) where Encoder ( ) is a Transformer encoder.",
"Then UIE will decode the input text into a linearized SEL in an auto-regressive style.",
"At the step i of decoding, UIE generates the i -th token y i in the SEL 5758 sequence and the decoder state h di as following: y i , h di = Decoder ([ H ; h d 1 , ..., h di 1 ]) (4) Decoder ( ) is a transformer decoder, which predicts the conditional probability p ( y i | y <i , x, s ) of token y i .",
"Finally, Decoder ( ) finishes prediction when outputting the end symbol <eos>, then we convert the predicted SEL expression into the extracted information record.",
"Compared with previous IE studies which treat labels as specific symbols, the text-to-structure generation paradigm treats labels as natural language tokens.",
"By verbalizing and generating labels and structures, our method can effectively transfer knowledge from pre-trained language models such as BART (Lewis et al., 2020), T5 (Raffel et al., 2020), and related tasks can easily share knowledge because their labels have similar semantics (e.g., location and place ) and share common label-text associations (e.g., victim for different event types).",
"In this section, we describe: 1) how to pre-train a large-scale UIE model which captures common IE abilities for different IE tasks; 2) how to adapt UIE to different IE tasks in different settings via quick fine-tuning.",
"Specifically, we first collect several large-scale datasets from the Web, including structured (e.g., knowledge bases), unstructured (e.g., raw texts), and parallel (e.g., Wikipedia-Wikidata links) data, then we uniformly pre-train our UIE model on these heterogeneous datasets.",
"Finally, we adapt the pre-trained UIE model to the specific downstream IE tasks via on-demand fine-tuning.",
"We found that the pre-trained UIE model provides a solid foundation for capturing, sharing, and transferring knowledge between different IE tasks, and new IE tasks can be effectively solved because UIE learns general IE ability.",
"UIE needs to encode the text, map text to structure, and decode valid structure.",
"Therefore, we collect a large-scale pre-training corpus from easily accessible web data sources (more details are in the appendix): D pair is the text-structure parallel data, where each instance is a parallel pair (token sequence x , structured record y ).",
"We collect large-scale parallel text-structure pairs by aligning Wikidata with English Wikipedia.",
"text-to-structure transformation ability of UIE.",
"D record is the structure dataset where each instance is structured record y .",
"We collect structured records from ConceptNet (Speer et al., 2017) and Wikidata.",
"D record is used to pre-train the structure decoding ability of UIE.",
"Text-to-Structure Pre-training using D pair .",
"To capture the fundamental text-to-structure mapping ability, we pre-train UIE using D pair = { ( x, y ) } .",
"Specifically, for each parallel pair ( x , y ), we extract the spot type s s + and the associating type s a + in the record y as the positive schema s + = s s+ s a+ .",
"However, we found that if we only feed UIE with a positive schema, it will only simply remember the triplet in the pre-training data.",
"To learn general mapping ability, we also automatically construct negative schemas for each pair, i.e., we first sample negative spots s sand negative association set s a, then concatenate meta-schema s meta = s + s s s a, and construct the final extraction target.",
"For example, person and work for is the positive schema in the record ((person: Steve (work for: Apple))), and we sample vehicle and located in as the negative schema to construct meta-schema.",
"Finally, the objective of text-to-structure pre-training is: L Pair = (cid:88) ( x,y ) D pair log p ( y | x, s meta ; e , d ) (5) where e and d are the parameter of encoder and decoder, respectively.",
"Structure Generation Pre-training with D record .",
"To pre-train the ability of generating valid structures defined by SEL and schemas, we pre-train UIE on D record .",
"We pre-train UIE decoder as an structured language model, where each record in D record is an expression of SEL: L Record = (cid:88) y D record log p ( y i | y <i ; d ) (6) By pre-training for structure generation, the decoder can capture the regularity of SEL and the interactions between different labels.",
"Retrofitting Semantic Representation using D text .",
"During text-to-structure pre-training, we continually pre-train UIE also with the masked language model tasks (Raffel et al., 2020) on D text to retrofit semantic representations of UIE.",
"Specifi-cally, we add span corruption based mask language modeling objective in the pre-training stage: L Text = (cid:88) x D text log p ( x (cid:48)(cid:48) | x (cid:48) ; e , d ) (7) where x (cid:48) is the corrupted source text and x (cid:48)(cid:48) is corrupted target spans.",
"We found this pre-training can effectively alleviate the catastrophic forgetting of token semantics especially on SPOTNAME and ASSONAME tokens.",
"Final Pre-training Criteria.",
"We initialize UIE-base and UIE-large with T5-v1.1-base and T5-v1.1-large (Raffel et al., 2020), and the model architectures are shown in Table 7.",
"The final objective is the combine of the above tasks: L = L Pair + L Record + L Text (8) For implementation, we uniformly represent all pre-training data as triplets.",
"For text data ( x ) in D text , we build a triplet (None, x (cid:48) , x (cid:48)(cid:48) ) where x (cid:48) is the corrupted source text and x (cid:48)(cid:48) is corrupted spans.",
"For text-record data ( x , y ) in D pair , we construct ( s , x , y ) by sampling the meta-schema s for each text-record pair.",
"For record data ( y ) in D record , we take (None, None, y ) as the input triplet.",
"We randomly pack instances for different tasks in one batch, and details are shown in Algorithm 1 in the appendix.",
"Given the pre-trained UIE model, we can quickly adapt it to different IE tasks and settings through model fine-tuning.",
"Given a labeled corpus D task = { ( s, x, y ) } , we fine-tune the UIE model using teacher-forcing cross-entropy loss: LFT = (cid:88) ( s,x,y ) D Task log p ( y | x, s ; e , d ) (9) To alleviate the exposure bias (Ranzato et al., 2016; Zhang et al., 2020) of the auto-regressive model during decoding, we also design a Rejection Mechanism for effective fine-tuning.",
"Specifically, given an instance ( s , x , y ), we first encode y using SEL language, then we randomly insert several [ NULL ] unit with negative SPOTNAME and ASSONAME : (SPOTNAME , [ NULL ]) and (ASSONAME , [ NULL ]) into the ground-truth SEL with the SSI <spot> person ... <spot> facility <asso> ... <text> Text Steve became CEO of Apple in 1997.",
"probability of p (cid:15) .",
"For example, in Table 1, facility is the negative spot in the schema prompt, i.e., there is no facility entity in the sentence Steve became CEO of Apple in 1997.",
"Therefore, we randomly inject the noise of (facility: [ NULL ]) into the target record during model learning.",
"In this way, the UIE can effectively learn to reject misleading generation by generating [ NULL ] token.",
"To verify the effectiveness of UIE, we conducted experiments on different IE tasks and settings.",
"Datasets.",
"We conduct experiments on 13 IE benchmarks across 4 well-representative IE tasks (including entity extraction, relation extraction, event extraction, structured sentiment extraction) and their combinations (e.g., joint entity-relation extraction).",
"The used datasets includes ACE04 (Mitchell et al., 2005), ACE05 (Walker et al., 2006); CoNLL03 (Tjong Kim Sang and De Meul-der, 2003), CoNLL04 (Roth and Yih, 2004), SciERC (Luan et al., 2018), NYT (Riedel et al., 2010), CASIE (Satyapanich et al., 2020), SemEval-14 (Pontiki et al., 2014), SemEval-15 (Pontiki et al., 2015), SemEval-16 (Pontiki et al., 2016), see Table 8 for detail.",
"We employ the end-to-end setting for all extraction tasks, which takes the raw text as input and directly generates the target structure.",
"Evaluation.",
"We use the same evaluation metrics as all previous methods, and details of metrics are shown in the appendix.",
"For each fine-tuning experiment, we report the average performance on 3 random seeds.",
"Because UIE only generates text spans, we map spans to offsets by finding the first matched offsets that are not already matched in the same SEL hierarchical level (details in appendix).",
"We found this simple heuristic rule is very effective (<0.5% error offsets) and more complicated mapping approaches (such as attention-weight guided span mapping) are left as the future work.",
"UIE provides a universal backbone for IE tasks.",
"This section assesses the UIE performance in supervised settings.",
"We compare UIE with the state-of-the-art, task-specific supervised models.",
"For a fair comparison, we only compare the state-of-the-art without leveraging additional dataset-specific knowledge or larger-scale contexts.",
"These extensions are good complementary of UIE, and can be left for further improvement.",
"Table 2 shows the performance of UIE on the 13 IE datasets across 4 tasks.",
"We can observe that: 1) By modeling IE as text-to-structure generation and encoding with an effective SEL language, UIE provides an effective universal architecture for IE.",
"The UIE model achieves state-of-the-art performance on nearly all datasets and tasks, even without pre-training (SEL).",
"2) The large-scale pre-trained model provides a solid foundation for universal IE.",
"Compared with baselines, the pre-trained model achieves the performance of the state-of-the-art in most datasets and improves 1.42% F1 on average.",
"3) By universally modeling IE tasks and pre-training using large-scale datasets, UIE can effectively capture, share, and transfer IE abilities.",
"Pre-training improves all tasks at the same time, especially events and sentiment knowledge rarely appear in the pretrain dataset.",
"It proves that SEL is a unified and cross-task transferable structured representation for IE, which allows UIE to share learned capabilities and information across different and various information extraction tasks.",
"To verify the quick adaptation ability of UIE, we conducted low-resource experiments on six different partitions of the original training sets (1/5/10-shot, 1/5/10% ratio) across 4 tasks.",
"For the few-shot experiments, we sample 1/5/10 sentences for each entity/relation/event/sentiment type in the training set.",
"To avoid the influence of random sampling, we repeated each experiment 10 times with different samples and reported their averaged results as previous works (Huang et al., 2021).",
"We compare UIE with the following pre-trained model: 1) T5-v1.1-base is an initial model of UIE-base; 2) Fine-tuned T5-base is fine-tuned with sequence generation tasks such as summarization, which have been shown effective in many low-resource NLP tasks (Paolini et al., 2021); 3) UIE-base w/o SSI is the distant supervised version of UIE without SSI in the pre-training stage, which is used to verify the necessity of SSI when adapting UIE in low-resource settings.",
"Table 3 shows the performance of 4 IE tasks under 6 low-resource settings.",
"We observe that: 1) By guiding the generation using schema-based prompts, SSI is an effective way for adaptively controlling which to ex-5761 Model 1-Shot 5-Shot 10-Shot AVE-S 1% 5% 10% AVE-R Entity ( CoNLL03 ) Ent-F1 T5-v1.1-base 12.73 30.17 58.89 33.93 75.74 85.71 87.70 83.05 Fine-tuned T5-base 24.93 54.85 65.31 48.36 78.51 87.67 88.91 85.03 UIE-base w/o SSI 43.52 64.76 72.47 60.25 81.91 88.41 89.84 86.72 UIE-base 46.43 67.09 73.90 62.47 82.84 88.34 89.63 86.94 Relation ( CoNLL04 ) Rel-S F1 T5-v1.1-base 2.35 7.99 25.98 12.11 6.08 32.38 41.87 26.78 Fine-tuned T5-base 4.24 28.16 41.44 24.61 12.89 37.75 49.95 33.53 UIE-base w/o SSI 13.21 40.35 49.47 34.34 24.21 48.70 56.59 43.17 UIE-base 22.05 45.41 52.39 39.95 30.77 51.72 59.18 47.22 Event Trigger ( ACE05-Evt ) Evt Tri F1 T5-v1.1-base 19.40 43.35 50.57 37.77 25.59 49.47 57.18 44.08 Fine-tuned T5-base 30.18 48.31 51.27 43.25 31.08 51.16 57.76 46.67 UIE-base w/o SSI 32.07 48.11 51.00 43.73 32.71 53.20 59.26 48.39 UIE-base 38.14 51.21 53.23 47.53 41.53 55.70 60.29 52.51 Event Argument ( ACE05-Evt ) Evt Arg F1 T5-v1.1-base 2.75 20.21 27.53 16.83 3.59 21.53 30.90 18.67 Fine-tuned T5-base 6.96 25.07 30.96 21.00 7.39 24.97 33.90 22.09 UIE-base w/o SSI 9.31 23.99 30.31 21.20 9.57 27.25 34.18 23.67 UIE-base 11.88 27.44 33.64 24.32 12.80 30.43 36.28 26.50 Sentiment ( 16res ) Rel-S F1 T5-v1.1-base 0.04 2.11 12.66 4.94 3.50 27.08 45.97 25.52 Fine-tuned T5-base 6.55 21.06 29.92 19.18 18.72 39.63 51.65 36.67 UIE-base w/o SSI 7.79 17.77 32.07 19.21 19.14 42.76 53.44 38.45 UIE-base 10.50 26.24 39.11 25.28 24.24 49.31 57.61 43.72 Table 3: Low-resource results on end-to-end IE tasks, where AVE-S (hot) and AVE-R (atio) are the averaged performance across 3 few-shot settings and 3 low-resource settings respectively.",
"tract.",
"Compared with the UIE model w/o SSI, UIE equipped with SSI achieves improvements of 4.16 and 3.30 on average for n-shot and n-ratio experiments.",
"2) Our pre-training algorithms can learn general IE ability rather than capture task-specific information.",
"Even the pre-training of UIE didn't include event and sentiment knowledge, UIE still achieved significantly better performance on these tasks compared to the baseline with only a small number of samples.",
"To investigate the effect of different pre-training tasks, Table 4 shows ablation experiment results of UIE-base on four downstream tasks.",
"We can P P R F UIE-base +11.41 79.54 72.63 75.91 w/o rejection 68.13 67.85 66.13 UIE-base w/o SSI +9.41 78.96 70.50 74.49 w/o rejection 69.55 63.69 66.44 T5-base +17.95 74.12 61.72 67.33 w/o rejection 56.17 56.00 55.94 T5-v11 +13.88 71.88 51.23 59.67 w/o rejection 58.00 45.04 50.38 Table 5: Experiment results of 10-shot setting on the CoNLL 03 development set.",
"see that: (1) The pre-training of SEL ( L Record ) and sequence-to-structure mapping ( L Pair ) is crucial for UIE, and such a structure generation pre-training is especially useful for small-scale datasets.",
"In small datasets CoNLL04 and 16res, adding structure generation pre-training (from T5-v1.1-base to UIE-base w/o L Text ), the performance significantly increases from 72.12 to 75.70 and 72.03 to 74.28.",
"(2) Retrofitting semantic using the mask language model task ( L Text ) is more important for the complex extraction task.",
"In the tasks with more semantic types such as event extraction (33 types), the performance drops significantly after removing the L Text task, e.g., 72.63 70.89 and 5762 57.27 54.16.",
"(3) The mapping pre-training with L Pair enables the model to learn the ability of extraction.",
"After ablating L Pair , the extraction ability of UIE is significantly decreased, i.e., the performance on the relation (-0.90), event (-1.43/-1.48), and sentiment (-0.46) tasks all see large decline.",
"This section investigates the effect of the proposed rejection noise.",
"Table 5 shows the results of the different pre-trained models on the development set of CoNLL 03 under the 10-shot setting.",
"The mis-generated label has a negative influence on the precision of the proposed generation method leading to a large number of error extraction results.",
"The proposed rejection noise is useful for the generation method, which leads to improvements of 13.16 precision (P) on average.",
"Building and pre-training universal models of NLP tasks has attracted a lot of attention in recent years, e.g., contextualized representation (Devlin et al., 2019; Liu et al., 2019), text generation (Lewis et al., 2020; Raffel et al., 2020), multi-modal (Li et al., 2021b; Cho et al., 2021), and multi-lingual (Con-neau et al., 2020; Xue et al., 2021).",
"This paper proposes and pre-trains the first universal model for information extraction.",
"IE is a long-researched area and many classical neural architectures have been proposed, such as sequence tagging (Lample et al., 2016; Zheng et al., 2017; Lin et al., 2019), span classification (Sohrab and Miwa, 2018; Lin et al., 2018; Wadden et al., 2019), and MRC (Levy et al., 2017; Li et al., 2020; Du and Cardie, 2020).",
"And several task-specific pre-training techniques are proposed on these architectures (Mengge et al., 2020; Wang et al., 2021b; Qin et al., 2021).",
"More relevant to our work are generation-based IE methods, which generate text spans via tagging (Strakov et al., 2019; Ma et al., 2019), index pointer (Ren et al., 2021; Yan et al., 2021b) or copy mechanism (Zeng et al., 2018), and these methods usually employ specific classifiers to represent labels.",
"The generation can be enhanced using label templates (Li et al., 2021a; Liu et al., 2021; Cui et al., 2021), schema (Lu et al., 2021; Ahmad et al., 2021), and augmented language methods (Paolini et al., 2021).",
"models, this paper aims to universally model various IE tasks in an unified text-to-structure framework, which can greatly benefit the rapid development, effective knowledge sharing, and quick adaptation of IE systems.",
"In this paper, we propose a unified text-to-structure generation framework UIE, which can universally model different IE tasks, adaptively generate targeted structures, and unfiedly learn general IE abilities from different knowledge sources.",
"Experimental results show that UIE achieves very competitive performance in both supervised and low-resource settings, which verified its universality, effectiveness, and transferability.",
"A large-scale pre-trained text-to-structure model is also released, which will benefit future studies.",
"For future work, we want to extend UIE to KB-aware IE tasks such as entity linking (Cao et al., 2021), and document-aware IE tasks such as co-reference (Lee et al., 2017; Lu et al., 2022).",
"We sincerely thank the reviewers for their insightful comments and valuable suggestions.",
"This research work is supported by the National Natural Science Foundation of China under Grants no.",
"U1936207, 62122077 and 62106251, the Project of the Chinese Language Committee under Grant no.",
"YB2003C002."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"other",
"other",
"method",
"other",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"A good conversation requires balance between simplicity and detail; staying on topic and changing it; asking questions and answering them.",
"Although dialogue agents are commonly evaluated via human judgments of overall quality, the relationship between quality and these individual factors is less well-studied.",
"In this work, we examine two controllable neural text generation methods, conditional training and weighted decoding, in order to control four important attributes for chitchat dialogue: repetition, specificity, response-relatedness and question-asking.",
"We conduct a large-scale human evaluation to measure the effect of these control parameters on multi-turn interactive conversations on the PersonaChat task.",
"We provide a detailed analysis of their relationship to high-level aspects of conversation, and show that by controlling combinations of these variables our models obtain clear improvements in human quality judgments.",
"Neural generation models for dialogue, despite their ubiquity in current research, are still poorly understood.",
"Well known problems, such as the genericness and repetitiveness of responses (Ser-ban et al., 2016a), remain without a de facto solution.",
"Strikingly, the factors that determine human judgments of overall conversation quality are almost entirely unexplored.",
"Most works have been limited to the next utterance prediction problem, whereas a multi-turn evaluation is necessary to evaluate the quality of a full conversation.",
"In this work we both",
"(i) conduct a large-scale study to identify the fine-grained factors governing human judgments of full conversations, and",
"(ii) develop models that apply our findings in practice, A.S. completed most of this work at Facebook (FAIR).",
"leading to state-of-the-art performance.",
"Specifi-cally, we identify and study eight aspects of conversation that can be measured by human judgments, while varying four types of low-level attributes that can be algorithmically controlled in neural models; see Figure 1.",
"To control the low-level model attributes, we consider two simple but general algorithms: conditional training, in which the neural model is conditioned on additional control features, and weighted decoding, in which control features are added to the decoding scoring function at test time only.",
"One major result of our findings is that existing work has ignored the importance of conversational flow, as standard models",
"(i) repeat or contradict previous statements,",
"(ii) fail to balance specificity with genericness, and",
"(iii) fail to balance asking questions with other dialogue acts.",
"Conducting experiments on the PersonaChat task (Zhang et al., 2018b), we obtain significantly higher engagingness scores than the baseline by optimizing control of repetition, specificity and question-asking over multiple turns.",
"Using these findings, our best model matches the performance of the winning entry in the recent NeurIPS ConvAI2 competition (Dinan et al., 2019), which was trained on much more data but had no control (see Section 8.1).",
"Our code, pretrained models, and full chatlogs, are available at https://parl.ai/projects/ controllable_dialogue .",
"Dialogue Dialogue evaluation is relatively well understood in goal-oriented tasks, where automated approaches can be coded by measuring task completion (Bordes et al., 2017; El Asri et al., 2017; Hastie, 2012; Henderson et al., 2014; Wen et al., 2017).",
"Task success combined with dialogue cost can be linked to human judgments like user satisfaction via the PARADISE framework (Walker et al., 1997).",
"However in chitchat tasks, which we study in this work, automatic metrics and their relation to human ratings are less well-understood.",
"While word-overlap metrics are effective for question-answering and machine translation, for dialogue they have little to no correlation with human judgments (Liu et al., 2016; Novikova et al., 2017) this is due to the open-ended nature of dialogue.",
"There are more recent attempts to find better automatic approaches, such as adversarial evaluation (Li et al., 2017b) and learning a scoring model (Lowe et al., 2017), but their value is still unclear.",
"Nevertheless, a number of studies only use automatic metrics, with no human study at all (Lowe et al., 2015; Parthasarathi and Pineau, 2018; Ser-ban et al., 2016b).",
"Other works do use human evaluations (Dinan et al., 2018; Li et al., 2016a,b; Venkatesh et al., 2017; Vinyals and Le, 2015; Zhang et al., 2018b), typically reporting just one type of judgment (either quality or appropriateness) via a Likert scale or pairwise comparison.",
"Most of those works only consider single turn evaluations, often with a shortened dialogue history, rather than full multi-turn dialogue.",
"A more comprehensive evaluation strategy has been studied within the scope of the Alexa prize (Venkatesh et al., 2017; Guo et al., 2018) by combining multiple automatic metrics designed to capture various conversational aspects (engagement, coherence, domain coverage, conversational depth and topical diversity).",
"Though these aspects have some similarity to the aspects studied here, we also focus on lower-level aspects (e.g. avoiding repetition, fluency), to understand how they correspond to both our controllable attributes, and to overall quality judgments.",
"Controllable neural text generation Researchers have proposed several approaches to control aspects of RNN-based natural language generation such as sentiment, length, speaker style and tense (Fan et al., 2018; Ficler and Goldberg, 2017; Ghazvininejad et al., 2017; Hu et al., 2017; Kikuchi et al., 2016; Peng et al., 2018; Wang et al., 2017).",
"In particular, several works use control to tackle the same common sequence-to-sequence problems we address here (particularly genericness and unrelated output), in the context of single-turn response generation (Baheti et al., 2018; Li et al., 2016a, 2017a; Shen et al., 2017; Xing et al., 2017; Zhang et al., 2018a; Zhou et al., 2017).",
"By contrast, we focus on developing controls for, and human evaluation of, multi -turn interactive dialogue this includes a new method (described in Section 5) to control attributes at the dialogue level rather than the utterance level.",
"In this work, we require a control method that is both general-purpose (one technique to simultaneously control many attributes) and easily tunable (the control setting is adjustable after train-ing).",
"Given these constraints, we study two control methods: conditional training (variants of which have been described by Fan et al. (2018); Kikuchi et al. (2016); Peng et al. (2018)) and weighted decoding (described by Ghazvininejad et al. (2017) as a general technique, and by Baheti et al. (2018) to control response-relatedness).",
"To our knowledge, this work is the first to systematically compare the effectiveness of two general-purpose control methods across several attributes.",
"PersonaChat (Zhang et al., 2018b) is a chitchat dialogue task involving two participants (two humans or a human and a bot).",
"Each participant is given a persona a short collection of personal traits such as I'm left handed or My favorite season is spring and are instructed to get to know each other by chatting naturally using their designated personas, for 68 turns.",
"The training set contains 8939 conversations and 955 personas, collected via crowdworkers, plus 1000 conversations and 100 personas for validation, and a similar number in the hidden test set.",
"The PersonaChat task was the subject of the NeurIPS 2018 ConvAI2 Challenge (Dinan et al., 2019), in which competitors were first evaluated with respect to automatic metrics (perplexity, hits@1 and F1 score), and then with respect to human judgment via the question How much did you enjoy talking to this user? on a scale of 14.",
"Our baseline model is a 2-layer LSTM sequence-to-sequence model with attention.",
"On any dialogue turn, the input x to the encoder is the entire dialogue history (separated using unique speaker-identifying tokens), with the model's own persona prepended.",
"Conditioned on this input sequence x , the decoder generates a response y .",
"Except when stated otherwise, all our models decode using beam search with beam size 20.",
"We initialized the word embedding matrix with 300-dimensional GloVe embeddings (Pennington et al., 2014).",
"Using the ParlAI framework (Miller et al., 2017), we pretrained the model on a dataset of 2.5 million Twitter message-response pairs, 1 then fine-tuned it on PersonaChat.",
"On the PersonaChat validation set, the baseline model has a perplexity of 26.83 and F1 of 17.02, which would have placed us 4th out of 26 models in the ConvAI2 competition (Dinan et al., 2019).",
"We attempt to improve over this baseline using control.",
"Suppose we have a sequence-to-sequence model which gives P ( y | x ) = t P ( y t | x, y 1 , . . . , y t 1 ) , the conditional probability of a response y (the model's next utterance) given input x (the context, which in our case includes the model's own persona and the dialogue history).",
"Contrary to most previous work, which controls at the sentence level , we wish to control attributes of the output y at the dialogue level meaning that a single control setting is used for a whole dialogue.",
"For example, to control question-asking, we provide a control setting at the beginning of each dialogue (e.g. 20% questions or 70% questions ) rather than providing a control setting for each utterance (e.g. is a question or isn't a question ).",
"With this approach, the sequence-to-sequence model is able to choose what value the controlled attribute should take for any particular utterance, but we are able to choose the overall distribution.",
"We find that this approach works well for example, the sequence-to-sequence model is 1 The Twitter dataset is provided in ParlAI; details can be found here: https://parl.ai/docs/tasks.html generally good at detecting when to ask a question.",
"In particular, this is easier than the alternative: developing a separate process to decide, for each utterance, whether to ask a question.",
"In this section, we describe the two methods which we call Conditional Training (CT) and Weighted Decoding (WD) that we use to control attributes of the output y at the dialogue level.",
"Conditional Training (Fan et al., 2018; Kikuchi et al., 2016; Peng et al., 2018) is a method to learn a sequence-to-sequence model P ( y | x, z ) , where z is a discrete control variable .",
"If the control attribute is naturally continuous (for example in our work, repetitiveness, specificity and response-relatedness), we use z to represent bucketed ranges.",
"For a binary attribute like question-asking, z represents an overall probability (as explained in Section 5).",
"To train a CT model, we first automatically annotate every ( x, y ) pair in the training set with the attribute we wish to control (for example, whether y contains a question mark).",
"During training, for each example we determine the corresponding z value (for continuous attributes, this simply means sorting into the correct bucket; for question-asking, see Section 6.4).",
"Next, the control variable z is represented via an embedding (each of the possible values of z has its own em-bedding).",
"For all our experiments, the embedding is of length 10; this was determined via hyperpa-rameter tuning.",
"There are several possible ways to condition the sequence-to-sequence model on z for example, append z to the end of the input sequence, or use z as the START symbol for the decoder.",
"We find it most effective to concatenate z to the decoder's input on every step.",
"2 Lastly, the CT model learns to produce y = y 1 , . . . , y T by optimizing the cross-entropy loss: loss CT = 1 TT (cid:88) t =1 log P ( y t | x, z, y 1 , . . . , y t 1 ) Our CT models are initialized with the parameters from the baseline sequence-to-sequence model P ( y | x ) (the new decoder parameters are initialized with small random values), then fine-tuned to optimize loss CT on the PersonaChat training set, until convergence of loss CT on the validation set.",
"2 To build a CT model P ( y | x, z 1 , . . . , z n ) conditioned on multiple controls { z 1 , . . . , z n } , we can simply concatenate multiple control embeddings to the decoder inputs.",
"Weighted Decoding (Ghazvininejad et al., 2017) is a decoding method that increases or decreases the probability of words with certain features.",
"The technique is applied only at test time, requiring no change to the training method.",
"A limitation of WD is that the controllable attribute must be defined at the word-level; any desired utterance-level attribute must be redefined via word-level features.",
"In weighted decoding, on the t th step of decoding, a partial hypothesis y <t = y 1 , . . . , y t 1 is expanded by computing the score for each possible next word w in the vocabulary: score ( w, y <t ; x ) = score ( y <t ; x ) + log PRNN ( w | y <t , x ) + (cid:88) i w i f i ( w ; y <t , x ) .",
"Here, log PRNN ( w | y <t , x ) is the log-probability of the word w calculated by the RNN, score ( y <t ; x ) is the accumulated score of the already-generated words in the hypothesis y <t , and f i ( w ; y <t , x ) are decoding features with associated weights w i .",
"There can be multiple features f i (to control multiple attributes), and the weights w i are hyperpa-rameters to be chosen.",
"A decoding feature f i ( w ; y <t , x ) assigns a real value to the word w , in the context of the text generated so far y <t and the context x .",
"The feature can be continuous (e.g. the unigram probability of w ), discrete (e.g. the length of w in characters), or binary (e.g. whether w starts with the same letter as the last word in y <t ).",
"A positive weight w i increases the probability of words w that score highly with respect to f i ; a negative weight decreases their probability.",
"Note that weighted decoding and conditional training can be applied simultaneously (i.e. train a CT model then apply WD at test time) a strategy we use in our experiments.",
"In this section, we describe how we use conditional training and weighted decoding to control four attributes: repetition, specificity, response-relatedness and question-asking.",
"We evaluate the effectiveness of both control methods via automatic metrics (i.e., measuring how well the attribute was controlled), and use our findings to select control methods and control settings to be explored further via human evaluation (Section 8).",
"Our baseline model exhibits three types of repetition, which we call external repetition (self-repetition across utterances), internal repetition (self-repetition within utterances), and partner repetition (repeating the conversational partner).",
"To control repetition with weighted decoding, 3 we define five n -gram based decoding features (see Appendix D).",
"Three of these features ( extrep bigram , intrep bigram and partnerrep bigram ) identify repeating bigrams for the three repetition types.",
"The other two features ( extrep unigram and intrep unigram ) identify repeating content words.",
"By applying a negative weight to these features, we can reduce repetition.",
"In particular, if the weight is , our method is equivalent to n-gram blocking as described by Kulikov et al. (2018).",
"We observe that repetition control is very important, thus all further control experiments include repetition control.",
"Like many sequence-to-sequence models using beam search decoding, our baseline frequently asks generic questions such as What music do you like?",
"and gives dull, unspecific responses, such as I like all kinds of music .",
"We control specificity using Normalized Inverse Document Frequency (NIDF) as a measure of word rareness.",
"4 The Inverse Document Frequency of a word w is IDF ( w ) = log( R/c w ) where R is the number of responses in the dataset, and c w is the number of those responses that contain w .",
"Normalized IDF (which ranges from 0 to",
"1) is NIDF ( w ) = IDF ( w ) min idf max idf min idf (1) where min idf and max idf are the minimum and maximum IDFs, taken over all words in the vocabulary.",
"To control specificity with weighted decoding, we use NIDF as a decoding feature.",
"As shown in Table 1, this method produces reasonable outputs when the feature weight is within a certain range, but at the extremes the outputs are 3 We also tried controlling repetition with conditional training, defining z as the (bucketed) maximum ROUGE-L precision between the response y and the bot's previous utterances.",
"However, this method was unsuccessful because there are not enough repetitive examples in the training data for the model to learn the control.",
"Experimenting with data augmentation to solve this problem is an area for future work.",
"4 Note that our NIDF specificity features are similar to the NIRF and NIWF features used by Zhang et al. (2018a).",
"nonsensical.",
"The boundary for nonsensical output differs from example to example.",
"To control specificity with conditional training, we define the specificity of an utterance y to be the mean NIDF of the words in y .",
"Thus our control variable z is mean NIDF (discretized into 10 equal-sized buckets).",
"As shown in Table 1, this method gives outputs with a narrower NIDF range, but overall produces less nonsensical outputs.",
"In conversation, it's generally desirable to produce a response that is related to the partner's last utterance; for example if the partner says My grandfather died last month , it is appropriate to say I'm so sorry.",
"Were you close to your grandfather?",
"However, our baseline model frequently responds with unrelated utterances like Do you have any pets?",
"To control response-relatedness with weighted decoding, we use the decoding feature resp rel : resp rel ( w ; y <t , x ) = cos sim ( word emb ( w ) , sent emb ( (cid:96) )) where word emb ( w ) is the GloVe embedding for the word w , sent emb ( (cid:96) ) is the sentence embedding for the partner's last utterance (cid:96) (note (cid:96) is part of the context x ), and cos sim is the cosine similarity between the two.",
"In particular, the sentence embedding sent emb ( s ) for an utterance s is a Input: Do you go get coffee often Baseline Response: I do, when I am not playing the piano.",
"weighted average of the GloVe embeddings of the words in s , with the first principal component projected out; for full details, see Arora et al. (2017).",
"This method of controlling response-relatedness is similar to that described in (Baheti et al., 2018).",
"We find that weighted decoding is effective to control the semantic relatedness of the model's response to the partner's last utterance (see Table 2).",
"As before, we find that extreme weights lead to nonsensical output.",
"To control response-relatedness with conditional training, we try defining the control variable z to be cos sim ( sent emb ( y ) , sent emb ( (cid:96) )) , the overall cosine similarity between the partner's last utterance (cid:96) and the model's response y (again, we discretize z ).",
"However, we find this method ineffective the CT model learns only a very weak connection between z and the semantic relatedness of the output (see Section 7 for more details).",
"Considerate chitchat requires a reciprocal asking and answering of questions asking too few or too many can appear self-centered or nosy.",
"We control question-asking in order to study these trade-offs.",
"To control question-asking with weighted decoding, we use the binary decoding feature is qn word ( w ) , which is equal to 1 if and only if the word w is in a pre-defined list of interrogative words ( how, what, when, where, which, who, whom, whose, why, ? ).",
"We find this is a somewhat effective method to encourage or discourage questions, but with unintended side-effects: a negative weight can discourage valid non-question utterances that happen to contain interrogative words (such as I'm learning how to knit ) and a positive weight can result in degenerate utterances (such as 0 1 2 3 4 5 6 7 8 9 10 10 (boost) Question-Asking Control Level (CT) 0% 20% 40% 60% 80% 100% % U tt e r a n c e s c o n t a i n i n g ' ? ' Question-controlled CT Question-controlled CT w/ rep ctrl Target for question-controlled CT Beam search baseline Repetition-controlled baseline Gold data Figure 2: Controlling question-asking via conditional training.",
"For conditional training, we regard an utterance y as containing a question if and only if y contains a question mark.",
"We train our CT model on a control variable z with 11 possible values: { 0 , . . . , 10 } .",
"As discussed in Section 5, we wish to control question-asking at the distributional, dialogue level, rather than at the binary, utterance level.",
"Thus the setting z = i means that the model should produce, on average, utterances containing ?' with probability i/ 10 .",
"During training we randomly assign examples to buckets such that each bucket i is trained on examples with the correct proportion of questions ( i/ 10 ), and all buckets have the same amount of training examples.",
"We find that conditional training is effective to control question-asking as shown in Figure 2, by increasing z from 0 to 10, we obtain a range of question-asking rates from 1.40% to 97.72%.",
"However, when we introduce repetition control, question-asking is reduced in particular, the z = 10 setting (which should produce 100% questions) now only produces 79.67% questions.",
"The primary problem is the weighted decoding feature extrep bigram , which discourages bigrams that have appeared in previous utterances this prevents the model from producing bigrams that commonly occur in many questions, such as do you and what is .",
"To fix this, we introduce an extra setting z = 10 (boost) , in which we do not use the feature extrep bigram for weighted decoding during beam search, but we do use it to rerank the candidates after beam search.",
"This setting, which allows the model to produce necessary question-asking bigrams, yields a 99.54% question-asking rate, at the cost of slightly increased external bigram repetition (see Appendix F).",
"For controlling question-asking, conditional training is preferable to weighted decoding for two reasons.",
"Firstly, it allows us to achieve (close to) 0% questions, 100% questions, or anything in between, without introducing the risk of degenerate output.",
"Secondly, presence-of-a-question-mark captures the true attribute of interest (question-asking) more exactly and directly than presence of interrogative words.",
"For these reasons, only the CT method is considered in the human evaluation.",
"The previous section shows that conditional training and weighted decoding are both useful techniques, with different strengths and weaknesses.",
"The primary disadvantage of conditional training is that it sometimes fails to learn the connection between the control variable z and the target output y .",
"In practice, we find the model can learn simple attributes of the output (such as the presence of ?', and overall genericness), but not relationships between the input and output (such as semantic relatedness).",
"By contrast, weighted decoding can force the desired feature to appear in the output by raising the weight arbitrarily high (though this may have unintended side-effects).",
"The primary disadvantage of weighted decoding is that it risks going off-distribution when the weight is too strong.",
"By contrast, conditional training produces mostly well-formed, in-distribution outputs.",
"This highlights the importance of learned control it is safer to learn to produce output that both satisfies the control variable and is appropriate, than to alter the decoding process to satisfy the control variable, potentially trading off appropriateness in the process.",
"Other considerations include: (1) Convenience: conditional training requires retraining; weighted decoding doesn't, but is slower at test time.",
"(2) Data availability: conditional training requires training examples of the controllable attribute, whereas weighted decoding can control any computable feature without requiring examples.",
"(3) Attribute definition: conditional training can control sentence-level attributes, but they must be discrete.",
"By contrast, weighted decoding requires word-level features, but they can be continuous.",
"In order to study the effect of our controllable attributes, we conduct a large-scale human evaluation",
"evaluation of 28 model configurations (see Appendix E), plus human-human conversations for comparison.",
"Approach In our evaluation, a crowdworker chats with a model (or in the human-human case, another crowdworker) for six conversational turns, then answers eight multiple-choice questions which each capture different aspects of conversational quality: avoiding repetition, interestingness, making sense, fluency, listening, inquisitiveness, humanness and engagingness.",
"The eight questions are Likert questions on a 1-4 scale, where higher is better.",
"5 To match the ConvAI2 Challenge, we also add a persona retrieval question, in which the crowdworker is asked to select which of two possible personas was the model's persona.",
"For full details of the evaluation design, see Appendix B. Our evaluation is the same as the ConvAI2 Challenge evaluation, but more detailed ConvAI2 includes only engagingness and persona retrieval.",
"6 As in the ConvAI2 challenge, each of our 28 model configurations was evaluated by over 100 crowdworkers, and the results were adjusted for annotator variance via a Bayesian calibration (Kulikov et al., 2018).",
"In designing our evaluation, we aimed to capture the four aspects we expected to directly improve via control (avoiding repetition, interestingness, listening, inquisitiveness), two important error classes we thought would be affected by our controls (fluency, making sense), and two overall quality measures (engagingness, humanness).",
"In this section we summarize the main findings of our human evaluation whose full results can be found in Appendices G and H, with sample conversations in Appendix C.",
"As Figure 3 shows, controlling for repetition, specificity and question-asking all lead to large",
"5 Exceptions: Avoiding repetition is a 1-3 scale, as we found this gave clearer instructions.",
"Inquisitiveness has an optimal score of 3; 1 and 2 represent too little question-asking, and 4 represents too much.",
"6 There are three other minor differences between our evaluation and ConvAI2's: (1) We fix capitalization and spacing before showing the chatbot's utterances to crowdworkers, while ConvAI2 show the raw lowercase tokenized form.",
"We found the latter interferes with fluency evaluation.",
"(2) We conduct 6 dialogue turns, while ConvAI2 conducts 4-6.",
"This was necessary to evaluate repetitiveness.",
"(3) We use (publicly-available) validation set personas, while ConvAI2 uses (hidden) test set personas.",
"This enables us to release our evaluation chatlogs.",
"engagingness improvements over the greedy and beam-search baseline models.",
"In particular, we find that controlling for multi-turn (self) repetition is important and should be incorporated alongside other attribute control methods.",
"We found no improvement by controlling response-relatedness.",
"To better understand these overall engagingness improvements, we consider the full set of human judgments, shown in Figure 4.",
"We find that reducing repetition leads to improvements across all our aspects of conversational quality.",
"Increasing specificity shows improvements in interestingness and listening ability over the repetition-controlled baseline, while increasing question-asking shows improvements in inquisitiveness and interestingness over the repetition-controlled baseline.",
"Our most engaging model, which controls both repetition and question-asking marked Ques-tion (CT)' in Figure 3 (left) matches the engagingness of the winning entry in the ConvAI2 competition, as both models achieve a raw score 7 of 3 .",
"1 (Dinan et al., 2019).",
"However, the ConvAI2 winner, Lost in Conversation, was trained on approximately 12 as much data as our model.",
"Lost in Conversation is based on the OpenAI GPT Language Model (Radford et al., 2018), which is pretrained on the BookCorpus (Zhu et al., 2015), which contains approximately 985 million words, whereas our model is pretrained on the Twitter dataset (approximately 79 million words).",
"Repetition (WD) We observe that self-repetition across utterances ( external repetition ) is by far the most severe form of repetition in our beam search baseline model.",
"We evaluate several settings of the extrep bigram weighted decoding feature, and find that an aggressive repetition-reduction setting (reducing bigram repetition rate to below gold data levels) is rated best.",
"We also find that blocking repeated content words improves the avoiding repetition score.",
"See Appendices E, F and G for full details.",
"As shown in Figure 3 (left) and Figure 4, our repetition-controlled model improves hugely 7 Although the same Bayesian calibration method was applied both in our study and in the ConvAI2 competition, calibrated scores are not comparable across the two; thus we compare raw scores (viewable in Table 7).",
"over the beam search baseline in all metrics, and achieves close-to-human scores on all metrics except humanness.",
"This striking result demonstrates that repetition is by far the biggest limiting quality factor for naive sequence-to-sequence dialogue agents.",
"The result also emphasizes the importance of multi-turn dialogue evaluation to detect the problem.",
"We refer to this model as the repetition-controlled baseline , and use it as a basis for all remaining experiments (i.e., we control specificity, response-relatedness and question-asking on top of these repetition-control settings).",
"Specificity (WD, CT) For our weighted decoding models, the extreme settings (very generic and very specific) score poorly in engagingness due to the frequent presence of degenerate output see Figure 3 (middle).",
"We find that the weight = 4 setting (which is more specific than the repetition-controlled baseline and about as specific as the gold data) maximizes engagingness.",
"As shown in Figure 3 (left) and Figure 4, this more-specific model is rated more interesting, engaging, and a better listener than the repetition-controlled baseline, but at the cost of reduced fluency and making sense.",
"Our CT model with z = 7 (which has a similar NIDF level as WD with weight = 4 ) shows similar results, but the improvements are smaller.",
"For further discussion on the interestingness of our specificity models, see Section 8.3.",
"Response-relatedness (WD) We evaluated several control settings (weight = 10 , 5 , 10 , 13 ) and found that none scored better than weight = 0 (no response-relatedness control); see Appendix H. This is surprising prior to running the human evaluation, we annotated 100 examples ourselves to determine the best control settings.",
"While we identified a more responsive setting (weight = 5 ) as less likely than the uncontrolled model to ignore the user, crowdworkers rated it as a slightly worse listener than the uncontrolled model.",
"One explanation for this discrepancy is that the more responsive model takes more risks, using more rare words (0.197 NIDF, up from 0.178), and thus receives a lower makes-sense score (3.41, down from 3.70).",
"We hypothesize that, compared to us, the crowdworkers are less tolerant of slightly nonsensical output, and more tolerant of generic unrelated utterances.",
"Question-asking (CT) As shown in Figure 3 (right), a question-asking rate of 65.7% ( z = 7 ) maximizes engagingness.",
"This setting, which asks more questions than both the repetition-controlled baseline (50.0%) and the human-produced gold data (28.8%), brings us closest to human-level engagingness see Figure 3 (left).",
"Although we find that a rate of approximately 65.7% question-asking is the most engaging, a lower level (48.9%, or z = 4 ) is rated the best listener.",
"Lastly, we find Model Win% Top 3 reasons for preferring model Specificity WD (weight = 6 ) 84.1% More information; Better flow; More descriptive Specificity WD (weight = 4 ) 75.5% More information; They describe their life in more detail; Funny Specificity CT ( z = 7 ) 56.2% More information; Better flow; Seems more interested Table 3: A/B tests comparing various specificity-controlled models to the repetition-controlled baseline on interestingness.",
"that although asking too many questions is less engaging, most crowdworkers will not directly criticize a chatbot that asks questions on every turn only 11.9% of crowdworkers judged the z = 10 (boost) setting, which asks 99.5% questions, as asking too many questions.",
"8 For full details of these scores, see Appendix F and H. For time and budget reasons, we did not evaluate any models controlling both question-asking and specificity.",
"However, we expect it is possible to obtain further improvements by doing so.",
"Though our more-specific models yielded significant improvements in engagingness, we were surprised that they did not yield clearer improvements in interestingness.",
"To investigate further, we conducted an A/B interestingness evaluation of three specificity-controlled models, compared to the repetition-controlled baseline.",
"Crowdworkers were shown two conversations (from the main human evaluation) and asked to choose which model was more interesting (see Figure 7 for details).",
"We collected 500 samples per comparison, plus 200 additional human vs repetition-controlled baseline samples, which were used to filter for quality control.",
"After discarding low-quality crowdworkers, we have roughly 300 evaluations per comparison, with an average Cohen's = 0 .",
"6 .",
"As shown in Table 3, all three models were rated significantly more interesting than the repetition-controlled baseline.",
"This convincingly shows that producing utterances with more rare words is a valid strategy to improve interestingness.",
"We have two explanations for why these interestingness differences did not materialize in our main evaluation.",
"Firstly, interestingness is a particularly subjective metric (unlike more tangible metrics such as avoiding repetition and making sense) this makes it hard to calibrate across crowdworkers.",
"8 Though this conclusion may hold true for the PersonaChat task a synthetic chatting task that instructs participants to get to know each other in real-life social conversations, incessant question-asking may be less tolerated.",
"Secondly, we suspect that in our original evaluation, the crowdworkers may have evaluated the interestingness of the task rather than the chatbot .",
"This could account for why subtle increases in conversational ability did not result in higher interestingness ratings the PersonaChat task itself has a natural interestingness limit.",
"What makes a good conversation?",
"Through our evaluation, we showed that a good conversation is about balance controlling for the right level of repetition, specificity and question-asking is important for overall quality.",
"We also found that conversational aspects such as interestingness, listening, and inquisitiveness are all important though optimizing these can introduce a trade-off against certain types of errors (such as repetitive, disfluent, or nonsensical output).",
"Secondly, multiturn evaluation is essential to study what makes a good conversation multiple turns are required to reveal issues such as repetition, consistency, and question-asking frequency.",
"Lastly, what do we mean by good'?",
"Although humanness and engagingness are both commonly used as overall quality metrics, the two are very different.",
"While our models achieved close-to-human scores on engagingness, they failed to get close on humanness showing that a chatbot need not be human-like to be enjoyable.",
"This striking result also demonstrates the importance of measuring more than one quality metric when evaluating dialogue agents.",
"Outlook Our work shows that neural generative systems have systemic problems when applied to open-ended dialogue, some of which (e.g. repetition) are only observable in the multi-turn setting.",
"Furthermore, control of low-level attributes offers a practical way to correct these problems, yielding large improvements to overall quality in our case, comparable to systems trained on much more data.",
"Future work includes optimizing control settings automatically, and building more convincingly human-like chatbots."
] | [
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"method",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"objective",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"method",
"abstain"
] |
[
"Intent classification is a major task in spoken language understanding (SLU).",
"Since most models are built with pre-collected in-domain (IND) training utterances, their ability to detect unsupported out-of-domain (OOD) utterances has a critical effect in practical use.",
"Recent works have shown that using extra data and labels can improve the OOD detection performance, yet it could be costly to collect such data.",
"This paper proposes to train a model with only IND data while supporting both IND intent classification and OOD detection.",
"Our method designs a novel domain-regularized module (DRM) to reduce the overconfident phenomenon of a vanilla classifier, achieving a better generalization in both cases.",
"Besides, DRM can be used as a drop-in replacement for the last layer in any neural network-based intent classifier, providing a low-cost strategy for a significant improvement.",
"The evaluation on four datasets shows that our method built on BERT and RoBERTa models achieves state-of-the-art performance against existing approaches and the strong baselines we created for the comparisons.",
"Spoken language understanding (SLU) systems play a crucial role in ubiquitous artificially intelligent voice-enabled personal assistants (PA).",
"SLU needs to process a wide variety of user utterances and carry out user's intents, a.k.a. intent classification .",
"Many deep neural network-based SLU models have recently been proposed and have demonstrated significant progress (Guo et al., 2014; Liu and Lane, 2016; Zhang and Wang, 2016; Wang et al., 2018; Goo et al., 2018; Chen et al., 2019) in classification accuracy.",
"These models usually apply the closed-world assumption, in which the SLU model is trained with predefined domains, and the model expects to see the same data distribution I don't like Thriller in playlist Playlist deleted I am too cold Oven turned on Figure 1: Failure Examples of Unsupported Skills in AI Voice Assistants.",
"during both training and testing.",
"However, such an assumption is not held in the practical use case of PA systems, where the system is used under a dynamic and open environment with personal expressions, new vocabulary, and unknown intents that are out of the design scope.",
"To address the challenges in open-world settings, previous works adopt varied strategies.",
"Shen et al. (2018a, 2019c) use a cold-start algorithm to generate additional training data to cover a larger variety of utterances.",
"This strategy relies on the software developers to pre-build all possible skills.",
"Shen et al. (2019b,a) introduce a SkillBot that allows users to build up their own skills.",
"Recently, Ray et al. (2018, 2019); Shen et al. (2018b, 2019d) enables an SLU model to incorporate user personalization over time.",
"However, the above approaches do not explicitly address unsupported user utter-ances/intents, leading to catastrophic failures illustrated in Figure 1.",
"Thus, it is critically desirable for an SLU system to classify the supported intents ( in-domain (IND) ) and reject unsupported ones ( out-of-domain (OOD) ) correctly.",
"A straightforward solution is to collect OOD data and train a supervised binary classifier on both IND data and OOD data (Hendrycks et al., 2018).",
"However, collecting a representative set of OOD data could be impractical due to the infinite compositionality of language.",
"Arbitrarily selecting a subset could incur the selection bias, causing the learned model might not generalize to unseen OOD data.",
"Ryu et al. (2017, 2018) avoid learning with OOD data by using generative models ( e.g., autoencoder and GAN) to capture the IND data distribution, then judge IND/OOD based on the reconstruction error or likelihood.",
"Recently, Tan et al. (2019) utilizes a large training data to enable the meta-learning for OOD detection.",
"Zheng et al. (2020) generates pseudo OOD data to learn the OOD detector.",
"The above-discussed approaches require additional data or training procedures beyond the intent classification task, introducing significant data collection effort or inference overhead.",
"This paper proposes a strategy based on neural networks to use only IND utterances and their labels to learn both the intent classifier and OOD detector.",
"Our strategy modifies the structure of the classifier, introducing an extra branch as a regularization target.",
"We call the structure a Domain-Regularized Module (DRM).",
"This structure is probabilistically motivated and empirically leads to a better generalization in both intent classification and OOD detection.",
"Our analysis focuses more on the latter task, finding that DRM not only outputs a class probability that is a better indicator for judging IND/OOD, but also leads to a feature representation with a less distribution overlap between IND and OOD data.",
"More importantly, DRM is a simple drop-in replacement of the last linear layer, making it easy to plug into any off-the-shelf pre-trained models ( e.g. BERT (Devlin et al., 2019)) to fine-tune for a target task.",
"The evaluation on four datasets shows that DRM can consistently improve upon previous state-of-the-art methods.",
"In the application of intent classification, a user utterance will be either an in-domain (IND) utterance (supported by the system) or an out-of-domain (OOD) utterance (not supported by the system).",
"The classifier is expected to correctly (1) predict the intent of supported IND utterances; and (2) detect to reject the unsupported OOD utterances.",
"The task is formally defined below.",
"We are given a closed world IND training set DIND = { x , y } = { ( x i , y i ) } Ni =1 .",
"Each sample ( x i , y i ) , an utterance x i and its intent class label y i { 1 . . . C } for C predefined in-domain classes, is drawn from a fixed but unknown IND distribution PIND ( x , y ) .",
"We aim to train an intent classifier model only on IND training data DIND such that the model can perform: (1) Intent Classification: classify the intent class label y of an utterance x if x is drawn from the same distribution PIND as the training set DIND ; (2) OOD Detection: detect an utterance x to be an abnormal/unsupported sample if x is drawn from a different distribution POOD .",
"Intent Classification is one of the major SLU components (Haffner et al., 2003; Wang et al., 2005; Tur and De Mori, 2011).",
"Various models have been proposed to encode the user utterance for intent classification, including RNN (Ravuri and Stoicke, 2015; Zhang and Wang, 2016; Liu and Lane, 2016; Kim et al., 2017; Wang et al., 2018; Goo et al., 2018), Recursive autoencoders (Kato et al., 2017), or enriched word embeddings (Kim et al., 2016).",
"Recently, the BERT model (Devlin et al., 2019) was explored by (Chen et al., 2019) for SLU.",
"Our work also leverages the representation learned in BERT.",
"OOD Detection has been studied for many years (Hellman, 1970).",
"Tur et al. (2014) explores its combination with intent classification by learning an SVM classifier on the IND data and randomly sampled OOD data.",
"Ryu et al. (2017) detects OOD by using reconstruction criteria with an autoencoder.",
"Ryu et al. (2018) learns an intent classifier with GAN and uses the discriminator as the classifier for OOD detection.",
"Zheng et al. (2020) leverages extra unlabeled data to generate pseudo-OOD samples using GAN via auxiliary classifier regularization.",
"Tan et al. (2019) further incorporates the few-shot setting, learning the encoding of sentences with a prototypical network that is regularized with the OOD data outside a learning episode.",
"Other researchers developed methods in computer vision based on the rescaling of the predicted class probabilities (ODIN) (Liang et al., 2017) or building the Gaussian model with the features extracted from the hidden layers of neural networks (Mahalanobis) (Lee et al., 2018).",
"Recently, (Hsu et al., 2020) proposed Generalized-ODIN with decomposed confidence scores.",
"However, both approaches also heavily depend on the image input perturbation to achieve good performance.",
"Unfortunately, such perturbation cannot be applied to discrete utterance data in SLU.",
"Our method is inspired by the decomposed confidence of Generalized-ODIN (Hsu et al., 2020), but we leverage the fact that the training data are all from IND to introduce an extra regularization.",
"This regularization leads to a better generalization (lower classification error) on the intent classification.",
"The improvement is in contrast to the original Generalized-ODIN, which has its classification error slightly increased.",
"Since the improved generalization is likely due to a more generalizable feature representation, we leverage this observation, providing a modified Mahalanobis (Lee et al., 2018), which we called L-Mahalanobis, for a transformer-based model to detect OOD data.",
"In the following sections, we first describe the DRM and then elaborate on using the outputs of a DRM-equipped model to detect OOD data.",
"The motivation begins with introducing the domain variable d ( d = 1 means IND, while d = 0 means OOD) following the intuition in (Hsu et al., 2020), then rewrite the posterior of class y given x with domain d as follows:",
"(cid:98) p ( y | d = 1 , x ) = (cid:98) p ( y , d = 1 | x ) (cid:98) p ( d = 1 | x ) = (cid:98) p ( y | x ) (cid:98) p ( d = 1 | x ) (cid:98) p ( y , d = 0 | x ) (cid:98) p ( d = 1 | x ) (cid:98) p ( y | x ) (cid:98) p ( d = 1 | x ) (1)",
"where the last step holds since (cid:98) p ( y , d = 0 | x ) is close to 0 with the intrinsic conflict between IND classes y and random variable d = 0 for OOD.",
"Motivated by the above Equation 1, we design the DRM to mitigate overconfidence by decomposing the final logits f into two branches.",
"Figure 2 illustrates the architecture.",
"Domain Logits f d models (cid:98) p ( d = 1 | x ) before normalization.",
"It projects from hidden state h to a scalar w.r.t. d: f d = W d h + b d (2) where W d R | h | 1 .",
"Since (cid:98) p ( d = 1 | x ) is a probability between 0 and 1, Section 3.1.2 will describe the training details of domain loss via the sigmoid function.",
"Classification Logits f c models the probability posterior (cid:98) p ( y | x ) before normalization. It follows the conventional linear projection from hidden state h to the number of classes:",
"where W R | h | C with C classes.",
"d At the end, we obtain the final logits f to represent (cid:98) p ( y | d = 1 , x ) by putting f d and f c together following the dividend-divisor structure of Equation 1: f = f /f (4)",
"We propose two training loss functions to train a model with DRM. The first training loss aims to minimize a cross-entropy between the predicted intent class and ground truth IND class labels.",
"L classification (cid:44) C (cid:88) i =1 y i log (cid:98) p ( f ) i (5) (cid:98) p ( f ) = softmax ( f )",
"The second training loss aims to ensure that the domain component f d is close to 1 since all utterances in the training set are IND.",
"We first restrict f d between 0 and 1 by using sigmoid activation function. Then, this loss function encourages sigmoid ( f d ) close to 1 for training on IND utterances. In order to avoid f d to be very large values and affect the training convergence,",
"we further apply clamp function on f d before feeds to Equation 4:",
"f d = (cid:40) f d if < f d < if f d < = or f d > =",
"Thus, we sum them up to optimize the model:",
"Remarks: It is important to note that the design of L domain is to introduce extra regularization to mitigate the overconfidence in standard posterior probability (cid:98) p ( f ) . sigmoid ( f d ) is not used to directly predict if an utterance is IND or OOD.",
"There are two types of strategies to utilize the outputs of a classifier to perform OOD detection. One is based on the confidence which is computed from logits, the other is based on the features. In the below, we describe how to compute different OOD scores with our DRM.",
"Recent works (Liang et al., 2017) has shown that the softmax outputs provide a good scoring for detecting OOD data. In our DRM model, we use the decomposed softmax outputs for the score. The logits f c w.r.t. the true posterior distribution in open-world can be combined with varied approaches: DRM Confidence Score:",
"Conf DRM = softmax ( f c ) (8) DRM ODIN Confidence Score: ODINDRM = softmax ( f c /T ) (9) with large T = 1000 (Liang et al., 2017). DRM Entropy Confidence Score: ENTDRM = Entropy [ softmax ( f c )] (10)",
"The OOD utterances have low Conf DRM ODINDRM scores and high ENTDRM score.",
"While our DRM confidence already outperforms many existing methods (later shown in experi-ments), we further design the feature-based Mahalanobis distance score, inspired by the recent",
"work (Lee et al., 2018) for detecting OOD images. We first recap the approach in (Lee et al., 2018) which consists of two parts: Mahalanobis distance calculation and input preprocessing. Mahalanobis distance score models the class conditional Gaussian distributions w.r.t. Gaussian discriminant analysis based on both lowand upper-level features of the deep classifier models. The score on layer (cid:96) is computed as follows:",
"where f (cid:96) ( x ) represents the output features at the (cid:96) th -layer of neural networks; i and are the class mean representation and the covariance matrix. Thus, the overall score is their summation:",
"In addition, the input preprocessing adds a small controlled noise to the test samples to enhance the performance.",
"Although Mahalanobis distance score can be applied only to the last feature layer without input preprocessing S last Maha ( x ) , the analysis (Table 2 in (Lee et al., 2018)) shows that either input preprocessing or multi-layer scoring mechanism is required to achieve decent OOD detection performance. Unfortunately, neither of the above two mechanisms is applicable in the intent classifier for SLU. First, unlike image data, noise injection into discrete natural language utterances has been shown not to perform well. Second, in most cutting-edge intent classifier models, lowand upper-level network layers are quite different. The direct application of multilayer Mahalanobis distance leads to much worse OOD detection performance.",
"Since BERT-based models showed significant performance improvement for intent classification in SLU (Chen et al., 2019), we focus on designing the multi-layer Mahalanobis score for BERT-based classifier models. In existing BERT-based text classification models, such as BERT, RoBERTa, Distil-BERT, ALBERT, etc., there are different designs between the last transformer layer and the classification layer. Figure 3 shows our generic design of",
"Mahalanobis score computation (blue) for various BERT-based models.",
"Our design is based on our extensive experiments and understanding of the common insights in different BERT-based models.",
"Specifically, we use the features from different layers between the last transformer layer and the classification layer.",
"We empirically found that the nonlinear tanh layer plays an important role.",
"Thus, to map the features of each transformer layer and last layer into the same semantic space, we pass the features of each layer through tanh function and sum them up to compute our Mahalanobis score: SL Maha ( x ) = S Maha ( f n ( x )) + (cid:88) 1 (cid:96)<n S Maha ( tanh ( f (cid:96) ( x ))) (11) where f (cid:96) and f n are the features of each layer (cid:96) and last layer n in a BERT-based intent classifier model.",
"We refer to our proposed approach as L-Mahalanobis .",
"We evaluate our proposed approach on three benchmark SLU datasets and one in-house SLU dataset.",
"Table 1 provides an overview of all datasets.",
"Among all these datasets, the recently released CLINC dataset serves as a benchmark for OOD detection in SLU.",
"For the other three datasets, we treat them mutually OOD due to non-overlapping domains.",
"We crowdsourced the in-house Movie dataset containing common questions that users may ask regarding movies.",
"This dataset mainly consists of queries a user may ask in the movie domain.",
"The dataset consists of 38 different intents (e.g. rating information, genre information, award information, show trailer) and 20 slots or entities (e.g., director, award, release year).",
"This dataset was collected using crowdsourcing as follows.",
"At first, some example template queries were generated by linguistic experts for each intent, along with intent and slot descriptions.",
"Next, a generation crowdsourcing job was launched where a crowd worker was assigned a random intent, a combination of entities, and few slots generally associated with the intent.",
"To better understand the intent and slots, the worker was asked to review the intent and slot descriptions, and example template utterances.",
"The first task of the worker was to provide 3 different queries corresponding to the given intent, which also contains the provided entities.",
"The second task of the worker was to provide additional entities corresponding to the same slot type.",
"A subsequent validation crowdsourcing job was launched where these crowdsourced queries were rated by validation workers in terms of their accuracy with the provided intent and entities.",
"Each query was rated by 5 different validation workers, and the final validated dataset contains a subset of crowdsourced queries with high accuracy score and high inter-rater agreement.",
"We implemented our method using PyTorch on top of the Hugging Face transformer library (Wolf et al., 2019).",
"We follow the hyperparameters in the original models.",
"For the only hyperparame-ter , we experimented only on CLINC dataset from 2.2 to 4 with uniform interval 0.2 (we try 10 values of ) based on sigmoid (2 . 2) 0 .",
"9 and sigmoid (4) 0 .",
"982 .",
"We used = 3 which gives the best performance in our experiment for all datasets.",
"We train each model with 3 epochs using 4 NVIDIA Tesla V100 GPUs (16GB) for each training.",
"We conducted experiments on two transformer-based models, BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019).",
"Remarks: All experiments only use IND data for both training and validation.",
"We use the same hyperparameters in all datasets and validate the generalizability of our method.",
"We consider the strongest baseline BERT-Linear (the last layer is linear) fine-tuned on the pre-trained BERT-based models (Chen et al., 2019).",
"We consider the existing OOD detection methods: ConGAN (Ryu et al., 2018): a GAN-based model based on given sentence representations to generate OOD features with additional feature matching loss.",
"OOD utterances are expected to have low discriminator confidence scores.",
"Autoencoder (AE) (Ryu et al., 2017): first uses an LSTM based classifier model to train sentence representations; then train an autoencoder on the above sentence embeddings.",
"OOD utterances are expected to have high reconstruction error.",
"ODIN (Liang et al., 2017): we only use the temperature scaling on logits.",
"OOD utterances are expected to have a low scaled confidence score.",
"Generalized-ODIN (G-ODIN) (Hsu et al., 2020): we fine-tune on pre-trained BERT models with replaced last layer and only use the decomposed confidence.",
"We evaluate all three variations proposed in the paper h I , h E and h C and report the best one.",
"OOD utterances are expected to have low scaled confidence score.",
"Mahalanobis (Lee et al., 2018): we only use the feature of BERT's last layer to compute Mahalanobis distance score.",
"OOD utterances are expected to have a low scaled confidence score.",
"For ConGAN and AE, we evaluate the model in the original paper as well as customized BERT-based backbone models as strong baselines.",
"Specifically, we customize En-ConGAN and En-AE as follows: En-ConGAN uses BERT sentence representation as input; En-AE applies a BERT classifier model to train the sentence representation and then use them to further train an autoencoder.",
"Thus, En-ConGAN and En-AE are not existing baselines.",
"Note that ERAEPOG (Zheng et al., 2020) and O-Proto (Tan et al., 2019) are not comparable since they require additional unlabeled data and labels.",
"We only put the ERAEPOG results on CLINC dataset (from the original paper) for reference.",
"We evaluate IND performance using the classification accuracy metric as in literature (Liu and Lane, 2016; Wang et al., 2018; Chen et al., 2019).",
"we follow the evaluation metrics in literature (Ryu et al., 2018) and (Liang et al., 2017; Lee et al., 2018).",
"Let TP, TN, FP, and FN denote true positive, true negative, false positive, and false negative.",
"We use the following OOD evaluation metrics: EER (lower is better) : (Equal Error Rate) measures the error rate when false positive rate (FPR) is equal to the false negative rate (FNR).",
"Here, FPR=FP/(FP+TN) and FNR=FN/(TP+FN).",
"FPR95 (lower is better) : (False Positive Rate (FPR) at 95% True Positive Rate (TPR)) can be interpreted as the probability that an OOD utterance is misclassified as IND when the true positive rate (TPR) is as high as 95%.",
"Here, TPR=TP/(TP+FN).",
"Detection Error (lower is better) : measures the misclassification probability when TPR is 95%.",
"Detection error is defined as follows: min { PIND ( s ) p ( x PIND ) + POOD ( s > ) p ( x POOD ) } where s is a confidence score.",
"We follow the same assumption that both IND and OOD examples have an equal probability of appearing in the testing set.",
"AUROC (higher is better) : (Area under the Receiver Operating Characteristic Curve) The ROC curve is a graph plotting TPR against the FPR=FP/(FP+TN) by varying a threshold.",
"AUPR (higher is better) : (Area under the Precision-Recall Curve (AUPR)) The PR curve is a graph plotting the precision against recall by varying a threshold.",
"Here, precision=TP/(TP+FP) and recall=TP/(TP+FN).",
"AUPR-IN and AUPR-OUT is AUPR where IND and OOD distribution samples are specified as positive, respectively.",
"Note that EER, detection error, AUROC, and AUPR are threshold-independent metrics.",
"We also evaluate the statistical significance between all baselines and our best result (DRM + L-Mahalanobis) on all the above metrics.",
"We train each model 10 times with different PyTorch random seeds.",
"We report the average results and t-test statistical significance results.",
"Table 3 reports the IND intent classification results on each dataset finetuned using BERT and RoBERTa pre-trained models.",
"It is interesting to observe that all DRM combined models consistently achieve better classification accuracy with up to 0.8% improvement (reproduced No joint row in Table 3 in (Chen et al., 2019) on Snips dataset).",
"This is because the domain loss forces sigmoid ( f d ) close to 1 and therefore also slightly mitigates its impact to IND classification.",
"Thus, the true posterior distribution of IND data is also modeled more precisely.",
"For both BERT and RoBERTa Table 3: IND Intent Classification Results Model Last Layer Datasets CLINC ATIS Snips Movie BERT Linear 96.19 97.76 97.97 97.26 DRM* 96.66 98.21 98.23 97.87 RoBERTa Linear 96.82 97.64 98.07 98.07 DRM* 97.15 98.31 98.87 98.63 Our DRM methods (marked by *) are significantly better than baseline model on all datasets with p-value < 0 .",
"backbones, DRM models are significantly better than conventional BERT-linear classification models with p-value < 0 .",
"05 .",
"Results on CLINC Dataset: Table 2 reports the OOD detection results on CLINC dataset.",
"This result covers all existing work and our enhanced baselines.",
"We focus on analyzing the contribution by each of our proposed techniques, DRM and L-Mahalanobis.",
"The first three rows report the performance of existing approaches based on the original designs in their papers (ERAEPOG in grey uses additional unlabeled data).",
"Unfortunately, we observe that their performance is even worse than the simple confidence-based approach via BERT Table 4: OOD Detection Results on Snips/ATIS/Movie Datasets (RoBERTa Model Finetuning) OOD Method OOD Evaluation EER( ) FPR95( ) Detection Error( ) AUROC( ) AUPR In( ) AUPR Out( ) IND dataset: Snips; OOD Datasets: CLINC OOD/ATIS/Movie En-ConGAN 54.50 /63.05 /54.22 99.16 /99.87 /99.10 42.61 /49.10 /37.32 39.03 /30.88 /45.64 37.15 /34.47 /30.03 51.23 /45.70 /52.59 Confidence 9.91 /17.83 /22.22 14.94 /47.43 /51.85 9.18 /11.17 /19.34 96.09 /92.03 /87.44 94.78 /92.65 /97.67 97.21 /92.29 /55.16 Entropy 10.21 /18.05 /23.15 14.54 /45.04 /52.68 9.25 /10.77 /19.58 96.32 /92.44 /87.12 94.90 /92.94 /97.60 97.53 /92.99 /52.27 ODIN 10.01 /16.93 /23.15 14.22 /39.04 /58.33 9.43 /9.64 /23.01 96.46 /93.81 /83.58 94.59 /93.99 /96.63 97.75 /94.53 /47.36 G-ODIN 9.65 /15.16 /22.02 13.31 /37.86 /55.67 8.32 /8.55 /21.82 97.21 /94.73 /85.60 95.70 /95.04 /97.73 98.02 /95.44 /50.38 En-AE 4.40 /4.37 /3.59 4.18 /3.59 /3.08 4.25 /4.00 /3.64 98.56 /98.12 /88.96 97.41 /98.92 /94.39 98.12 /95.34 /86.84 Maha 3.90 /1.81/11.11 2.66 /2.23 /5.58 3.47 /1.36 /10.21 98.79 /99.74 /95.61 97.73 /99.75 /99.22 99.21 /99.77 /76.61 DRM+L-Maha* 3.00/1.79/2.78 1.95/0.00/2.78 2.63/1.16/3.16 98.90/99.79/98.53 98.15/99.79/99.76 99.24/99.80/87.02 IND dataset: ATIS; OOD Datasets: CLINC OOD/Snips/Movie En-ConGAN 21.60 /19.74 /23.28 81.52 /86.33 /93.77 15.51 /15.54 /16.03 82.34 /81.79 /79.32 84.52 /89.35 /58.36 72.74 /60.20 /89.14 Confidence 10.21 /8.52 /10.19 20.50 /12.92 /17.59 9.28 /8.36 /9.33 96.99 /97.84 /96.62 97.19 /98.57 /99.56 97.04 /96.99 /84.27 Entropy 9.91 /8.84 /10.12 21.67 /13.75 /17.59 9.11 /8.16 /9.38 97.06 /97.93 /96.68 97.25 /98.62 /99.57 97.11 /97.14 /85.02 ODIN 9.11 /8.36 /10.08 21.32 /14.39 /18.52 7.50 /6.15 /9.37 97.16 /98.00 /96.73 97.39 /98.68 /99.58 97.16 /97.18 /84.88 G-ODIN 8.75 /8.01 /9.97 20.87 /13.44 /17.76 7.31 /6.02 /8.98 97.27 /98.11 /96.85 97.46 /98.76 /99.59 97.28 /97.32 /85.90 En-AE 4.00 / 2.09 /3.69 2.20 / 0.00 /0.35 3.45 / 1.33 /1.97 99.41 / 99.83 /99.63 99.43 / 99.89 /98.72 99.43 / 99.74 /97.93 Maha 4.00 /3.85 /6.48 12.13 /8.06 /11.64 3.76 /2.94 /5.04 99.18 /99.47 /98.72 98.78 /99.45 /99.71 99.46/99.49/95.45 DRM+L-Maha* 2.70/2.09/1.85 1.30 /0.32/ 0.00 2.55 /2.01/ 1.23 99.48 /99.70/ 99.78 99.51 /99.82/ 99.97 99.47 /99.50/ 98.22 IND dataset: Movie; OOD Datasets: CLINC OOD/ATIS/Snips En-ConGAN 45.90 /15.12 /41.09 44.05 /14.35 /39.59 22.95 /7.56 /20.55 43.85 /57.44 /45.78 85.21 /88.23 /90.40 14.68 /17.56 /10.09 Confidence 19.22 /16.70 /18.81 36.81 /47.94 /47.52 18.51 /15.15 /18.53 91.65 /91.99 /90.53 98.11 /98.50 /98.68 76.78 /67.58 /59.63 Entropy 19.12 /17.26 /19.13 34.64 /44.24 /44.80 18.25 /16.12 /18.87 91.79 /92.14 /90.72 98.11 /98.50 /98.69 78.66 /70.87 /63.96 ODIN 19.42 /18.95 /19.94 34.43 /39.91 /39.38 18.24 /18.38 /19.33 91.34 /91.40 /90.03 97.96 /98.29 /98.53 78.56 /71.62 /65.18 G-ODIN 18.61 /18.23 /19.25 34.19 /36.42 /37.03 18.15 /17.27 /18.91 91.86 /91.97 /90.63 98.21 /98.34 /98.70 78.98 /72.07 /66.79 En-AE 13.70 /7.28 /16.05 43.42 /16.05 /32.29 11.00 /4.46 /11.87 94.57 /93.56 /92.23 98.91 /99.58 /99.01 77.12 /76.13 /68.75 Maha 3.90 /3.41 /6.11 6.02 /2.35 /15.40 3.72 /3.02 /6.02 99.37 /99.43 /98.63 99.81 /99.89/99.82 97.82 /97.27 /91.44 DRM+L-Maha* 3.70/3.36/4.66 2.56/1.01/4.34 3.61/2.85/4.58 99.48/99.53/99.06 99.89/99.92/99.88 97.90/97.38/93.85 In each OOD method for an IND dataset, / separates the results for different OOD datasets.",
"Our method (*) is significantly better than baseline models with p-value < 0 .",
"01 (marked by ) and p-value < 0 .",
"05 (marked by ) using t-test in most cases.",
"finetuning baseline (row 5).",
"Thus, we mainly focus on comparing our method with strong baselines with BERT and RoBERTa models.",
"For a given OOD detection method, we find that their combinations with DRM consistently perform better than those with standard models.",
"The improvement is at least 1-2% for all metrics against our enhanced baselines.",
"Among all OOD detection approaches, our proposed L-Mahalanobis OOD detection approach achieves the best performance for both linear and DRM combined BERT and RoBERTa models.",
"It is not surprising to observe that our DRM method combined with a better pre-trained RoBERTa model achieves larger OOD detection performance improvement.",
"Note that our customized En-AE performs much better than most other methods since we incorporated the enhanced reconstruction capability with pre-trained BERT models.",
"However, En-AE cannot utilize all BERT layers as our proposed L-Mahalanobis method, resulting in worse performance.",
"In addition, DRM+L-Mahalanobis models are significantly better than existing methods and enhanced baselines with p-value < 0 .",
"01 on most metrics for both BERT and RoBERTa backbones.",
"Ablation Study on CLINC Dataset: We analyze how our two novel components, DRM model and L-Mahalanobis, impact the performance.",
"The rows with DRM in Last Layer column of Table 2 show the performance of DRM model.",
"As one can see, for all OOD methods, DRM consistently performs better than the conventional Lin-ear last layer.",
"Specifically, the DRM and Confidence combo also outperforms its closest baseline G-ODIN.",
"This validates the effectiveness of our disentangled logits design in DRM based on the mathematical analysis of overconfidence.",
"It also shows that our new domain loss can indeed enhance the model awareness that all training data is IND.",
"The rows with L-Mahalanobis in OOD Method column of Table 2 outperform other OOD methods with the same model and last layer.",
"Compared with its closest baseline Mahalanobis, the better performance of L-Mahalanobis validates the usefulness of all layers' features in various models.",
"Results on ATIS/Snips/Movie Datasets: Since our strong baselines on pre-trained RoBERTa model showed better results on CLINC, we next evaluate other results finetuned on RoBERTa",
"model.",
"When taking each dataset as IND, we use the other two mutually exclusive datasets and CLINC OOD as OOD datasets for evaluating OOD detection performance.",
"As one can see in Table 4, our method outperforms other approaches on both Snips and movie IND datasets.",
"For ATIS IND dataset, En-AE for Snips OOD dataset achieves almost perfect performance.",
"This is because ATIS and Snips are almost completely non-overlapping and ATIS is well designed with carefully selected varieties and entities in the airline travel domain.",
"When taking Snip as IND and ATIS as OOD, it is interesting to see that our method achieves better performance than En-AE.",
"This is because that Snips contains a large number of entities such that the reconstruction error will be lower and become less separable than that in ATIS OOD utterances.",
"For both Snips and Movie IND datasets, DRM+L-Mahalanobis are significantly better than baseline methods with p-value < 0 .",
"01 in most cases for all OOD datasets.",
"For ATIS IND dataset, DRM+L-Mahalanobis shows similar behavior except En-AE since it is easier to train an autoencoder model for ATIS IND dataset due to its carefully collected clean training utterances.",
"We provide a quantitative analysis by visualizing our two methods, DRM and L-Mahalanobis.",
"Figure 4 plots the histograms of detection scores for OOD and IND data.",
"Compared with Figure",
"4(a), DRM significantly reduces the overlap between OOD and IND in Figure",
"4(b).",
"L-Mahalanobis utilizes features from all layers to further reduce the overlap in Figure",
"4(c).",
"Moreover, the score distributions from left to right in Figure 4, imply that a larger entropy of all score reflects a better uncertainty modeling.",
"Figure 5 visualizes the utterance representations learned with or without DRM.",
"The red IND data are tightly clustered within classes (totally 150 CLINC IND classes), while the blue OOD data spread arbitrarily.",
"As one can see, the blue dots in Figure",
"5(b) have less overlap with red dots, indicating the DRM helps to learn the utterance representation to better disentangle IND and OOD data.",
"This paper proposes using only IND utterances to conduct intent classification and OOD detection for SLU in an open-world setting.",
"The proposed DRM has a structure of two branches to avoid overconfidence and achieves a better generalization.",
"The evaluation shows that our method can achieve state-of-the-art performance on various SLU benchmark and in-house datasets for both IND intent classification and OOD detection.",
"In addition, thanks to the generic of our DRM design and with the recent extensive use of BERT on different data modalities, our work can contribute to improving both in-domain classification robustness and out-of-domain detection robustness for various classification models such as image classification, sound classification, vision-language classifications.",
"Our proposed method in this paper has been deployed in the domain classification SLU model for Samsung Bixby voice assistant.",
"In addition to SLU, our work could have a broader impact on other applications, which can be benefited from having a more robust classification system.",
"For example, our method can help the robot to detect objects more accurately or stop safely by correctly identifying unknown objects, classify environmental sounds or detect anomaly sounds, and so on.",
"Moreover, by better detecting the OOD samples that are different from the training data distribution, our method can facilitate to handle distributional shifts between training data and practical usage data."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"method",
"abstain"
] |
[
"Massively multilingual models for neural machine translation (NMT) are theoretically attractive, but often underperform bilingual models and deliver poor zero-shot translations.",
"In this paper, we explore ways to improve them.",
"We argue that multilingual NMT requires stronger modeling capacity to support language pairs with varying typological characteristics, and overcome this bottleneck via language-specific components and deepening NMT architectures.",
"We identify the off-target translation issue (i.e. translating into a wrong target language) as the major source of the inferior zero-shot performance, and propose random online backtranslation to enforce the translation of unseen training language pairs.",
"Experiments on OPUS-100 (a novel multilingual dataset with 100 languages) show that our approach substantially narrows the performance gap with bilingual models in both one-to-many and many-to-many settings, and improves zero-shot performance by 10 BLEU, approaching conventional pivot-based methods.",
"1 1 Introduction With the great success of neural machine translation (NMT) on bilingual datasets (Bahdanau et al., 2015; Vaswani et al., 2017; Barrault et al., 2019), there is renewed interest in multilingual translation where a single NMT model is optimized for the translation of multiple language pairs (Firat et al., 2016a; Johnson et al., 2017; Lu et al., 2018; Aharoni et al., 2019).",
"Multilingual NMT eases model deployment and can encourage knowledge transfer among related language pairs (Lakew et al., 2018; Tan et al., 2019), improve low-resource translation (Ha et al., 2016; Arivazhagan et al., 2019b), 1 We release our code at https://github.",
"Table 1 : Illustration of the off-target translation issue with French German zero-shot translations with a multilingual NMT model.",
"Our baseline multilingual NMT model often translates into the wrong language for zero-shot language pairs, such as copying the source sentence or translating into English rather than German.",
"and enable zero-shot translation (i.e. direct translation between a language pair never seen in training) (Firat et al., 2016b; Johnson et al., 2017; Al-Shedivat and Parikh, 2019; Gu et al., 2019).",
"Despite these potential benefits, multilingual NMT tends to underperform its bilingual counterparts (Johnson et al., 2017; Arivazhagan et al., 2019b) and results in considerably worse translation performance when many languages are accommodated (Aharoni et al., 2019).",
"Since multilingual NMT must distribute its modeling capacity between different translation directions, we ascribe this deteriorated performance to the deficient capacity of single NMT models and seek solutions that are capable of overcoming this capacity bottleneck.",
"We propose language-aware layer normalization and linear transformation to relax the representation constraint in multilingual NMT models.",
"The linear transformation is inserted in-between the encoder and the decoder so as to facilitate the induction of language-specific translation correspondences.",
"We also investigate deep NMT architectures (Wang et al., 2019a; Zhang et al., 2019) aiming at further reducing the performance gap with bilingual methods.",
"Another pitfall of massively multilingual NMT is its poor zero-shot performance, particularly compared to pivot-based models.",
"Without access to parallel training data for zero-shot language pairs, multilingual models easily fall into the trap of off-target translation where a model ignores the given target information and translates into a wrong language as shown in Table 1.",
"To avoid such a trap, we propose the random online backtranslation (ROBT ) algorithm.",
"ROBT finetunes a pretrained multilingual NMT model for unseen training language pairs with pseudo parallel batches generated by back-translating the target-side training data.",
"2 We perform backtranslation (Sennrich et al., 2016a) into randomly picked intermediate languages to ensure good coverage of 10,000 zero-shot directions.",
"Although backtranslation has been successfully applied to zero-shot translation (Firat et al., 2016b; Gu et al., 2019; Lakew et al., 2019), whether it works in the massively multilingual set-up remained an open question and we investigate it in our work.",
"For experiments, we collect OPUS-100, a massively multilingual dataset sampled from OPUS (Tiedemann, 2012).",
"OPUS-100 consists of 55M English-centric sentence pairs covering 100 languages.",
"As far as we know, no similar dataset is publicly available.",
"3 We have released OPUS-100 to facilitate future research.",
"4 We adopt the Transformer model (Vaswani et al., 2017) and evaluate our approach under one-to-many and many-to-many translation settings.",
"Our main findings are summarized as follows: Increasing the capacity of multilingual NMT yields large improvements and narrows the performance gap with bilingual models.",
"Low-resource translation benefits more from the increased capacity.",
"Language-specific modeling and deep NMT architectures can slightly improve zero-shot 2 Note that backtranslation actually converts the zero-shot problem into a zero-resource problem.",
"We follow previous work and continue referring to zero-shot translation, even when using synthetic training data.",
"3 Previous studies (Aharoni et al., 2019; Arivazhagan et al., 2019b) adopt in-house data which was not released.",
"4 https://github.com/EdinburghNLP/ opus-100-corpus translation, but fail to alleviate the off-target translation issue.",
"Finetuning multilingual NMT with ROBT substantially reduces the proportion of off-target translations (by 50%) and delivers an improvement of 10 BLEU in zero-shot settings, approaching the conventional pivot-based method.",
"We show that finetuning with ROBT converges within a few thousand steps.",
"Pioneering work on multilingual NMT began with multitask learning, which shared the encoder for one-to-many translation (Dong et al., 2015) or the attention mechanism for many-to-many translation (Firat et al., 2016a).",
"These methods required a dedicated encoder or decoder for each language, limiting their scalability.",
"By contrast, Lee et al. (2017) exploited character-level inputs and adopted a shared encoder for many-to-one translation.",
"Ha et al. (2016) and Johnson et al. (2017) further successfully trained a single NMT model for multilingual translation with a target language symbol guiding the translation direction.",
"This approach serves as our baseline.",
"Still, this paradigm forces different languages into one joint representation space, neglecting their linguistic diversity.",
"Several subsequent studies have explored different strategies to mitigate this representation bottleneck, ranging from reorganizing parameter sharing (Black-wood et al., 2018; Sachan and Neubig, 2018; Lu et al., 2018; Wang et al., 2019c; Vzquez et al., 2019), designing language-specific parameter generators (Platanios et al., 2018), decoupling multilingual word encodings (Wang et al., 2019b) to language clustering (Tan et al., 2019).",
"Our language-specific modeling continues in this direction, but with a special focus on broadening normalization layers and encoder outputs.",
"Multilingual NMT allows us to perform zero-shot translation, although the quality is not guaranteed (Firat et al., 2016b; Johnson et al., 2017).",
"We observe that multilingual NMT often translates into the wrong target language on zero-shot directions (Table 1), resonating with the missing ingredient problem' (Arivazhagan et al., 2019a) and the spurious correlation issue (Gu et al., 2019).",
"Approaches to improve zero-shot performance fall into two categories:",
"1) developing novel cross-lingual regulariz-ers, such as the alignment regularizer (Arivazhagan et al., 2019a) and the consistency regularizer (Al-Shedivat and Parikh, 2019); and",
"2) generating artificial parallel data with backtranslation (Firat et al., 2016b; Gu et al., 2019; Lakew et al., 2019) or pivot-based translation (Currey and Heafield, 2019).",
"The proposed ROBT algorithm belongs to the second category.",
"In contrast to Gu et al. (2019) and Lakew et al. (2019), however, we perform online backtranslation for each training step with randomly selected intermediate languages.",
"ROBT avoids decoding the whole training set for each zero-shot language pair and can therefore scale to massively multilingual settings.",
"Our work belongs to a line of research on massively multilingual translation (Aharoni et al., 2019; Arivazhagan et al., 2019b).",
"Aharoni et al. (2019) demonstrated the feasibility of massively multilingual NMT and reported encouraging results.",
"We continue in this direction by developing approaches that improve both multilingual and zero-shot performance.",
"Independently from our work, Arivazhagan et al. (2019b) also find that increasing model capacity with deep architectures (Wang et al., 2019a; Zhang et al., 2019) substantially improves multilingual performance.",
"A concurrent related work is (Bapna and Firat, 2019), which introduces task-specific and lightweight adaptors for fast and scalable model adaptation.",
"Compared to these adaptors, our language-aware layers are jointly trained with the whole NMT model from scratch without relying on any pretraining.",
"We briefly review the multilingual approach (Ha et al., 2016; Johnson et al., 2017) and the Transformer model (Vaswani et al., 2017), which are used as our baseline.",
"Johnson et al. (2017) rely on prepending tokens specifying the target language to each source sentence.",
"In that way a single NMT model can be trained on the modified multilingual dataset and used to perform multilingual translation.",
"Given a source sentence x =( x 1 , x 2 , . . . , x | x | ) , its target reference y =( y 1 , y 2 , . . . , y | y | ) and the target language token t 5 , multilingual NMT translates under the encoder-decoder framework (Bahdanau et al., 2015): H = Encoder ([ t, x ]) , (1) S = Decoder ( y , H ) , (2) 5 t is in the form of <2X> where X is a language name, such as <2EN> meaning translating into English .",
"where H R | x | d / S R | y | d denote the en-coder/decoder output.",
"d is the model dimension.",
"We employ the Transformer (Vaswani et al., 2017) as the backbone NMT model due to its superior multilingual performance (Lakew et al., 2018).",
"The encoder is a stack of L = 6 identical layers, each containing a self-attention sublayer and a point-wise feedforward sublayer.",
"The decoder follows a similar structure, except for an extra cross-attention sublayer used to condition the decoder on the source sentence.",
"Each sublayer is equipped with a residual connection (He et al., 2015), followed by layer normalization (Ba et al., 2016, LN ( ) ): a = LN ( a | g , b ) = a (cid:12) g + b , (3) where (cid:12) denotes element-wise multiplication, and are the mean and standard deviation of the input vector a R d , respectively.",
"g R d and b R d are model parameters.",
"They control the sharpness and location of the regularized layer output a .",
"Layer normalization has proven effective in accelerating model convergence (Ba et al., 2016).",
"Despite its success, multilingual NMT still suffers from",
"1) insufficient modeling capacity , where including more languages results in reduction in translation quality (Aharoni et al., 2019); and",
"2) off-target translation , where models translate into a wrong target language on zero-shot directions (Ari-vazhagan et al., 2019a).",
"These drawbacks become severe in massively multilingual settings and we explore approaches to alleviate them.",
"We hypothesize that the vanilla Transformer has insufficient capacity and search for model-level strategies such as deepening Transformer and devising language-specific components.",
"By contrast, we regard the lack of parallel data as the reason behind the off-target issue.",
"We resort to data-level strategy by creating, in online fashion, artificial parallel training data for each zero-shot language pair in order to encourage its translation.",
"Deep Transformer One natural way to improve the capacity is to increase model depth.",
"Deeper neural models are often capable of inducing more generalizable (abstract') representations and capturing more complex dependencies and have shown encouraging performance on bilingual translation (Bapna et al., 2018; Zhang et al., 2019; Wang et al., 2019a).",
"Language-aware Layer Normalization Regardless of linguistic differences, layer normalization in multilingual NMT simply constrains all languages into one joint Gaussian space, which makes learning more difficult.",
"We propose to relax this restriction by conditioning the normalization on the given target language token t (LALN for short) as follows: a = LN ( a | g t , b t ) .",
"We apply this formula to all normalization layers, and leave the study of conditioning on source language information for the future.",
"Language-aware Linear Transformation Different language pairs have different translation correspondences or word alignments (Koehn, 2010).",
"In addition to LALN , we introduce a target-language-aware linear transformation (LALT for short) between the encoder and the decoder to enhance the freedom of multilingual NMT in expressing flexible translation relationships.",
"We adapt Eq.",
"(2) as follows: S = Decoder ( y , HW t ) , (5) where W t R d d denotes model parameters.",
"Note that adding one more target language in LALT brings in only one weight matrix.",
"6 Compared to existing work (Firat et al., 2016b; Sachan and Neubig, 2018), LALT reaches a better trade-off between expressivity and scalability.",
"Random Online Backtranslation Prior studies on backtranslation for zero-shot translation decode the whole training set for each zero-shot language pair (Gu et al., 2019; Lakew et al., 2019), and scalability to massively multilingual translation is questionable in our setting, the number of zero-shot translation directions is 9702.",
"We address scalability by performing online backtranslation paired with randomly sampled intermediate languages.",
"Algorithm 1 shows the detail of ROBT , where for each training instance ( x k , y k , t k ) , we uniformly sample an intermediate language t (cid:48) k ( t k (cid:54) = t (cid:48) k ), back-translate y k into 6 We also attempted to factorize W t into smaller matri-ces/vectors to reduce the number of parameters.",
"Unfortunately, the final performance was rather disappointing.",
"Algorithm 1: Algorithm for Random Online Backtranslation Input : Multilingual training data, D ; Pretrained multilingual model, M ; Maximum finetuning step, N ; Finetuning batch size, B ; Target language set, T ; Output: Zero-shot enabled model, M 1 i 0 2 while i N not converged do 3 B sample batch from D 4 for k 1 to B do 5 ( x k , y k , t k ) B k 6 t (cid:48) k Uniform ( T ) such that t (cid:48) k (cid:54) = t k 7 x (cid:48) k M ([ t (cid:48) k , y k ]) // backtrans t k t (cid:48) k to produce training example for t (cid:48) k t k 8 B B ( x (cid:48) k , y k , t k ) 9 Optimize M using B 10 i i + 1 11 return M t (cid:48) k to obtain x (cid:48) k , and train on the new instance ( x (cid:48) k , y k , t k ) .",
"Although x (cid:48) k may be poor initially (translations are produced on-line by the model being trained), ROBT still benefits from the translation signal of t (cid:48) k t k .",
"To reduce the computational cost, we implement batch-based greedy decoding for line 7.",
"Recent work has scaled up multilingual NMT from a handful of languages to tens or hundreds, with many-to-many systems being capable of translation in thousands of directions.",
"Following Aharoni et al. (2019), we created an English-centric dataset, meaning that all training pairs include English on either the source or target side.",
"Translation for any language pair that does not include English is zero-shot or must be pivoted through English.",
"We created OPUS-100 by sampling data from the OPUS collection (Tiedemann, 2012).",
"OPUS-100 is at a similar scale to Aharoni et al. (2019)'s, with 100 languages (including English) on both sides and up to 1M training pairs for each language pair.",
"We selected the languages based on the volume of parallel data available in OPUS.",
"The OPUS collection is comprised of multiple corpora, ranging from movie subtitles to GNOMEID Model Architecture L #Param BLEU 94 WR BLEU 4 1 Transformer, Bilingual 6 106M -20.90 2 Transformer, Bilingual 12 150M -22.75 3 Transformer 6 106M 24.64 ref 18.95 4 3 + MATT 6 99M 23.81 20.2 17.95 5 4 + LALN 6 102M 24.22 28.7 18.50 6 4 + LALT 6 126M 27.11 72.3 20.28 7 4 + LALN + LALT 6 129M 27.18 75.5 20.08 8 4 12 137M 25.69 81.9 19.13 9 7 12 169M 28.04 91.5 19.93 10 7 24 249M 29.60 92.6 21.23 Table 2 : Test BLEU for one-to-many translation on OPUS-100 (100 languages).",
"Bilingual : bilingual NMT, L : model depth (for both encoder and decoder), #Param : parameter number, WR : win ratio (%) compared to ref ( 3 (cid:13) ), MATT : the merged attention (Zhang et al., 2019).",
"LALN and LALT denote the proposed language-aware layer normalization and linear transformation, respectively.",
"BLEU 94 / BLEU 4 : average BLEU over all 94 translation directions in test set and En De/Zh/Br/Te, respectively.",
"Higher BLEU and WR indicate better result.",
"Best scores are highlighted in bold .",
"documentation to the Bible.",
"We did not curate the data or attempt to balance the representation of different domains, instead opting for the simplest approach of downloading all corpora for each language pair and concatenating them.",
"We randomly sampled up to 1M sentence pairs per language pair for training, as well as 2000 for validation and 2000 for testing.",
"7 To ensure that there was no overlap (at the monolingual sentence level) between the training and validation/test data, we applied a filter during sampling to exclude sentences that had already been sampled.",
"Note that this was done cross-lingually, so an English sentence in the Portuguese-English portion of the training data could not occur in the Hindi-English test set, for instance.",
"OPUS-100 contains approximately 55M sentence pairs.",
"Of the 99 language pairs, 44 have 1M sentence pairs of training data, 73 have at least 100k, and 95 have at least 10k.",
"To evaluate zero-shot translation, we also sampled 2000 sentence pairs of test data for each of the 15 pairings of Arabic, Chinese, Dutch, French, German, and Russian.",
"Filtering was used to exclude sentences already in OPUS-100.",
"We perform one-to-many (English-X) and many-to-many (English-X X-English) translation on OPUS-100 ( |T | is 100).",
"We apply byte pair encoding (BPE) (Sennrich et al., 2016b; Kudo and Richardson, 2018) to handle multilingual words with a joint vocabulary size of 64k.",
"We randomly 7 For efficiency, we only use 200 sentences per language pair for validation in our multilingual experiments.",
"shuffle the training set to mix instances of different language pairs.",
"We adopt BLEU (Papineni et al., 2002) for translation evaluation with the toolkit SacreBLEU (Post, 2018) 8 .",
"We employ the langdetect library 9 to detect the language of translations, and measure the translation-language accuracy for zero-shot cases.",
"Rather than providing numbers for each language pair, we report average BLEU over all 94 language pairs with test sets (BLEU 94 ).",
"We also show the win ratio (WR), counting the proportion where our approach outperforms its baseline.",
"Apart from multilingual NMT, our baselines also involve bilingual NMT and pivot-based translation (only for zero-shot comparison).",
"We select four typologically different target languages (Ger-man/De, Chinese/Zh, Breton/Br, Telugu/Te) with varied training data size for comparison to bilingual models as applying bilingual NMT to each language pair is resource-consuming.",
"We report average BLEU over these four languages as BLEU 4 .",
"We reuse the multilingual BPE vocabulary for bilingual NMT.",
"We train all NMT models with the Transformer base settings (512/2048, 8 heads) (Vaswani et al., 2017).",
"We pair our approaches with the merged attention (MATT ) (Zhang et al., 2019) to reduce training time.",
"Other details about model settings are in the Appendix.",
"Table 2 summarizes the results.",
"The inferior performance of multilingual NMT ( 3 (cid:13) ) against its 8 Signature: BLEU+case.mixed+numrefs.1+smooth.exp+ tok.13a+version.1.4.1 9 https://github.com/Mimino666/ langdetect ID Model Architecture L #Param w/o ROBT w/ ROBTBLEU 94 WR BLEU 4 BLEU 94 WR BLEU 4 1 Transformer, Bilingual 6 110M -20.28 -2 Transformer 6 110M 19.50 ref 15.35 18.75 4.3 14.73 3 2 + MATT 6 103M 18.49 5.3 14.90 17.85 6.4 14.38 4 3 + LALN + LALT 6 133M 21.39 78.7 18.13 20.81 69.1 17.45 5 3 12 141M 20.77 94.7 16.08 20.24 84.0 15.80 6 4 12 173M 22.86 97.9 19.25 22.39 97.9 18.23 7 4 24 254M 23.96 100.0 19.83 23.36 97.9 19.45 Table 3 : English X test BLEU for many-to-many translation on OPUS-100 (100 languages).",
"WR : win ratio (%) compared to ref ( 2 (cid:13) w/o ROBT ).",
"ROBT denotes the proposed random online backtranslation method.",
"Table 4 : X English test BLEU for many-to-many translation on OPUS-100 (100 languages).",
"WR : win ratio (%) compared to ref ( 2 (cid:13) w/o ROBT ).",
"bilingual counterpart ( 1 (cid:13) ) reflects the capacity issue (-1.95 BLEU 4 ).",
"Replacing the self-attention with MATT slightly deteriorates performance (-0.83 BLEU 94 3 (cid:13) 4 (cid:13) ); we still use MATT for more efficiently training deep models.",
"Our ablation study ( 4 (cid:13) 7 (cid:13) ) shows that enriching the language awareness in multilingual NMT substantially alleviates this capacity problem.",
"Relaxing the normalization constraints with LALN gains 0.41 BLEU 94 with 8.5% WR ( 4 (cid:13) 5 (cid:13) ).",
"Decoupling different translation relationships with LALT delivers an improvement of 3.30 BLEU 94 and 52.1% WR ( 4 (cid:13) 6 (cid:13) ).",
"Combining LALT and LALN demonstrates their complementarity (+3.37 BLEU 94 and +55.3% WR, 4 (cid:13) 7 (cid:13) ), significantly outperforming the multilingual baseline (+2.54 BLEU 94 , 3 (cid:13) 7 (cid:13) ), albeit still behind the bilingual models (-0.82 BLEU 4 , 1 (cid:13) 7 (cid:13) ).",
"Deepening the Transformer also improves the modeling capacity (+1.88 BLEU 94 , 4 (cid:13) 8 (cid:13) ).",
"Although deep Transformer performs worse than LALN +L ALT under a similar number of model parameters in terms of BLEU (-1.49 BLEU 94 , 7 (cid:13) 8 (cid:13) ), it shows more consistent improvements across different language pairs (+6.4% WR).",
"We obtain better performance when integrating all approaches ( 9 (cid:13) ).",
"By increasing the model depth to 24 (10 (cid:13) ), Transformer with our approach yields a score of 29.60 BLEU 94 and 21.23 BLEU 4 , beating the baseline ( 3 (cid:13) ) on 92.6% tasks and outperforming the base bilingual model ( 1 (cid:13) ) by 0.33 BLEU 4 .",
"Our approach significantly narrows the performance gap between multilingual NMT and bilingual NMT (20.90 BLEU 4 21.23 BLEU 4 , 1 (cid:13) 10 (cid:13) ), although similarly deepening bilingual models surpasses our approach by 1.52 BLEU 4 (10 (cid:13) 2 (cid:13) ).",
"We train many-to-many NMT models on the concatenation of the one-to-many dataset (English X) and its reversed version (X English), and evaluate the zero-shot performance on X X language pairs.",
"Table 3 and Table 4 show the translation results for English X and X English, respectively.",
"10 We focus on the translation performance w/o ROBT in this subsection.",
"Compared to the one-to-many translation, the many-to-many translation must accommodate twice as many translation directions.",
"We observe that many-to-many NMT models suffer more se-10 Note that the one-to-many training and test sets were not yet aggressively filtered for sentence overlap as described in Section 5, so results in Table 2 and Table 3 are not directly comparable.",
"Table 5 : Test BLEU for High/Medium/Low ( High/Med/Low ) resource language pairs in many-to-many setting on OPUS-100 (100 languages).",
"We report average BLEU for each category.",
"Table 6 : Test BLEU and translation-language accuracy for zero-shot translation in many-to-many setting on OPUS-100 (100 languages).",
"BLEU zero / ACC zero : average BLEU/accuracy over all zero-shot translation directions in test set, Pivot : the pivot-based translation that first translates one source sentence into English (X English NMT), and then into the target language (English X NMT).",
"Lower accuracy indicates severe off-target translation.",
"The average Pearson correlation coefficient between language accuracy and the corresponding BLEU is 0.93 (significant at p < 0 . 01 ).",
"rious capacity issues on English X tasks (-4.93 BLEU 4 , 1 (cid:13) 2 (cid:13) in Table 3 versus -1.95 BLEU 4 in Table 2), where the deep Transformer with LALN + LALT effectively reduces this gap to -0.45 BLEU 4 ( 1 (cid:13) 7 (cid:13) , Table 3), resonating with our findings from Table 2.",
"By contrast, multilingual NMT benefits X English tasks considerably from the multitask learning alone, outperforming bilingual NMT by 2.13 BLEU 4 ( 1 (cid:13) 2 (cid:13) , Table 4).",
"Enhancing model capacity further enlarges this margin to +4.80 BLEU 4 ( 1 (cid:13) 7 (cid:13) , Table 4).",
"We find that the overall quality of English X translation (19.50/23.96 BLEU 94 , 2 (cid:13) / 7 (cid:13) , Table 3) lags far behind that of its X English counterpart (27.60/31.36 BLEU 94 , 2 (cid:13) /12 (cid:13) , Table 4), regardless of the modeling capacity.",
"We ascribe this to the highly skewed training data distribution, where half of the training set uses English as the target.",
"This strengthens the ability of the decoder to translate into English, and also encourages knowledge transfer for X English language pairs.",
"LALN and LALT show the largest benefit for English X (+2.9 BLEU 94 , 3 (cid:13) 4 (cid:13) , Table 3), and only a small benefit for X English (+0.6 BLEU 94 , 3 (cid:13) 4 (cid:13) , Table 4).",
"This makes sense considering that LALN and LALT are specific to the target language, so capacity is mainly increased for English X. Deepening the Transformer yields benefits in both directions (+2.57 BLEU 94 for English X, +3.86 BLEU 94 for X English; 4 (cid:13) 7 (cid:13) , Tables 3 and 4).",
"Our multilingual training data is distributed unevenly across different language pairs, which could affect the knowledge transfer delivered by language-aware modeling and deep Transformer in multilingual translation.",
"We investigate this effect by grouping different language pairs in OPUS-100 into three categories according to their training data size: High ( 0 . 9 M, 45), Low ( < 0 .",
"1 M,",
"18) and Medium (others, 31).",
"Table 5 shows the results.",
"Language-aware modeling benefits low-resource language pairs the most on English X translation (+5.82 BLEU, Low versus +1.37/+3.11 BLEU, High/Med, 2 (cid:13) 3 (cid:13) ), but has marginal impact on X English translation as analyzed in Section 6.3.",
"By contrast, deep Transformers yield similar benefits across different data scales (+2.38 average BLEU, English X and +2.31 average BLEU, X English, 2 (cid:13) 4 (cid:13) ).",
"We obtain the best performance by integrating both ( 1 (cid:13) 6 (cid:13) ) with a clear positive transfer to low-resource language pairs.",
"Previous work shows that a well-trained multilingual model can do zero-shot X Y translation directly (Firat et al., 2016b; Johnson et al., 2017).",
"Our results in Table 6 reveal that the translation quality is rather poor (3.97 BLEU zero , 2 (cid:13) w/o ROBT ) compared to the pivot-based bilingual baseline (12.98 BLEU zero , 1 (cid:13) ) under the massively multilingual setting (Aharoni et al., 2019), although translations into different target languages show varied performance.",
"The marginal gain by the deep Transformer with LALN + LALT (+1.44 BLEU zero , 2 (cid:13) 6 (cid:13) , w/o ROBT ) suggests that weak model capacity is not the major cause of this inferior performance.",
"In a manual analysis on the zero-shot NMT outputs, we found many instances of off-target translation (Table 1).",
"We use translation-language accuracy to measure the proportion of translations that are in the correct target language.",
"Results in Table 6 show that there is a huge accuracy gap between the multilingual and the pivot-based method (-48.83% ACC zero , 1 (cid:13) 2 (cid:13) , w/o ROBT ), from which we conclude that the off-target translation issue is one source of the poor zero-shot performance.",
"We apply ROBT to multilingual models by finetuning them for an extra 100k steps with the same batch size as for training.",
"Table 6 shows that ROBT substantially improves ACC zero by 35% 50%, reaching 85% 87% under different model settings.",
"The multilingual Transformer with ROBT achieves a translation improvement of up to 10.11 BLEU zero ( 2 (cid:13) w/o ROBT 7 (cid:13) w/ ROBT ), outperforming the bilingual baseline by 1.1 BLEU zero ( 1 (cid:13) w/o ROBT 7 (cid:13) w/ ROBT ) and approaching the pivot-based multilingual baseline (-0.63 BLEU zero , 8 (cid:13) w/o ROBT 7 (cid:13) w/ ROBT ).",
"11 The strong Pearson correlation between the accuracy and BLEU (0.92 on average, significant at p < 0 . 01 ) suggests that the improvement on the off-target translation issue explains the increased translation performance to a large extent.",
"Results in Table 3 and 4 show that ROBT 's success on zero-shot translation comes at the cost of sacrificing 0.50 BLEU 94 and 4% WR on English X and X English translation.",
"We also note that models with more capacity yield higher 11 Note that ROBT improves all zero-shot directions due to its randomness in sampling the intermediate languages.",
"We do not bias ROBT to the given zero-shot test set.",
"Figure 1 : Zero-shot average test BLEU for multilingual NMT models finetuned by ROBT .",
"ALL = MATT + LALN + LALT .",
"Multilingual models with ROBT quickly converge on zero-shot directions.",
"Table 7 : Zero-short translation quality for ROBT under different settings.",
"100-to-100 : the setting used in the above experiments; we set T to all target languages.",
"6-to-6 : T only includes the zero-shot languages in the test set.",
"We employ 6-layer Transformer with LALN and LALT for experiments.",
"language accuracy (+7.78%/+13.81% ACC zero , 3 (cid:13) 5 (cid:13) / 3 (cid:13) 4 (cid:13) , w/o ROBT ) and deliver better zero-shot performance before (+1.22/+0.53 BLEU zero , 3 (cid:13) 5 (cid:13) / 3 (cid:13) 4 (cid:13) , w/o ROBT ) and after ROBT (+2.20/+1.56 BLEU zero , 3 (cid:13) 5 (cid:13) / 3 (cid:13) 4 (cid:13) , w/ ROBT ).",
"In other words, increasing the modeling capacity benefits zero-shot translation and improves robustness.",
"Convergence of ROBT .",
"Unlike prior studies (Gu et al., 2019; Lakew et al., 2019), we resort to an online method for backtranslation.",
"The curve in Figure 1 shows that ROBT is very effective, and takes only a few thousand steps to converge, suggesting that it is unnecessary to decode the whole training set for each zero-shot language pair.",
"We leave it to future work to explore whether different back-translation strategies (other than greedy decoding) will deliver larger and continued benefits with ROBT .",
"Impact of T on ROBT .",
"ROBT heavily relies on T , the set of target languages considered, to distribute the modeling capacity on zero-shot directions.",
"To study its impact, we provide a comparison by constraining T to 6 languages in the zero-shot test set.",
"Results in Table 7 show that the biased ROBT outperforms the baseline by 0.75 BLEU zero .",
"By narrowing T , more capacity is scheduled to the focused languages, which results in performance improvements.",
"But the small scale of this improvement suggests that the number of zero-shot directions is not ROBT 's biggest bottleneck.",
"This paper explores approaches to improve massively multilingual NMT, especially on zero-shot translation.",
"We show that multilingual NMT suffers from weak capacity, and propose to enhance it by deepening the Transformer and devising language-aware neural models.",
"We find that multilingual NMT often generates off-target translations on zero-shot directions, and propose to correct it with a random online backtranslation algorithm.",
"We empirically demonstrate the feasibility of backtranslation in massively multilingual settings to allow for massively zero-shot translation for the first time.",
"We release OPUS-100, a multilingual dataset from OPUS including 100 languages with around 55M sentence pairs for future study.",
"Our experiments on this dataset show that the proposed approaches substantially increase translation performance, narrowing the performance gap with bilingual NMT models and pivot-based methods.",
"In the future, we will develop lightweight alternatives to LALT to reduce the number of model parameters.",
"We will also exploit novel strategies to break the upper bound of ROBT and obtain larger zero-shot improvements, such as generative modeling (Zhang et al., 2016; Su et al., 2018; Garca et al., 2020; Zheng et al., 2020).",
"This project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreements 825460 (ELITR) and 825299 (GoURMET).",
"This project has received support from Samsung Electronics Polska sp.",
"z o.o. Samsung R&D Institute Poland.",
"Rico Sennrich acknowledges support of the Swiss National Science Foundation (MUTAMUR; no. 176727)."
] | [
"abstain",
"objective",
"abstain",
"objective",
"objective",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"method",
"abstain",
"other",
"objective",
"result",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"objective",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Legal Judgment Prediction (LJP) is the task of automatically predicting a law case's judgment results given a text describing its facts, which has excellent prospects in judicial assistance systems and convenient services for the public.",
"In practice, confusing charges are frequent, because law cases applicable to similar law articles are easily misjudged.",
"For addressing this issue, the existing method relies heavily on domain experts, which hinders its application in different law systems.",
"In this paper, we present an end-to-end model, LADAN , to solve the task of LJP.",
"To distinguish confusing charges, we propose a novel graph neural network to automatically learn subtle differences between confusing law articles and design a novel attention mechanism that fully exploits the learned differences to extract compelling discriminative features from fact descriptions attentively.",
"Experiments conducted on real-world datasets demonstrate the superiority of our LADAN.",
"Exploiting artificial intelligence techniques to assist legal judgment has become popular in recent years.",
"Legal judgment prediction ( LJP ) aims to predict a case's judgment results, such as applicable law articles, charges, and terms of penalty, based on its fact description, as illustrated in Figure 1.",
"LJP can assist judiciary workers in processing cases and offer legal consultancy services to the public.",
"In the literature, LJP is usually formulated as a text classification problem, and several rule-based methods (Liu et al., 2004; Lin et al., 2012) and neural-based methods (Hu et al., 2018; Luo et al., 2017; Zhong et al., 2018) have been proposed.",
"That is, due to the high similarity of several law articles, their corresponding law cases can be easily misjudged.",
"For example, in Figure 2, both Article 385 and Article 163 describe offenses of accepting bribes, and their subtle difference is whether the guilty parties are state staffs or not.",
"The key to solving the confusing charges issue is how to capture essential but rare features for distinguishing confusing law articles.",
"Hu et al. (2018) defined ten discriminative attributes to distinguish confusing charges.",
"However, their method relies too much on experts to hinder its applications in a large number of laws.",
"In practice, we desire a method that can automatically extract textual features from law articles to assist JLP.",
"The most relevant existing work to this requirement is (Luo et al., 2017), which used an attention mechanism to extract features from fact descriptions with respect to a specific law article.",
"As shown in Figure 3a, for each law article, an attention vector is computed, which is used to extract features from the fact description of a law case to predict whether the law article is applicable to the case.",
"Nevertheless, the weakness Any state staffs who, taking advantage of his position, demands money or property from another person, or illegally accepts another person's money or property in return for securing benefits for the person shall be guilty of acceptance of bribes .",
"is that they learn each law article's attention vector independently, and this may result in that similar attention vectors are learned for semantically close law articles; hence, it is ineffective in distinguishing confusing charges.",
"To solve the confusing charges issue, we propose an end-to-end framework, i.e., Law Article Distillation based Attention Network (LADAN).",
"LADAN uses the difference among similar law articles to attentively extract features from law cas-es' fact descriptions, which is more effective in distinguishing confusing law articles, and improve the performance of LJP.",
"To obtain the difference among similar law articles, a straightforward way is to remove duplicated texts between two law articles and only use the leftover texts for the attention mechanism.",
"However, we find that this method may generate the same leftover texts for different law article, and generate misleading information to LJP.",
"As shown in Fig. 2, if we remove the duplicated phrases and sentences between Article 163 and Article 385 (i.e., the red text in Fig. 2), and between Article 164 and Article 389 (i.e., the pink text in Fig. 2), respectively, then Article 385 and Article 389 will be almost same to each other (i.e., the blue text in Fig. 2).",
"We design LADAN based on the following observation: it is usually easy to distinguish dissimilar law articles as sufficient distinctions exist, but challenging to discriminate similar law articles due to the few useful features.",
"We first group law articles into different communities, and law articles in the same community are highly similar to each other.",
"Then we propose a graph-based representation learning method to automatically explore the difference among law articles and comA 1 A 2 A n ... a b Fact Description A n-1 A n-2 Fact Description A n-2 A n A n-1 ... n n-1 n-2 1 2 3 ...",
"pute an attention vector for each community.",
"For an input law case, we learn both macroand microlevel features.",
"Macro-level features are used for predicting which community includes the applicable law articles.",
"Micro-level features are attentively extracted by the attention vector of the selected community for distinguishing confusing law articles within the same community.",
"Our main contributions are summarized as follows: (1) We develop an end-to-end framework, i.e., LADAN, to solve the LJP task.",
"It addresses the confusing charges issue by mining similarities between fact descriptions and law articles as well as the distinctions between confusing law articles.",
"(2) We propose a novel graph distillation operator (GDO) to extract discriminative features for effectively distinguishing confusing law articles.",
"(3) We conduct extensive experiments on real-world datasets.",
"The results show that our model outperforms all state-of-the-art methods.",
"Our work solves the problem of the confusing charge in the LJP task by referring to the calculation principle of graph neural network (GNN).",
"Therefore, in this section, we will introduce related works from these two aspects.",
"Existing approaches for legal judgment prediction (LJP) are mainly divided into three categories.",
"In early times, works usually focus on analyzing existing legal cases in specific scenarios with mathematical and statistical algorithms (Kort, 1957; Nagel, 1963; Keown, 1980; Lauderdale and Clark, 2012).",
"However, these methods are limited to s-mall datasets with few labels.",
"Later, a number of machine learning-based methods (Lin et al., 2012; Liu et al., 2004; Sulea et al., 2017) were developed to solve the problem of LJP, which almost combine some manually designed features with a linear classifier to improve the performance of case classification.",
"The shortcoming is that these methods rely heavily on manual features, which suffer from the generalization problem.",
"In recent years, researchers tend to exploit neural networks to solve LJP tasks.",
"Luo et al. (2017) propose a hierarchical attentional network to capture the relation between fact description and relevant law articles to improve the charge prediction.",
"Zhong et al. (2018) model the explicit dependencies among subtasks with scalable directed acyclic graph forms and propose a topological multi-task learning framework for effectively solving these subtasks together.",
"Yang et al. (2019) further refine this framework by adding backward dependencies between the prediction results of subtasks.",
"To the best of our knowledge, Hu et al. (2018) are the first to study the problem of discriminating confusing charges for automatically predicting applicable charges.",
"They manually define 10 discriminative attributes and propose to enhance the representation of the case fact description by learning these attributes.",
"This method relies too much on experts and cannot be easily extended to different law systems.",
"To solve this issue, we propose a novel attention framework that automatically extracts differences between similar law articles to enhance the representation of fact description.",
"Due to its excellent performance in graph structure data, GNN has attracted significant attention (Kipf and Welling, 2017; Hamilton et al., 2017; Bonner et al., 2019).",
"In general, existing GNNs focus on proposing different aggregation schemes to fuse features from the neighborhood of each node in the graph for extracting richer and more comprehensive information: Kipf et al. (2017) propose graph convolution networks which use mean pooling to pool neighborhood information; GraphSAGE (Hamilton et al., 2017) concatenates the node's features and applies mean/max/LSTM operators to pool neighborhood information for inductively learning node embed-dings; MR-GNN (Xu et al., 2019) aggregates the multi-resolution features of each node to exploit node information, subgraph information, and global information together; Besides, Message Passing Neural Networks (Gilmer et al., 2017) further consider edge information when doing the aggregation.",
"However, the aggregation schemes lead to the over-smoothing issue of graph neural networks (Li et al., 2018), i.e., the aggregated node representations would become indistinguishable, which is entirely contrary to our goal of extracting distinguishable information.",
"So in this paper, we propose our distillation operation, based on a distillation strategy instead of aggregation schemes, to extract the distinguishable features between similar law articles.",
"In this section, we introduce some notations and terminologies, and then formulate the LJP task.",
"Law Cases.",
"Each law case consists of a fact description and several judgment results (cf. Figure 1).",
"The fact description is represented as a text document, denoted by f .",
"The judgment results may include applicable law articles , charges , terms of penalty , etc.",
"Assume there are t kinds of judgment results, and the i -th judgment result is represented as a categorical variable y i which takes value from set Y i .",
"Then, a law case can be represented by a tuple ( f, y 1 , . . . , y t ) .",
"Law Articles.",
"Law cases are often analyzed and adjudicated according to a legislature's statutory law (also known as, written law ).",
"Formally, we denote the statutory law as a set of law articles L (cid:44) { L 1 , . . . , L m } where m is the number of law articles.",
"Similar to the fact description of cases, we also represent each law article L i as a document.",
"Legal Judgment Prediction.",
"In this paper, we consider three kinds of judgment results: applicable law articles , charges , and terms of penalty .",
"Given a training dataset D (cid:44) { ( f, y 1 , y 2 , y 3 ) z } qz =1 of size q , we aim to train a model F ( ) that can predict the judgment results for any test law case with a fact description f test , i.e., F ( f test , L ) = ( y 1 , y 2 , y 3 ) , where y i Y i , i = 1 , 2 , 3 .",
"Following (Zhong et al., 2018; Yang et al., 2019), we assume each case has only one applicable law article.",
"In our framework LADAN (cf. Fig. 4a), the fact description of a case is represented by two parts: a basic representation , denoted by v b f , and a distinguishable representation , denoted by v d f .",
"The basic representation v b f contains basic semantic information for matching a group of law articles that may apply to the case.",
"In contrast, the distinguishable representation v d f captures features that can effectively distinguish confusing law articles.",
"The concatenation of v b f and v d f is fed into subsequent classifiers to predict the labels of the JLP task.",
"As we mentioned, it is easy to distinguish dissimilar law articles as sufficient distinctions exist, and the difficulty in solving confusing charges lies in extracting distinguishable features of similar law articles.",
"To obtain the basic representation v b f , therefore, we use one of the popular document encoding methods (e.g., CNN encoder (Kim, 2014) and Bi-RNN encoder (Yang et al., 2016)).",
"To learn the distinguishable representation v d f , we use a law distillation module first to divide law articles to several communities to ensure that the law articles in each community are highly similar, and then extract each community i 's distinction vector (or, distinguishable features) i from the basic representation of law articles in community i .",
"Given the case's fact description, from all communities' distinction vectors, we select the most relevant one (i.e., c in Fig.",
"4(a)) for attentively extracting the distinguishable features v d f in the fact re-encode module .",
"In the follows, we elaborate law distillation module (Sec. 4.2) and fact re-encode module (Sec. 4.3) respectively.",
"A case might be misjudged due to the high similarity of some law articles.",
"To alleviate this problem, we design a law distillation module (cf.",
"Fig. 4",
"b) to extract distinguishable and representative information from all law articles.",
"Specifically, it first uses a graph construction layer ( GCL ) to divide law articles into different communities.",
"For each law article community, a graph distillation layer is applied to learn its discriminative representation, hereinafter, called distinction vector .",
"To find probably confusing law articles, we first construct a fully-connected graph G for all law articles L , where the weight on the edge between a pair of law article L i , L j L is defined as",
"the cosine similarity between their TF-IDF (Ter-m Frequency-Inverse Document Frequency) representations tf idf i and tf idf j .",
"Since confusing law articles are usually semantically similar and there exists sufficient information to distinguish dissimilar law articles, we remove the edges with weights less than a predefined threshold from graph G .",
"By setting an appropriate , we obtain a new graph G = { g i } Mi =1 composed of several disconnected subgraphs g 1 , . . . , g M (or, com-munities), where each g i , i = 1 , . . . , M contains a specific community of probably confusing articles.",
"Our later experimental results demonstrate that this easy-to-implement method effectively improves the performance of LADAN.",
"To extract the distinguishable information from each community g i , a straightforward way is to delete duplicate words and sentences presented in law articles within the community (as described in Sec. 1).",
"In addition to introducing significant errors, this simple method cannot be plugged into end-to-end neural architectures due to its non-differentiability.",
"To overcome the above issues, inspired by the popular graph convolution operator ( GCO ) (Kipf and Welling, 2017; Hamilton et al., 2017; Velickovic et al., 2017), we propose a graph distillation operator ( GDO ) to effectively extract distinguishable features.",
"Different from GCO, which computes the message propagation between neighbors and aggregate these messages to enrich representations of nodes in the graph, the basic idea behind our GDO is to learn effective features with distinction by removing similar features between nodes.",
"Specifically, for an arbitrary law article L i , GDO uses a trainable weight matrix to capture similar information between it and its neighbors in graph G , and a matrix to extract effective semantic features of L i .",
"At each layer l 0 , the aggregation of similar information between L i and its neighbors is removed from its representation, that is, v ( l +1) L i = ( l ) v ( l ) L i (cid:88) L j N i ( l ) [ v ( l ) L i , v ( l ) L j ] | N i | + b ( l ) where v ( l ) L i R d l refers to the representation of law L i in the l th graph distillation layer, N i refers to the neighbor set of L i in graph G , b ( l ) is the bias, and ( l ) R d l +1 d l and ( l ) R d l +1 2 d l are the trainable self weighted matrix and the neighbor similarity extracting matrix respectively.",
"Note that d l is the dimension of the feature vector in the l th graph distillation layer.",
"We set d 0 = d s , where d s is the dimension of basic representations v b f and v L i .",
"Similar to GCO, our GDO also supports multi-layer stacking.",
"Using GDO with H layers, we output law article representation of the last layer, i.e., v ( H ) L i R d H , which contains rich distinguishable features that can distinguish law article L i from the articles within the same community.",
"To further improve law articles' distinguishable features, for each subgraph g i , i = 1 , 2 , . . . , M in graph G , we compute its distinction vector i by using pooling operators to aggregate the distinguishable features of articles in g i .",
"Formally, i is computed as: i = [ MaP ( { v ( H ) L i } L j g i ) , MiP ( { v ( H ) L i } L j g i )] where MaP ( ) and MiP ( ) are the element-wise max pooling and element-wise min pooling operators respectively.",
"To capture a law case's distinguishable features from its fact description f , we firstly define the following linear function, which is used to predict its most related community g c in graph G :",
"where v b f is the basic representation of fact description f , W g RM d s and b g RM are the trainable weight matrix and bias respectively.",
"Each element X i X , i = 1 , ..., M reflects the closeness between fact description f and law articles community g i .",
"The most relevant community g c is computed as c = arg max i =1 ,...,M X i .",
"Then, we use the corresponding community's distinction vector c to attentively extract distinguishable features from fact description f .",
"Inspired by (Yang et al., 2016), we attentively extract distinguishable features based on word-level and sentence-level Bi-directional Gated Recurrent Units (Bi-GRUs).",
"Specifically, for each input sentence S i = [ w i, 1 , , w i,n i ] in fact description f , word-level Bi-GRUs will output a hidden state sequence, that is, h i,j = [ GRU ( w i,j ) , GRU ( w i,j )] , j = 1 , ..., n i , where w i,j represents the word embedding of word w i.j and h i,j R d w .",
"i,j is formally computed as: i,j = exp( tanh ( W w h i,j ) T ( W gw c )) (cid:80) j exp( tanh ( W w h i,j ) T ( W gw c )) , where W w and W gw are trainable weight matrices.",
"Then, we get a representation of sentence S i as: v s i = n i (cid:88) j =1 i,j h i,j , where n i denotes the word number in sentence S i .",
"Based on this hidden state sequence and the distinction vector c , we calculate an attentive vector [ i, 1 , . . . , i,n i ] , where each i,j evaluates the discrimination ability of word w i,j S i .",
"By the above word-level Bi-GRUs, we get a sentence representations sequence [ v s 1 , . . . , v s nf ] , where n f refers to the number of sentences in the fact description f .",
"Based on this sequence, similarly, we build sentence-level Bi-GRUs and calculate a sentence-level attentive vector [ 1 , . . . , n f ] that reflects the discrimination ability of each sentence, and then get the fact's distinguishable representation v d f R d s .",
"Our sentence-level Bi-GRUs are formulated as: h i = [ GRU ( v s i ) , GRU ( v s i )] , i = 1 , 2 , ..., n f , i = exp( tanh ( W s h i ) T ( W gs c )) (cid:80) i exp( tanh ( W s h i ) T ( W gs c )) , v d f = (cid:88) i i h i .",
"We concatenate the basic representation v b f and the distinguishable representation v d f as the final representation of fact description f , i.e., v f = v b f , v d f ] .",
"Based on v f , we generate a corresponding feature vector v jf for each subtask t j , j = 1 , 2 , 3 mentioned in Sec. 3, i.e., t 1 : law article prediction ; t 2 : charge prediction ; t 3 : term of penalty prediction .",
"To obtain the prediction for each subtask, we use a linear classifier: y j = softmax ( W jp v jf + b jp ) , where W jp and b jp are parameters specific to task t j .",
"For training, we compute a cross-entropy loss function for each subtask and take the loss sum of all subtasks as the overall prediction loss: L p = 3 (cid:88) j =1 | Y j | (cid:88) k =1 y j,k log( y j,k ) , where | Y j | denotes the number of different classes (or, labels) for task t j and [ y j, 1 , y j, 2 , . . . , y j, | Y j | ] refers to the ground-truth vector of task t j .",
"Besides, we also consider the loss of law article community prediction (i.e., Eq. 1): L c = M (cid:88) j =1 X j log( X j ) , where [ X 1 , X 2 , . . . , XM ] is the ground-truth vector of the community including the correct law article applied to the law case.",
"In summary, our final overall loss function is: L = L p + L c (2) 5 Experiments 5.1 Datasets To evaluate the performance of our method, we use the publicly available datasets of the C hinese AI and L aw challenge (CAIL2018) 1 (Xiao et al., 2018): CAIL-small (the exercise stage dataset) and CAIL-big (the first stage dataset).",
"The case samples in both datasets contain fact description, applicable law articles, charges, and the terms of penalty.",
"For data processing, we first filter out samples with fewer than 10 meaningful words.",
"To be consistent with state-of-the-art methods, we filter out the case samples with multiple applicable law articles and multiple charges.",
"Meanwhile, referring to (Zhong et al., 2018), we only keep the law articles and charges that apply to not less than 100 corresponding case samples and divide the terms of penalty into non-overlapping intervals.",
"The detailed statistics of the datasets are shown in Table 1.",
"Baselines.",
"We compare LADAN with some baselines, including: 1 http://cail.cipsc.org.cn/index.html Dataset CAIL-small CAIL-big #Training Set Cases 101,619 1,587,979 #Test Set Cases 26,749 185,120 #Law Articles 103 118 #Charges 119 130 #Term of Penalty 11 11 Table 1: Statistics of datasets.",
"CNN (Kim, 2014): a CNN-based model with multiple filter window widths for text classification.",
"HARNN (Yang et al., 2016): an RNN-based neural network with a hierarchical attention mechanism for document classification.",
"FLA (Luo et al., 2017): a charge prediction method that uses an attention mechanism to capture the interaction between fact description and applicable laws.",
"Few-Shot (Hu et al., 2018): a discriminating confusing charge method, which extracts features about ten predefined attributes from fact descriptions to enforce semantic information.",
"TOPJUDGE (Zhong et al., 2018): a topological multi-task learning framework for LJP, which formalizes the explicit dependencies over subtasks in a directed acyclic graph.",
"MPBFN-WCA (Yang et al., 2019): a multitask learning framework for LJP with multi-perspective forward prediction and backward verification, which is the state-of-the-art method.",
"Similar to existing works (Luo et al., 2017; Zhong et al., 2018), we train the baselines CNN, HLSTM and FLA using a multi-task framework (recorded as MTL) and select a set of the best experimental parameters according to the range of the parameters given in their original papers.",
"Besides, we use our method LADAN with the same multi-task framework (i.e., Landan+MTL, LADAN+TOPJUDGE, and LADAN+MPBFN) to demonstrate our superiority in feature extraction.",
"Experimental Settings.",
"We use the THU-LAC (Sun et al., 2016) tool to get the word segmentation because all case samples are in Chinese.",
"Afterward, we use the Skip-Gram model (Mikolov et al., 2013) to pre-train word embed-dings on these case documents, where the mod-el's embedding size and frequency threshold are set to 200 and 25 respectively.",
"Meanwhile, we set the maximum document length as 512 words for CNN-based models in baselines and set the maximum sentence length to 100 words and maximum document length to 15 sentences for LSTM-based models.",
"As for hyperparameters setting, we set the dimension of all latent states (i.e., d w , d s , d l and d f ) as 256 and the threshold as 0 .",
"3 .",
"In our method LADAN, we use two graph distillation layers, and a Bi-GRU with a randomly initialized attention vector u is adopted as the basic document encoder.",
"For training, we set the learning rate of Adam optimizer to 10 3 , and the batch size to 128.",
"After training every model for 16 epochs, we choose the best model on the validation set for testing.",
"2 5.3 Experimental Results To compare the performance of the baselines and our methods, we choose four metrics that are widely used for multi-classification tasks, including accuracy (Acc.), macro-precision (MP), macro-recall (MR), and macro-F1 (F1).",
"Since the problem of confusing charges often occurs between a few categories, the main metric is the F1 score.",
"Tables 2 and 3 show the experimental results on datasets CAIL-small and CAIL-big, respectively.",
"Our method LADAN performs the best in terms of all evaluation metrics.",
"Because both CAIL-small and CAIL-big are imbalanced datasets, we focus on comparing the F1-score, which more objectively reflects the effectiveness of our LADAN and other baselines.",
"Compared with the state-of-the-art MPBFN-WCA, LADAN improved the F1-scores of law article prediction, charge prediction, and term of penalty prediction on dataset CAIL-small by 2 .",
"02 %, 2 .",
"42 % and 4 .",
"20 % respectively, and about 3 .",
"18 %, 1 .",
"44 % and 5 .",
"79 % on dataset CAIL-big.",
"Meanwhile, the comparison under the same multi-task framework (i.e., MTL, TOPJUDGE, and MPBFN) shows that our LADAN extracted more effective features from fact descriptions than all baselines.",
"Meanwhile, we can observe that the performance of Few-shot on charge prediction is close to LADAN, but its performance on the term of penalty prediction is far from ideal.",
"It is because the ten predefined attributes of Few-Shot are only effective for identifying charges, which also proves the robustness 2 Our source codes are available at https://github.",
"of our LADAN.",
"The highest MPand MR-scores of LADAN also demonstrates its ability to distinguish confusing law articles.",
"Note that all method-s' performance on dataset CAIL-big is better than that on CAIL-small, which is because the training set on CAIL-big is more adequate.",
"To further illustrate the significance of considering the difference between law articles, we conducted ablation experiments on model LADAN+MTL with dataset CAIL-small.",
"To prove the effectiveness of our graph construction layer ( GCL ), we build a LADAN model with the GCL's removing threshold = 0 (i.e., -no GCL in Table 4), which directly applies the GDO on the fully-connected graph G to generate a global distinction vector g for re-encoding the fact description.",
"To verify the effectiveness of our graph distillation operator ( GDO ), we build a no-GDO LADAN model (i.e., -no GDO in Table 4), which directly pools each subgraph g i to a distinction vector i without GDOs.",
"To evaluate the importance of considering the difference among law articles, we remove both GCL and GDO from LADAN by setting = 1 .",
"0 (i.e., -no both in Table 4), i.e., each law article independently extracts the attentive feature from fact description.",
"In Table 4, we Tasks Law Charge Penalty Metrics Acc.",
"see that both GCL and GDO effectively improve the performance of LADAN.",
"GCL is more critical than GDO because GDO has a limited performance when the law article communities obtained by GCL are not accurate.",
"When removing both GCL and GDO, the accuracy of LADAN decreases to that of HARNN+MTL, which powerfully demonstrates the effectiveness of our method exploiting differences among similar law articles.",
"To intuitively verify that LADAN effectively extracts distinguishable features, we visualize the attention of LADAN's encoders.",
"Figure 5 shows two law case examples, each for Article 385 and Article 163 , respectively, where the darker the word is, the higher the attention weight it gets in the corresponding encoder, i.e., its information is more important to the encoder.",
"For the basic encoder, we see that the vital information in these two cases is very similar, which both contain the Fact Re-encoder: Basic Encoder: Case example of Law Article 163 Bribery crime of non-state emplotees Basic Encoder: Case example of Law Article 185 Crimeof acceptance of bribes Fact Re-encoder: Figure 5: The attention visualization on case examples for Article 185 and Article 163.",
"word like use position accept benefit accept ... cash , etc.",
"Therefore, when using just the representation of basic encoder to predict acceptable law articles, charges and terms of penalty, these two cases tend to be misjudged.",
"As we mentioned in Sec. 4.3, with the distinction vector, our fact re-encoder focuses on extracting distinguishable features like defendants' identity information (e.g., company manager working in the Cadastral Unit of Luocheng Branch of Luohe City Land and Resources Bureau in our examples), which effectively distinguish the applicable law articles and charges of these two cases.",
"In this paper, we present an end-to-end model, LADAN, to solve the issue of confusing charges in LJP.",
"In LADAN, a novel attention mechanism is proposed to extract the key features for distinguishing confusing law articles attentively.",
"Our attention mechanism not only considers the interaction between fact description and law articles but also the differences among similar law articles, which are effectively extracted by a graph neural network GDL proposed in this paper.",
"The experimental results on real-world datasets show that our LADAN raises the F1-score of state-of-the-art by up to 5 .",
"79 %.",
"In the future, we plan to study complicated situations such as a law case with multiple defendants and charges.",
"The research presented in this paper is supported in part by National Key R&D Program of China (2018YFC0830500),Shenzhen Basic Research Grant (JCYJ20170816100819428), National Natural",
"Natural Science Foundation of China (61922067, U1736205, 61902305), MoE-CMCC Artifical Intelligence Project (MCM20190701), National Science Basic Research Plan in Shaanxi Province of China (2019JM-159), National Science Basic Research Plan in Zhejiang Province of China (LGG18F020016)."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"result",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"result",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"objective",
"result",
"abstain",
"method",
"other",
"other"
] |
[
"Most language understanding models in task-oriented dialog systems are trained on a small amount of annotated training data, and evaluated in a small set from the same distribution.",
"However, these models can lead to system failure or undesirable output when being exposed to natural language perturbation or variation in practice.",
"In this paper, we conduct comprehensive evaluation and analysis with respect to the robustness of natural language understanding models, and introduce three important aspects related to language understanding in real-world dialog systems, namely, language variety , speech characteristics , and noise perturbation .",
"We propose a model-agnostic toolkit LAUG to approximate natural language perturbations for testing the robustness issues in task-oriented dialog.",
"Four data augmentation approaches covering the three aspects are assembled in LAUG, which reveals critical robustness issues in state-of-the-art models.",
"The augmented dataset through LAUG can be used to facilitate future research on the robustness testing of language understanding in task-oriented dialog.",
"Recently task-oriented dialog systems have been attracting more and more research efforts (Gao et al., 2019; Zhang et al., 2020b), where understanding user utterances is a critical precursor to the success of such dialog systems.",
"While modern neural networks have achieved state-of-the-art results on language understanding (LU) (Wang et al., 2018; Zhao and Feng, 2018; Goo et al., 2018; Liu et al., 2019; Shah et al., 2019), their robustness to changes in the input distribution is still one of the biggest challenges in practical use.",
"Real dialogs between human participants involve language phenomena that do not contribute so much to the intent of communication.",
"As shown in Fig. 1, user expressions can be of high lexical and syntactic diversity when a system is deployed to users; typed texts may differ significantly from those recognized from voice speech; interaction environments may be full of chaos and even users themselves may introduce irrelevant noises such that the system can hardly get clean user input.",
"Unfortunately, neural LU models are vulnerable to these natural perturbations that are legitimate inputs but not observed in training data.",
"For example, Bickmore et al. (2018) found that popular conversational assistants frequently failed to understand real health-related scenarios and were unable to deliver adequate responses on time.",
"Although many studies have discussed the LU robustness (Ray et al., 2018; Zhu et al., 2018; Iyyer et al., 2018; Yoo et al., 2019; Ren et al., 2019; Jin et al., 2020; He et al., 2020), there is a lack of systematic studies for real-life robustness issues and corresponding benchmarks for evaluating task-oriented dialog systems.",
"In order to study the real-world robustness issues, we define the LU robustness from three aspects: language variety , speech characteristics and noise perturbation .",
"While collecting dialogs from deployed systems could obtain realistic data distribution, it is quite costly and not scalable since a large number of conversational interactions with real users are required.",
"Therefore, we propose an automatic method LAUG for L anguage understanding AUG mentation in this paper to approximate the natural perturbations to existing data.",
"LAUG is a black-box testing toolkit on LU robustness composed of four data augmentation methods, including word perturbation, text paraphrasing, speech recognition, and speech disfluency.",
"Frames (El Asri et al., 2017) and MultiWOZ (Budzianowski et al., 2018) to demonstrate the toolkit's effectiveness.",
"Quality evaluation by annotators indicates that the utterances augmented by LAUG are reasonable and appropriate with regards to each augmentation approach's target.",
"A number of LU models with different categories and training paradigms are tested as base models with in-depth analysis.",
"Experiments indicate a sharp performance decline in most baselines in terms of each robustness aspect.",
"Real user evaluation further verifies that LAUG well reflects real-world robustness issues.",
"Since our toolkit is model-agnostic and does not require model parameters or gradients, the augmented data can be easily obtained for both training and testing to build a robust dialog system.",
"Our contributions can be summarized as follows: (1) We classify the LU robustness systematically into three aspects that occur in real-world dialog, including linguistic variety, speech characteristics and noise perturbation; (2) We propose a general and model-agnostic toolkit, LAUG , which is an integration of four data augmentation methods on LU that covers the three aspects.",
"(3) We conduct an in-depth analysis of LU robustness on two dialog corpora with a variety of baselines and standardized evaluation measures.",
"(4) Quality and user evaluation results demonstrate that the augmented data are representative of real-world noisy data, therefore can be used for future research to test the LU robustness in task-oriented dialog 1 .",
"We summarize several common interleaved challenges in language understanding from three aspects, as shown in Fig. 1b:",
"Language Variety A modern dialog system in a text form has to interact with a large variety of real users.",
"The user utterances can be characterized by a series of linguistic phenomena with a long tail of variations in terms of spelling, vocabulary, lex-ical/syntactic/pragmatic choice (Ray et al., 2018; Jin et al., 2020; He et al., 2020; Zhao et al., 2019; Ganhotra et al., 2020).",
"1 The data, toolkit, and codes are available at https: //github.com/thu-coai/LAUG , and will be merged into https://github.com/thu-coai/ConvLab-2 (Zhu et al., 2020).",
"tends to be more complex and intricate with longer sentences and many subordinate clauses, whereas spoken language can contain repetitions, incomplete sentences, self-corrections and interruptions (Wang et al., 2020a; Park et al., 2019; Wang et al., 2020b; Honal and Schultz, 2003; Zhu et al., 2018).",
"Noise Perturbation Most dialog systems are trained only on noise-free interactions.",
"However, there are various noises in the real world, including background noise, channel noise, misspelling, and grammar mistakes (Xu and Sarikaya, 2014; Li and Qiu, 2020; Yoo et al., 2019; Henderson et al., 2012; Ren et al., 2019).",
"This section introduces commonly observed out-of-distribution data in real-world dialog into existing corpora.",
"We approximate natural perturbations in an automatic way instead of collecting real data by asking users to converse with a dialog system.",
"To achieve our goals, we propose a toolkit LAUG , for black-box evaluation of LU robustness.",
"It is an ensemble of four data augmentation approaches, including Word Perturbation (WP), Text Paraphrasing (TP), Speech Recognition (SR), and Speech Disfluency (SD).",
"Noting that LAUG is model-agnostic and can be applied to any LU dataset theoretically.",
"Each augmentation approach tests one or two proposed aspects of robustness as Table 1 shows.",
"The intrinsic evaluation of the chosen approaches will be given in Sec. 4.",
"Task Formulation Given the dialog context X t = { x 2 t m , . . . , x 2 t 1 , x 2 t } at dialog turn t , where each x is an utterance and m is the size of sliding window that controls the length of utilizing dialog history, the model should recognize y t , the dialog act (DA) of x 2 t .",
"Empirically, we set m = 2 in the experiment.",
"Let U , S denote the set of user/system utterances, respectively.",
"Then, we have x 2 t 2 i U and x 2 t 2 i 1 S .",
"The task of this paper is to examine different LU models whether they can predict y t correctly given a perturbed input X t .",
"The perturbation is only performed on user utterances.",
"Word Perturbation Inspired by EDA ( Easy Data Augmentation ) (Wei and Zou, 2019), we propose its semantically conditioned version, SC-EDA, which considers task-specific augmentation operations in LU.",
"SC-EDA injects word-level perturbation into each utterance x (cid:48) and updates its corresponding semantic label y (cid:48) .",
"Table 2 shows an example of SC-EDA.",
"Original EDA randomly performs one of the four operations, including synonym replacement , random insertion , random swap and random deletion 2 .",
"Noting that, to keep the label unchanged, words related to slot 2 See the EDA paper for details of each operation.",
"values of dialog acts are not modified in these four operations.",
"Additionally, we design slot value replacement , which changes the utterance and label at the same time to test model's generalization to unseen entities .",
"Some randomly picked slot values are replaced by unseen values with the same slot name in the database or crawled from web sources.",
"For example in Table 2, Cambridge is replaced by Liverpool, where both belong to the same slot name dest (destination).",
"Synonym replacement and slot value replacement aim at increasing the language variety, while random word insertion/deletion/swap test the robustness of noise perturbation.",
"From another perspective, four operations from EDA perform an Invariance test, while slot value replacement conducts a Directional Expectation test according to CheckList (Ribeiro et al., 2020).",
"Text Paraphrasing The target of text paraphrasing is to generate a new utterance x (cid:48) (cid:54) = x while maintaining its dialog act unchanged, i.e. y (cid:48) = y .",
"We applied SC-GPT (Peng et al., 2020), a finetuned language model conditioned on the dialog acts, to paraphrase the sentences as data augmentation.",
"Specifically, it characterizes the conditional probability p ( x | y ) = (cid:81) Kk =1 p ( x k | x <k , y ) , where x <k denotes all the tokens before the k -th position.",
"The model parameters are trained by maximizing the log-likelihood of p .",
"DA train * { inform ( dest = Cambridge ; arrive = 20:45 ) } Text Hi, I'm looking for a train that is going to Cambridge and arriving there by 20:45, is there anything like that?",
"DA train { inform ( dest = Cambridge ; arrive = 20:45 ) } Text Yes, to Cambridge, and I would like to arrive by 20:45.",
"We observe that co-reference and ellipsis frequently occurs in user utterances.",
"Therefore, we propose different encoding strategies during paraphrasing to further evaluate each model's capacity for context resolution .",
"In particular, if the user mentions a certain domain for the first time in a dialog, we will insert a * mark into the sequential dialog act y (cid:48) to indicate that the user tends to express without co-references or ellipsis, as shown in Table 3.",
"Then SC-GPT is finetuned on the processed data so that it can be aware of dialog context when generating paraphrases.",
"As a result, we find that the average token length of generated utterances with/without * is 15.96/12.67 respectively after SC-GPT's finetuning on MultiWOZ.",
"It should be noted that slot values of an utterance can be paraphrased by models, resulting in a different semantic meaning y (cid:48) .",
"To prevent generating irrelevant sentences, we apply automatic value detection in paraphrases with original slot values by fuzzy matching 3 , and replace the detected values in bad paraphrases with original values.",
"In addition, we filter out paraphrases that have missing or redundant information compared to the original utterance.",
"Speech Recognition We simulate the speech recognition (SR) process with a TTS-ASR pipeline (Park et al., 2019).",
"First we transfer textual user utterance x to its audio form a using gTTS 4 (Oord et al., 2016), a Text-to-Speech system.",
"Then audio data is translated back into text x (cid:48) by DeepSpeech2 (Amodei et al., 2016), an Automatic Speech Recognition (ASR) system.",
"We directly use the released models in the DeepSpeech2 repository 5 with the original configuration, where the speech model is trained on Baidu Internal English Dataset, and the language model is trained on CommonCrawl Data.",
"Table 4 shows some typical examples of our SR augmentation.",
"ASR sometimes wrongly identifies one word as another with similar pronunciation.",
"Liaison constantly occurs between successive words.",
"Expressions with numbers including time and price are written in numerical form but different in spoken language.",
"Since SR may modify the slot values in the translated utterances, fuzzy value detection is employed here to handle similar sounds and liaison problems when it extracts slot values to obtain a semantic label y (cid:48) .",
"However, we do not replace the noisy value with the original value as we encourage such misrecognition in SR, thus y (cid:48) (cid:54) = y is allowed.",
"Moreover, numerical terms are normalized to deal with the spoken number problem.",
"Most slot values could 3 https://pypi.org/project/fuzzywuzzy/ 4 https://pypi.org/project/gTTS/ 5 https://github.com/PaddlePaddle/ DeepSpeech be relocated by our automatic value detection rules.",
"The remainder slot values which vary too much to recognize are discarded along with their corresponding labels.",
"Speech Disfluency Disfluency is a common feature of spoken language.",
"We follow the categorization of disfluency in previous works (Lickley, 1995; Wang et al., 2020b): filled pauses, repeats, restarts, and repairs.",
"We present some examples of SD in Table 5. Filler words (um, uh) are injected into the sentence to present pauses.",
"Repeats are inserted by repeating the previous word.",
"In order to approximate the real distribution of disfluency, the interruption points of filled pauses and repeats are predicted by a Bi-LSTM+CRF model (Zayats et al., 2016) trained on an annotated dataset SwitchBoard (God-frey et al., 1992), which was collected from real human talks.",
"For restarts, we insert false start terms (I just) as a prefix of the utterance to simulate self-correction.",
"In LU task, we apply repairs on slot values to fool the models to predict wrong labels.",
"We take the original slot value as Repair (Cam-bridge) and take another value with the same slot name as Reparandum (Liverpool).",
"An edit term (sorry, I mean) is inserted between Repair and Reparandum to construct a correction.",
"The filler words, restart terms, and edit terms and their occurrence frequency are all sampled from their distribution in SwitchBoard.",
"In order to keep the spans of slot values intact, each span is regarded as one whole word.",
"No insertions are allowed to operate inside the span.",
"Therefore, SD augmentation do not change the original semantic and labels of the utterance, i.e. y (cid:48) = y .",
"6 As data division was not defined in Frames, we split the data into training/validation/test set with a ratio of 8:1:1.",
"semantic labels of user utterances are annotated.",
"In particular, MultiWOZ is one of the most challenging datasets due to its multi-domain setting and complex ontology, and we conduct our experiments on the latest annotation-enhanced version MultiWOZ 2.3 (Han et al., 2020), which provides cleaned annotations of user dialog acts (i.e. semantic labels).",
"The dialog act consists of four parts: domain, intent, slot names, and slot values.",
"The statistics of two datasets are shown in Table 6. Following Takanobu et al. (2020), we calculate overall F1 scores as evaluation metrics due to the multi-intent setting in LU.",
"The data are augmented with the inclusion of its copies, leading to a composite of all 4 augmentation types with equal proportion.",
"Other setups are described in each experiment 7 .",
"as-7 See appendix for the hyperparameter setting of LAUG.",
"pects by comparing our augmented utterances with the original counterparts.",
"We could find each augmentation method has a distinct effect on the data.",
"For instance, TP rewrites the text without changing the original meaning, thus lexical and syntactic representations dramatically change, while most slot values remain unchanged.",
"In contrast, SR makes the lowest change rate in characters and words but modifies the most slot values due to the speech misrecognition.",
"To ensure the quality of our augmented test set, we conduct human annotation on 1,000 sampled utterances in each augmented test set of MultiWOZ.",
"We ask annotators to check whether our augmented utterances are reasonable and our auto-detected value annotations are correct (two true-or-false questions).",
"According to the feature of each augmentation method, different evaluation protocols are used.",
"For TP and SD, annotators check whether the meaning of utterances and dialog acts are unchanged.",
"For WP, changing slot values is allowed due to slot value replacement, but the slot name should be the same.",
"For SR, annotators are asked to judge on the similarity of pronunciation rather than semantics.",
"In summary, all the high scores in Table 7 demonstrate that LAUG makes reasonable augmented examples.",
"LU models roughly fall into two categories: classification-based and generation-based models.",
"Classification based models (Hakkani-Tur et al., 2016; Goo et al., 2018) extract semantics by intent detection and slot tagging.",
"Intent detection is commonly regarded as a multi-label classification task, and slot tagging is often treated as a sequence labeling task with BIO format (Ramshaw and Marcus, 1999), as shown in Fig. 2a.",
"Generation-based mod-Model Train Ori.",
"els (Liu and Lane, 2016; Zhao and Feng, 2018) generate a dialog act containing intent and slot values.",
"They treat LU as a sequence-to-sequence problem and transform a dialog act into a sequential structure as shown in Fig. 2b.",
"Five base models with different categories are used in the experiments, as shown in Table 9.",
"To support a multi-intent setting in classification-based models, we decouple the LU process as follows: first perform domain classification and intent detection, then concatenate two special tokens which indicate the detected domain and intent (e.g. [ restaurant ][ inform ] ) at the beginning of the input sequence, and last encode the new sequence to predict slot tags.",
"In this way, the model can address overlapping slot values when values are shared in different dialog acts.",
"We conduct robustness testing on all three capacities for five base models using four augmentation methods in LAUG.",
"All baselines are first trained on the original datasets, then finetuned on the augmented datasets.",
"Overall F1-measure performance on Frames and MultiWOZ is shown in Table 8.",
"All experiments are conducted over 5 runs, and averaged results are reported.",
"Robustness for each capacity can be measured by performance drops on the corresponding augmented test sets.",
"All models achieve some performance recovery on augmented test sets after trained on the augmented data, while keeping a comparable result on the original test set.",
"This indicates the effectiveness of LAUG in improving the model's robustness.",
"We observe that pre-trained models outperform non-pre-trained ones on both original and augmented test sets.",
"Classification-based models have better performance and are more robust than generation-based models.",
"ToD-BERT, the state-93.44 93.24 93.32 92.74 92.82 91.08 92.02 92.23 92.06 92.36 89.07 89.32 89.45 89.78 90.19 88.02 89.55 89.86 89.73 90.23 90.45 92.57 92.71 92.77 93.19 89.66 90.87 91.06 91.09 91.49 84 85 86 87 88 89 90 91 92 93 94 0.1 0.5 1 2 4 F 1 m e a s u re Augmentation Ratio Ori.",
"of-the-art model which was further pre-trained on task-oriented dialog data, has comparable performance with BERT.",
"With most augmentation methods, ToD-BERT shows slightly better robustness than BERT.",
"Since the data volume of Frames is far less than that of MultiWOZ, the performance improvement of pre-trained models on Frames is larger than that on MultiWOZ.",
"Due to the same reason, augmented training data benefits the non-pre-trained models performance of on Ori.",
"test set more remarkably in Frames where data is not sufficient.",
"Among the four augmentation methods, SR has the largest impact on the models' performance, and SD comes the second.",
"The dramatic performance drop when testing on SR and SD data indicates that robustness for speech characteristics may be the most challenging issue.",
"Fig. 3 shows how the performance of BERT and GPT-2 changes on MultiWOZ when the ratio of augmented training data to the original data varies from 0.1 to 4.0.",
"F1 scores on augmented test sets increase when there are more augmented data for training.",
"The performance of BERT on augmented test sets is improved when augmentation ratio is less than 0.5 but becomes almost unchanged after 0.5 while GPT-2 keeps increasing stably.",
"This result shows the different characteristics between classification-based models and generation-based models when finetuned with augmented data.",
"Between augmentation approaches In order to study the influence of each augmentation approach",
"in LAUG, we test the performance changes when one augmentation approach is removed from constructing augmented training data.",
"Results on MultiWOZ are shown in Table 10.",
"Large performance decline on each augmented test set is observed when the corresponding augmentation approach is removed in constructing training data.",
"The performance after removing an augmentation method is comparable to the one without augmented training data.",
"Only slight changes are observed without other approaches.",
"These results indicate that our four augmentation approaches are relatively orthogonal.",
"Original EDA consists of four functions as described in Table 2.",
"Performance differences (Diff.) can reflect the influences of those components in Table 11a.",
"The additional function of our SC-EDA is slot value replacement.",
"We can also observe an increase in performance when it is removed, especially for MILU.",
"This implies a lack of LU robustness in detecting unseen entities.",
"Table 11b shows the results of ablation study on SD.",
"Among the four types of disfluencies described in Table 5, repairs has the largest impact on models' performance.",
"The performance is also affected by pauses but to a less extent.",
"The influences of repeats and restarts are small, which indicates that neural models are robust to handle these two problems.",
"In order to test whether the data automatically augmented by LAUG can reflect and alleviate practical robustness problems, we conduct a real user evaluation.",
"We collected 240 speech utterances from real humans as follows: First, we sampled 120 combinations of DA from the test set of MultiWOZ.",
"Given a combination, each user was asked to speak two utterances with different expressions, in their own language habits.",
"Then the audio signals were recognized into text using DeepSpeech2, thereby constructing a new test set in real scenarios 8 .",
"Results on this real test set are shown in Table 12.",
"The performance on the real test set is substantially lower than that on Ori.",
"and",
"Avg., indicating that real user evaluation is much more challenging.",
"This is because multiple robustness issues may be included in one real case, while each augmentation method in LAUG evaluates them separately.",
"Despite the difference, model performance on the real data is remarkably improved after every model is finetuned on the augmented data, verifying that LAUG effectively enhances the model's real-world robustness.",
"Table 13 investigates which error type the model has made on the real test set by manually checking all the error outputs of BERT Ori.",
"Others are the error cases which are not caused by robustness issues, for example, because of the model's poor performance.",
"It can be observed that the model seriously suffers to LU robustness (over 70%), and that almost half of the error is due to Language Variety.",
"We find that this is because there are more diverse expressions in real user evaluation than in the original data.",
"After augmented training, we can observe that the number of error cases of Speech Characteristics and Noise Perturbation is relatively decreased.",
"This shows that BERT Aug. can solve these two kinds of problems better.",
"Noting that the sum of four percentages is over 100% since 25% error cases involve multiple robustness issues.",
"8 See appendix for details on real data collection.",
"Robustness in LU has always been a challenge in task-oriented dialog.",
"Several studies have investigated the model's sensitivity to the collected data distribution, in order to prevent models from over-fitting to the training data and improve robustness in the real world.",
"Kang et al. (2018) collected dialogs with templates and paraphrased with crowd-sourcing to achieve high coverage and diversity in training data.",
"Dinan et al. (2019) proposed a training schema that involves human in the loop in dialog systems to enhance the model's defense against human attack in an iterative way.",
"Ganhotra et al. (2020) injected natural perturbation into the dialog history manually to refine over-controlled data generated through crowd-sourcing.",
"All these methods require laborious human intervention.",
"This paper aims to provide an automatic way to test the LU robustness in task-oriented dialog.",
"Various textual adversarial attacks (Zhang et al., 2020a) have been proposed and received increasing attentions these years to measure the robustness of a victim model.",
"Most attack methods perform white-box attacks (Papernot et al., 2016; Li et al., 2019; Ebrahimi et al., 2018) based on the model's internal structure or gradient signals.",
"Even some black-box attack models are not purely black-box, which require the prediction scores (classification probabilities) of the victim model (Jin et al., 2020; Ren et al., 2019; Alzantot et al., 2018).",
"However, all these methods address random perturbation but do not consider linguistic phenomena to evaluate the real-life generalization of LU models.",
"While data augmentation can be an efficient method to address data sparsity, it can improve the generalization abilities and measure the model robustness as well (Eshghi et al., 2017).",
"Paraphrasing that rewrites the utterances in dialog has been used to get diverse representation and thus enhancing robustness (Ray et al., 2018; Zhao et al., 2019; Iyyer et al., 2018).",
"Word-level operations (Kolomiyets et al., 2011; Li and Qiu, 2020; Wei and Zou, 2019) including replacement, insertion, and deletion were also proposed to increase language variety.",
"Other studies (Shah et al., 2019; Xu and Sarikaya, 2014) worked on the out-of-vocabulary problem when facing unseen user expression.",
"Some other research 9 See appendix for case study.",
"focused on building robust spoken language understanding (Zhu et al., 2018; Henderson et al., 2012; Huang and Chen, 2019) from audio signals beyond text transcripts.",
"Simulating ASR errors (Schatz-mann et al., 2007; Park et al., 2019; Wang et al., 2020a) and speaker disfluency (Wang et al., 2020b; Qader et al., 2018) can be promising solutions to enhance robustness to voice input when only textual data are provided.",
"As most work tackles LU robustness from only one perspective, we present a comprehensive study to reveal three critical issues in this paper, and shed light on a thorough robustness evaluation of LU in dialog systems.",
"In this paper, we present a systematic robustness evaluation of language understanding (LU) in task-oriented dialog from three aspects: language variety , speech characteristics , and noise perturbation .",
"Accordingly, we develop four data augmentation methods to approximate these language phenomena.",
"In-depth experiments and analysis are conducted on MultiWOZ and Frames, with both classificationand generation-based LU models.",
"The performance drop of all models on augmented test data indicates that these robustness issues are challenging and critical, while pre-trained models are relatively more robust to LU.",
"Ablation studies are carried out to show the effect and orthogonality of each augmentation approach.",
"We also conduct a real user evaluation and verifies that our augmentation methods can reflect and help alleviate real robustness problems.",
"Existing and future dialog models can be evaluated in terms of robustness with our toolkit and data, as our augmentation model does not depend on any particular LU models.",
"Moreover, our proposed robustness evaluation scheme is extensible.",
"In addition to the four approaches in LAUG, more methods to evaluate LU robustness can be considered in the future.",
"This work was partly supported by the NSFC projects (Key project with No. 61936010 and regular project with No. 61876096).",
"This work was also supported by the Guoqiang Institute of Tsinghua University, with Grant No. 2019GQG1 and 2020GQG0005.",
"We would like to thank colleagues from HUAWEI for their constant support and valuable discussion."
] | [
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"other",
"other",
"other"
] |
[
"Linguistic Code-switching (CS) is still an understudied phenomenon in natural language processing.",
"The NLP community has mostly focused on monolingual and multi-lingual scenarios, but little attention has been given to CS in particular.",
"This is partly because of the lack of resources and annotated data, despite its increasing occurrence in social media platforms.",
"In this paper, we aim at adapting monolingual models to code-switched text in various tasks.",
"Specifically, we transfer English knowledge from a pre-trained ELMo model to different code-switched language pairs (i.e., Nepali-English, Spanish-English, and Hindi-English) using the task of language identification.",
"Our method, CS-ELMo, is an extension of ELMo with a simple yet effective position-aware attention mechanism inside its character convolutions.",
"We show the effectiveness of this transfer learning step by outperforming multilingual BERT and homologous CS-unaware ELMo models and establishing a new state of the art in CS tasks, such as NER and POS tagging.",
"Our technique can be expanded to more English-paired code-switched languages, providing more resources to the CS community.",
"Although linguistic code-switching (CS) is a common phenomenon among multilingual speakers, it is still considered an understudied area in natural language processing.",
"The lack of annotated data combined with the high diversity of languages in which this phenomenon can occur makes it diffi-cult to strive for progress in CS-related tasks.",
"Even though CS is largely captured in social media platforms, it is still expensive to annotate a sufficient amount of data for many tasks and languages.",
"Additionally, not all the languages have the same incidence and predominance, making annotations impractical and expensive for every combination Hindi-English Tweet Original: Keep calm and keep kaam se kaam",
"of languages.",
"Nevertheless, code-switching often occurs in language pairs that include English (see examples in Figure 1).",
"These aspects lead us to explore approaches where English pre-trained models can be leveraged and tailored to perform well on code-switching settings.",
"In this paper, we study the CS phenomenon using English as a starting language to adapt our models to multiple code-switched languages, such as Nepali-English, Hindi-English and Spanish-English.",
"In the first part, we focus on the task of language identification (LID) at the token level using ELMo (Peters et al., 2018) as our reference for English knowledge.",
"Our hypothesis is that English pre-trained models should be able to recognize whether a word belongs to English or not when such models are fine-tuned with code-switched text.",
"To accomplish that, we introduce CS-ELMo, an extended version of ELMo that contains a position-aware hierarchical attention mechanism over ELMo's character n-gram representations.",
"These enhanced representations allow the model to see the location where particular n-grams occur within a word (e.g., affixes or lemmas) and to associate such behaviors with one language or another.",
"1 With the help of this mechanism, our models consistently outperform the state of the art on LID for Nepali-English (Solorio et al., 2014), Spanish-English (Molina et al., 2016), and Hindi-English (Mave et al., 2018).",
"Moreover, we conduct experiments that emphasize the importance of the position-aware hierarchical attention and the different effects that it can have based on the similarities of the code-switched languages.",
"In the second part, we demonstrate the effectiveness of our CS-ELMo models by further fine-tuning them on tasks such as NER and POS tagging.",
"Specifically, we show that the resulting models significantly outperform multilingual BERT and their homologous ELMo models directly trained for NER and POS tagging.",
"Our models establish a new state of the art for Hindi-English POS tagging (Singh et al., 2018) and Spanish-English NER (Aguilar et al., 2018).",
"Our contributions can be summarized as follows: 1) we use transfer learning from models trained on a high-resource language (i.e., English) and effectively adapt them to the code-switching setting for multiple language pairs on the task of language identification; 2) we show the effectiveness of transferring a model trained for LID to downstream code-switching NLP tasks, such as NER and POS tagging, by establishing a new state of the art; 3) we provide empirical evidence on the importance of the enhanced character n-gram mechanism, which aligns with the intuition of strong morphological clues in the core of ELMo (i.e., its convolutional layers); and 4) our CS-ELMo model is self-contained, which allows us to release it for other researchers to explore and replicate this technique on other code-switched languages.",
"2 2 Related Work Transfer learning has become more practical in the last years, making possible to apply very large neural networks to tasks where annotated data is limited (Howard and Ruder, 2018; Peters et al., 1 Note that there are more than two labels in the LID tagset, as explained in Section",
"2018; Devlin et al., 2019).",
"CS-related tasks are good candidates for such applications, since they are usually framed as low-resource problems.",
"However, previous research on sequence labeling for code-switching mainly focused on traditional ML techniques because they performed better than deep learning models trained from scratch on limited data (Yirmibesoglu and Eryigit, 2018; Al-Badrashiny and Diab, 2016).",
"Nonetheless, some researchers have recently shown promising results by using pre-trained monolingual embeddings for tasks such as NER (Trivedi et al., 2018; Winata et al., 2018) and POS tagging (Soto and Hirschberg, 2018; Ball and Garrette, 2018).",
"Other efforts include the use of multilingual sub-word embeddings like fastText (Bojanowski et al., 2017) for LID (Mave et al., 2018), and cross-lingual sentence embeddings for text classification like LASER (Schwenk, 2018; Schwenk and Li, 2018; Schwenk and Douze, 2017), which is capable of handling code-switched sentences.",
"These results show the potential of pre-trained knowledge and they motivate our efforts to further explore transfer learning in code-switching settings.",
"Our work is based on ELMo (Peters et al., 2018), a large pre-trained language model that has not been applied to CS tasks before.",
"We also use attention (Bahdanau et al., 2015) within ELMo's convolutions to adapt it to code-switched text.",
"Even though attention is an effective and successful mechanism in other NLP tasks, the code-switching literature barely covers such technique (Sitaram et al., 2019).",
"Wang et al. (2018) use a different attention method for NER, which is based on a gated cell that learns to choose appropriate monolingual embeddings according to the input text.",
"Recently, Winata et al. (2019) proposed multilingual meta embeddings (MME) combined with self-attention (Vaswani et al., 2017).",
"Their method establishes a state of the art on Spanish-English NER by heavily relying on monolingual embeddings for every language in the code-switched text.",
"Our model outperforms theirs by only fine-tuning a generic CS-aware model, without relying on task-specific designs.",
"Another contribution of our work are position embeddings, which have not been considered for code-switching either.",
"These embeddings, combined with CNNs, have proved useful in computer vision (Gehring et al., 2017); they help to localize non-spatial features extracted by convolutional networks within an image.",
"We apply the same prin-ciple to code-switching: we argue that character n-grams without position information may not be enough for a model to learn the actual morphological aspects of the languages (e.g., affixes or lemmas).",
"We empirically validate those aspects and discuss the incidence of such mechanism in our experiments.",
"ELMo is a character-based language model that provides deep contextualized word representations (Peters et al., 2018).",
"We choose ELMo for this study for the following reasons: 1) it has been trained on a large amount of English data as a general-purpose language model and this aligns with the idea of having English knowledge as starting point; 2) it extracts morphological information out of character sequences, which is essential for our case since certain character n-grams can reveal whether a word belongs to one language or another; and 3) it generates powerful word representations that account for multiple meanings depending on the context.",
"Nevertheless, some aspects of the standard ELMo architecture could be improved to take into account more linguistic properties.",
"In Section 3.1, we discuss these aspects and propose the position-aware hierarchical attention mechanism inside ELMo.",
"In Section 3.2 and Section 3.3, we describe our overall sequence labeling model and the training details, respectively.",
"ELMo convolves character embeddings in its first layers and uses the resulting convolutions to represent words.",
"During this process, the convolutional layers are applied in parallel using different kernel sizes, which can be seen as character n-gram feature extractors of different orders.",
"The feature maps per n-gram order are max-pooled to reduce the dimensionality, and the resulting single vectors per n-gram order are concatenated to form a word representation.",
"While this process has proven effective in practice, we notice the following shortcomings:",
"1. Convolutional networks do not account for the positions of the character n-grams (i.e., convolutions do not preserve the sequential order), losing linguistic properties such as affixes.",
"2. ELMo down-samples the outputs of its convolutional layers by max-pooling over the feature maps.",
"However, this operation is not ideal to adapt to new morphological patterns from other languages as the model tends to discard patterns from languages other than English.",
"To address these aspects, we introduce CS-ELMo, an extension of ELMo that incorporates a position-aware hierarchical attention mechanism that enhances ELMo's character n-gram representations.",
"This mechanism is composed of three elements: position embeddings, position-aware attention, and hierarchical attention.",
"Figure 2A describes the overall model architecture, and Figure 2B details the components of the enhanced character n-gram mechanism.",
"Position embeddings.",
"Consider the word x of character length l , whose character n-gram vectors are ( x 1 , x 2 , . . . , x l j +1 ) for an n-gram order j { 1 , 2 , . . . , n } .",
"3 The n-gram vector x i R c is the output of a character convolutional layer, where c is the number of output channels for that layer.",
"Also, consider n position embedding matrices, one per n-gram order, { E 1 , E 2 , . . . , E n } defined as E j R ( k j +1) e where k is the maximum length of characters in a word (note that l k ), e is the dimension of the embeddings and j is the specific n-gram order.",
"Then, the position vectors for the sequence x are defined by p = ( p 1 , p 2 , . . . , p l j +1 ) where p i R e is the i -th vector from the position embedding matrix E j .",
"We use e = c to facilitate the addition of the position embeddings and the n-gram vectors.",
"4 Figure 2B illustrates the position embeddings for bi-grams and tri-grams.",
"Position-aware attention.",
"Instead of down-sampling with the max-pooling operation, we use an attention mechanism similar to the one introduced by Bahdanau et al. (2015).",
"The idea is to concentrate mass probability over the feature maps that capture the most relevant n-gram information along the word, while also considering positional information.",
"At every individual n-gram order, our attention mechanism uses the following equations: u i = v (cid:124) tanh(W x x i + p i + b x ) (1) i = exp( u i ) (cid:80) Nj =1 exp( u j ) , s.t. (cid:88) i =1 i = 1 (2) z = (cid:88) i =1 i x i (3) 3 ELMo has seven character convolutional layers, each layer with a kernel size from one to seven characters ( n = 7 ).",
"4 ELMo varies the output channels per convolutional layer, so the dimensionality of E j varies as well.",
"where W x R a c is a projection matrix, a is the dimension of the attention space, c is the number of channels for the n-gram order j , and p i is the position embedding associated to the x i n-gram vector.",
"v R a is the vector that projects from the attention space to the unnormalized scores, and i is a scalar that describes the attention probability associated to the x i n-gram vector.",
"z is the weighted sum of the input character n-gram vectors and the attention probabilities, which is our down-sampled word representation for the n-gram order j .",
"Note that this mechanism is used independently for every order of n-grams resulting in a set of n vectors { z 1 , z 2 , . . . , z n } from Equation",
"3. This allows the model to capture relevant information across individual n-grams before they are combined (i.e., processing independently all bi-grams, all tri-grams, etc.).",
"Hierarchical attention.",
"With the previous mechanisms we handle the problems aforementioned.",
"That is, we have considered positional information as well as the attention mechanism to down-sample the dimensionality.",
"These components retrieve one vector representation per n-gram order per word.",
"While ELMo simply concatenates the n-gram vectors of a word, we decide to experiment with another layer of attention that can prioritize n-gram vectors across all the orders.",
"We use a similar formulation to Equations 1 and 3, except that we do not have p i , and instead of doing the weighted sum, we concatenate the weighted inputs.",
"This concatenation keeps the original dimensionality expected in the upper layers of ELMo, while it also emphasizes which n-gram order should receive more attention.",
"We follow Peters et al. (2018) to use ELMo for sequence labeling.",
"They reported state-of-the-art performance on NER by using ELMo followed by a bidirectional LSTM layer and a linear-chain conditional random field (CRF).",
"We use this architecture as a backbone for our model (see Figure 2A), but we add some modifications.",
"The first modification is the concatenation of static English word embeddings to ELMo's word representation, such as Twitter (Pennington et al., 2014) and fastText (Bojanowski et al., 2017) embeddings similar to Howard and Ruder (2018) and Mave et al. (2018).",
"The idea is to enrich the context of the words by providing domain-specific embeddings and subword level embeddings.",
"The second modification is the concatenation of the enhanced character n-gram representation with the input to the CRF layer.",
"This emphasizes even further the extracted morphological patterns, so that they are present during inference time for the task at hand (i.e., not only LID, but also NER and POS tagging).",
"The last modification is the addition of a secondary task on a simplified 5 language identification label scheme (see Section 4 for more details), which only uses 5 The LID label set uses eight labels ( lang1 , lang2 , ne , mixed , ambiguous , fw , other , and unk ), but for the simplified LID label set, we only consider three labels ( lang1 , lang2 and other ) to predict only based on characters.",
"the output of the enhanced character n-gram mechanism.",
"Intuitively, this explicitly forces the model to associate morphological patterns (e.g., affixes, lemmas, etc.) to one or the other language.",
"We train the model by minimizing the negative log-likelihood loss of the CRF classifier.",
"Additionally, we force the model to minimize a secondary loss over the simplified LID label set by only using the morphological features from the enhanced character n-gram mechanism (see the softmax layer in Figure 2A).",
"The overall loss L of our model is defined as follows: L task t = 1 NN (cid:88) i y i log p ( y i | ) (4) L = L task 1 + L task 2 + | | (cid:88) k w 2 k (5) where L task 1 and L task 2 are the negative log-likelihood losses conditioned by the model parameters as defined in Equation",
"4. L task 1 is the loss of the primary task (i.e., LID, NER, or POS tag-ging), whereas L task 2 is the loss for the simplified LID task weighted by to smooth its impact on the model performance.",
"Both losses are the average over N tokens.",
"6 The third term provides (cid:96) 2 regularization, and is the penalty weight.",
"7 4 Datasets Language identification.",
"We experiment with code-switched data for Nepali-English, Spanish-English, and Hindi-English.",
"The first two datasets were collected from Twitter, and they were introduced at the Computational Approaches to Linguistic Code-Switching (CALCS) workshops in 2014 and 2016 (Solorio et al., 2014; Molina et al., 2016).",
"The Hindi-English dataset contains Twitter and Facebook posts, and it was introduced by Mave et al. (2018).",
"These datasets follow the CALCS label scheme, which has eight labels: lang1 (En-glish), lang2 (Nepali, Spanish, or Hindi), mixed , ambiguous , fw , ne , other , and unk .",
"We show the distribution of lang1 and lang2 in Table",
"1. Moreover, we add a second set of labels using a simplified LID version of the original CALCS label set.",
"The simplified label set uses lang1 , 6 While Equation 4 is formulated for a given sentence, in practice N is the number of tokens in a batch of sentences.",
"We use this 3-way token-level labels in the secondary loss of our model where only morphology, without any context, is being exploited.",
"This is because we are interested in predicting whether a word's morphology is associated to English more than to another language (or vice versa), instead of whether, for example, its morphology describes a named entity ( ne ).",
"Part-of-speech tagging.",
"Singh et al. (2018) provide 1,489 tweets (33,010 tokens) annotated with POS tags.",
"The labels are annotated using the universal POS tagset proposed by Petrov et al. (2012) with the addition of two labels: PART NEG and PRON WH .",
"This dataset does not provide training, development, or test splits due to the small number of samples.",
"Therefore, we run 5-fold cross validations and report the average scores.",
"Named entity recognition.",
"We use the Spanish-English NER corpus introduced in the 2018 CALCS competition (Aguilar et al., 2018), which contains a total of 67,223 tweets with 808,663 tokens.",
"The entity types are person , organization , location , group , title , product , event , time , and other , and the labels follow the BIO scheme.",
"We used the fixed training, development, and testing splits provided with the datasets to benchmark our models.",
"Importantly, Hindi and Nepali texts in these datasets appear transliterated using the English alphabet (see Figure 1).",
"The lack of a standardized transliteration process leads code-switchers to employ mostly ad-hoc phonological rules that conveniently use the English alphabet when they write in social media.",
"This behavior makes the automated processing of these datasets more challenging be-Exp ID Experiment Nepali-English Spanish-English Hindi-English Dev Test Dev Test Dev Test Approach 1 (Baseline models) Exp 1.1 ELMo 96.192 95.700 95.508 96.363 95.997 96.420 Exp 1.2 ELMo + BLSTM + CRF 96.320 95.882 95.615 96.748 96.545 96.717 Exp 1.3 ML-BERT 95.436 96.571 96.212 96.212 95.924 96.440 Approach 2 (Upon Exp 1.2) Exp 2.1 Attention on each n-gram 96.413 96.771 95.952 96.519 96.579 96.069 Exp 2.2 Position-aware attention on each n-gram 96.540 96.640 95.994 96.791 96.629 96.141 Exp 2.3 Position-aware hierarchical attention 96.582 96.798 96.072 96.692 96.705 96.186 Approach 3 (Upon Exp 2.3) Exp 3.1 Concatenating character n-grams at the top 96.485 96.761 96.033 96.775 96.665 96.188 Exp 3.2 Adding simplified LID (secondary) task 96.612 96.734 96.051 96.932 96.565 96.215 Exp 3.3 Adding static word embeddings 96.879 97.026 96.757 97.532 96.776 97.001 Comparison: Previous best published results Mave et al. (2018) -96.510 97.060 96.6045 96.840 Table 2: The results of incremental experiments on each LID dataset.",
"We describe our experiments for LID in Section 5.1, including insights of the optimized models.",
"In Section 5.2, the optimized LID models are further fine-tuned on downstream NLP tasks, such as NER and POS tagging, to show the effectiveness of our preliminary CS adaptation step.",
"We test for statistical significance across our incremental experiments following Dror et al. (2018), and we report p-values below 0 .",
"02 for LID.",
"We discuss hyperparameters and fine-tuning details in Appendix D. 5.1 Language Identification Approach",
"1. We establish three strong baselines using a vanilla ELMo (Exp 1.1), ELMo combined with BLSTM and CRF (Exp 1.2) as suggested by Peters et al. (2018), and a multilingual BERT (Exp 1.3) provided by Devlin et al. (2019).",
"We experiment with frozen weights for the core parameters of ELMo and BERT, but we find the best results when the full models are fine-tuned, which we report in Table",
"2. Approach",
"2. In the second set of experiments, we add the components of our mechanism upon ELMo combined with BLSTM and CRF (Exp 1.2).",
"We start by replacing the max-pooling operation with the attention layer at every individual n-gram order in Exp 2.1.",
"In Exp 2.2, we incorporate the position information.",
"The third experiment, Exp 2.3, adds the hierarchical attention across all n-gram order vectors.",
"It is worth noting that we experiment by accumulating consecutive n-gram orders, and we find that the performance stops increasing when n > 3 .",
"Intuitively, this can be caused by the small size of the datasets since n-gram features of greater order are infrequent and would require more data to be trained properly.",
"We apply our mechanism for n-gram orders in the set { 1, 2, 3 } , which we report in Table",
"2. Approach",
"3. For the third set of experiments, we focus on emphasizing the morphological clues extracted by our mechanism (Exp 2.3).",
"First, in Exp 3.1, we concatenate the enhanced character n-grams with their corresponding word representation before feeding the input to the CRF layer.",
"In POS System Dev F1 Test F1 ML-BERT 86.84 84.70 ELMo + BLSTM + CRF 87.42 88.12 Prev.",
"Exp 3.2, we add the secondary task over the previous experiment to force the model to predict the simplified LID labels by only using the morphological clues (i.e., no context is provided).",
"Finally, in Exp 3.3, we add static word embeddings that help the model to handle social media style and domain-specific words.",
"We achieve the best results on Exp 3.3, which outperforms both the baselines and the previous state of the art on the full LID label scheme (see Table 2).",
"However, to compare with other work, we also calculate the average of the weighted F1 scores over the labels lang1 and lang2 .",
"Table 3 shows a comparison of our results and the previous state of the art.",
"Note that, for Spanish-English and Hindi-English, the gap of improvement is reasonable, considering that similar gaps in the validation experiments are statistically significant.",
"In contrast, in the case of Nepali-English, we cannot determine whether our improvement is marginal or substantial since the authors only provide one decimal in their scores.",
"Nevertheless, Al-Badrashiny and Diab (2016) use a CRF with hand-crafted features (Al-Badrashiny and Diab, 2016), while our approach does not require any feature engineering.",
"We use LID to adapt the English pre-trained knowledge of ELMo to the code-switching setting, effectively generating CS-ELMo.",
"Once this is achieved, we fine-tune the model on downstream NLP tasks such as POS tagging and NER.",
"In this section, our goal is to validate whether the CS-ELMo model can improve over vanilla ELMo, multilingual BERT, and the previous state of the art for both tasks.",
"More specifically, we use our best architecture (Exp 3.3) from the LID experiments 1) without the code-switching adaptation, 2) with the code-switching NER System Dev F1 Test F1 ML-BERT 61.11 64.56 ELMo + BLSTM + CRF 59.91 63.53 Best at CALCS (Trivedi et al., 2018) -63.76 Prev.",
"adaptation and only retraining the inference layer, and 3) with the code-switching adaptation and retraining the entire model.",
"POS tagging experiments.",
"Table 4 shows our experiments on POS tagging using the Hindi-English dataset.",
"When we compare our CS-ELMO + BLSTM + CRF model without CS adaptation (Exp 4.1) against the baseline (ELMo + BLSTM + CRF), the performance remains similar.",
"This suggests that our enhanced n-gram mechanism can be added to ELMo without impacting the performance even if the model has not been adapted to CS.",
"Slightly better performance is achieved when the CS-ELMo has been adapted to code-switching, and only the BLSTM and CRF layers are retrained (Exp 4.2).",
"This result shows the convenience of our model since small improvements can be achieved faster by leveraging the already-learned CS knowledge while avoiding to retrain the entire model.",
"Nevertheless, the best performance is achieved by the adapted CS-ELMO + BLSTM + CRF when retraining the entire model (Exp 4.3).",
"Our results are better than the baselines and the previous state of the art.",
"Interestingly, our model improves over multilingual BERT, which is a powerful and significantly bigger model in terms of parameters.",
"Our intuition is that this is partly due to the word-piece tokenization process combined with the transliteration of Hindi.",
"The fact that we use the multilingual version of BERT does not necessarily help to handle transliterated Hindi, since Hindi is only present in BERT's vocabulary with the Devanagari script.",
"Indeed, we notice that in some tweets, the original number of tokens was almost doubled by the greedy tokenization process in BERT.",
"This behavior tends to degrade the syntactic and semantic Figure 3: Visualization of the tri-gram attention weights for the 2016 Spanish-English LID dataset.",
"information captured in the original sequence of tokens.",
"In contrast, ELMo generates contextualized word representations out of character sequences, which makes the model more suitable to adapt to the transliteration of Hindi.",
"NER experiments.",
"Table 5 contains our experiments on NER using the 2018 CALCS Spanish-English dataset.",
"Exp 5.1 shows that the enhanced n-gram mechanism can bring improvements over the ELMo + BLSTM + CRF baseline, even though the CS-ELMo has not been adapted to the code-switching setting.",
"However, better results are achieved when the CS-ELMo model incorporates the code-switching knowledge in both Exp 5.2 and 5.3.",
"Unlike the POS experiments 4.2 and 4.3, fixing the parameters of CS-ELMo model yields better results than updating them during training.",
"Our intuition is that, in the NER task, the model needs the context of both languages to recognize entities within the sentences, and having the code-switching knowledge fixed becomes ben-eficial.",
"Also, by freezing the CS-ELMo model, we can accelerate training because there is no back-propagation for the CS-ELMo parameters, which makes our code-switching adapatation very practical for downstream tasks.",
"Position embeddings.",
"Localizing n-grams within a word is an important contribution of our method.",
"We explore this mechanism by using our fine-tuned CS-ELMo to predict the simplified LID labels on the validation set from the secondary task (i.e., the predictions solely rely on morphology) in two scenarios.",
"The first one uses the position embeddings corresponding to the actual place of the character n-gram, whereas the second one chooses position embeddings randomly.",
"We notice a consistent de-cay in performance across the language pairs, and a variation in the confidence of the predicted classes.",
"The most affected language pair is Spanish-English, with an average difference of 0.18 based on the class probability gaps between both scenarios.",
"In contrast, the probability gaps in Hindi-English and Nepali-English are substantially smaller; their average differences are 0.11 and 0.09, respectively.",
"Position distribution.",
"Considering the previous analysis and the variations in the results, we gather insights of the attention distribution according to their n-gram positions (see position-aware attention in Section 3.1).",
"Although the distribution of the attention weights across n-gram orders mostly remain similar along the positions for all language pairs, Spanish-English has a distinctive concentration of attention at the beginning and end of the words.",
"This behavior can be caused by the differences and similarities between the language pairs.",
"For Spanish-English, the model may rely on in-flections of similar words between the languages, such as affixes.",
"On the other hand, transliterated Hindi and Nepali tend to have much less overlap with English words (i.e., words with few characters can overlap with English words), making the distinction more spread across affixes and lemmas.",
"Attention analysis.",
"Figure 3 shows the tri-gram attention weights in the Spanish-English LID dataset.",
"The model is able to pick up affixes that belong to one or the other language.",
"For instance, the tri-gram -ing is commonly found in English at the end of verbs in present progressive, like in the word com ing from the figure, but it also appears in Spanish at different places (e.g., ing eniero ) making the position information relevant.",
"On the contrary, the tri-grams aha and hah from the figure do not seem to rely on position information because the attention distribution varies along the words.",
"See more examples in Appendix E. Error analysis.",
"Morphology is very useful for LID, but it is not enough when words have similar spellings between the languages.",
"We inspect the predictions of the model, and find cases where, for example, miserable is gold-labeled as ambiguous but the model predicts a language (see the top-right tweet in Figure 3).",
"Although we find similar cases for Nepali-English and Hindi-English, it mostly happens for words with few characters (e.g., me , to , use ).",
"The model often gets such cases mislabeled due to the common spellings in both languages.",
"Although this should be handled by context, our contribution relies more on morphology than contextualization, which we leave for future work.",
"We present a transfer learning method from English to code-switched languages using the LID task.",
"Our method enables large pre-trained models, such as ELMo, to be adapted to code-switching settings while taking advantage of the pre-trained knowledge.",
"We establish new state of the art on LID for Nepali-English, Spanish-English, and Hindi-English.",
"Additionally, we show the effectiveness of our CS-ELMo model by further fine-tuning it for NER and POS tagging.",
"We outperform multilingual BERT and homologous ELMo models on Spanish-English NER and Hindi-Enlgish POS tagging.",
"In our ongoing research, we are investigating the expansion of this technique to language pairs where English may not be involved.",
"This work was supported by the National Science Foundation (NSF) on the grant #1910192.",
"We thank Deepthi Mave for providing general statistics of the code-switching datasets and Mona Diab for insightful discussions on the topic."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"objective",
"objective",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"result",
"result",
"objective",
"other",
"other"
] |
[
"Abstract Inferring missing links in knowledge graphs (KG) has attracted a lot of attention from the research community.",
"In this paper, we tackle a practical query answering task involving predicting the relation of a given entity pair.",
"We frame this prediction problem as an inference problem in a probabilistic graphical model and aim at resolving it from a variational inference perspective.",
"In order to model the relation between the query entity pair, we assume that there exists an underlying latent variable (paths connecting two nodes) in the KG, which carries the equivalent semantics of their relations.",
"However, due to the intractability of connections in large KGs, we propose to use variation inference to maximize the evidence lower bound.",
"More specifically, our framework (DIVA ) is composed of three modules, i.e. a posterior approximator, a prior (path finder), and a likelihood (path reasoner).",
"By using variational inference, we are able to incorporate them closely into a unified architecture and jointly optimize them to perform KG reasoning.",
"With active interactions among these sub-modules, DIVA is better at handling noise and coping with more complex reasoning scenarios.",
"In order to evaluate our method, we conduct the experiment of the link prediction task on multiple datasets and achieve state-of-the-art performances on both datasets.",
"Large-scaled knowledge graph supports a lot of downstream natural language processing tasks like question answering, response generation, etc.",
"However, there are large amount of important facts missing in existing KG, which has significantly limited the capability of KG's application.",
"Therefore, automated reasoning, or the ability for computing systems to make new inferences from the observed evidence, has attracted lots of attention from the research community.",
"In recent years, there are surging interests in designing machine learning algorithms for complex reasoning tasks, especially in large knowledge graphs (KGs) where the countless entities and links have posed great challenges to traditional logic-based algorithms.",
"Specifically, we situate our study in this large KG multi-hop reasoning scenario, where the goal is to design an automated inference model to complete the missing links between existing entities in large KGs.",
"For examples, if the KG contains a fact like president ( BarackObama , USA ) and spouse ( Michelle, BarackObama ), then we would like the machines to complete the missing link livesIn ( Michelle , USA ) automatically.",
"Systems for this task are essential to complex question answering applications.",
"To tackle the multi-hop link prediction problem, various approaches have been proposed.",
"Some earlier works like PRA (Lao et al., 2011; Gardner et al., 2014, 2013) use bounded-depth random walk with restarts to obtain paths.",
"More recently, DeepPath (Xiong et al., 2017) and MINERVA (Das et al., 2018), frame the path-finding problem as a Markov Decision Process (MDP) and utilize reinforcement learning (RL) to maximize the expected return.",
"Another line of work along with ours are Chain-of-Reasoning (Das et al., 2016) and Compositional Reasoning (Neelakantan et al., 2015), which take multi-hop chains learned by PRA as input and aim to infer its relation.",
"Here we frame the KG reasoning task as a two sub-steps, i.e. Path-Finding and Path-Reasoning.",
"We found that most of the related research is only focused on one step, which leads to major drawbackslack of interactions between these two steps.",
"More specifically, DeepPath (Xiong et al., 2017) and MINERVA (Das et al., 2018) can be interpreted as enhancing the Path-Finding step while compositional reasoning (Neelakantan et al., 2015) and chains of rea-1823 soning (Das et al., 2016) can be interpreted as enhancing the Path-Reasoning step.",
"DeepPath is trained to find paths more efficiently between two given entities while being agnostic to whether the entity pairs are positive or negative, whereas MINERVA learns to reach target nodes given an entity-query pair while being agnostic to the quality of the searched path 1 .",
"In contrast, chains of reasoning and compositional reasoning only learn to predict relation given paths while being agnostic to the path-finding procedure.",
"The lack of interaction prevents the model from understanding more diverse inputs and make the model very sensitive to noise and adversarial samples.",
"In order to increase the robustness of existing KG reasoning model and handle noisier environments, we propose to combine these two steps together as a whole from the perspective of the latent variable graphic model.",
"This graphic model views the paths as discrete latent variables and relation as the observed variables with a given entity pair as the condition, thus the path-finding module can be viewed as a prior distribution to infer the underlying links in the KG.",
"In contrast, the path-reasoning module can be viewed as the likelihood distribution, which classifies underlying links into multiple classes.",
"With this assumption, we introduce an approximate posterior and design a variational auto-encoder (Kingma and Welling, 2013) algorithm to maximize the evidence lower-bound.",
"This variational framework closely incorporates two modules into a unified framework and jointly train them together.",
"By active cooperations and interactions, the path finder can take into account the value of searched path and resort to the more meaningful paths.",
"Meanwhile, the path reasoner can receive more diverse paths from the path finder and generalizes better to unseen scenarios.",
"Our contributions are three-fold: We introduce a variational inference framework for KG reasoning, which tightly integrates the path-finding and path-reasoning processes to perform joint reasoning.",
"We have successfully leveraged negative samples into training and increase the robustness of existing KG reasoning model.",
"The rest of the paper is organized as follow.",
"In Section 2 we will outline related work on KG embedding, multi-hop reasoning, and variational auto-encoder.",
"We describe our variational knowledge reasoner DIVA in Section 3.",
"Experimental results are presented in Section 4, and we conclude in Section 5.",
"Embedding methods to model multi-relation data from KGs have been extensively studied in recent years (Nickel et al., 2011; Bordes et al., 2013; Socher et al., 2013; Lin et al., 2015; Trouillon et al., 2017).",
"From a representation learning perspective, all these methods are trying to learn a projection from symbolic space to vector space.",
"For each triple ( e s , r, e d ) in the KG, various score functions can be defined using either vector or matrix operations.",
"Although these embedding approaches have been successful capturing the semantics of KG symbols (entities and relations) and achieving impressive results on knowledge base completion tasks, most of them fail to model multi-hop relation paths, which are indispensable for more complex reasoning tasks.",
"Besides, since all these models operate solely on latent space, their predictions are barely interpretable.",
"The Path-Ranking Algorithm (PRA) method is the first approach to use a random walk with restart mechanism to perform multi-hop reasoning.",
"Later on, some research studies (Gardner et al., 2014, 2013) have revised the PRA algorithm to compute feature similarity in the vector space.",
"These formula-based algorithms can create a large fan-out area, which potentially undermines the inference accuracy.",
"To mitigate this problem, a Convolutional Neural Network(CNN)-based model (Toutanova et al., 2015) has been proposed to perform multi-hop reasoning.",
"Recently, DeepPath (Xiong et al., 2017) and MINERVA (Das et al., 2018) view the multi-hop reasoning problem as a Markov Decision Process, and leverages REINFORCE (Williams, 1992) to efficiently search for paths in large knowledge graph.",
"These two methods are reported to achieve state-of-the-art results, however, these two models both use heuristic rewards to drive the policy 1824 search, which could make their models sensitive to noises and adversarial examples.",
"Variational Auto-Encoder (Kingma and Welling, 2013) is a very popular algorithm to perform approximate posterior inference in large-scale scenarios, especially in neural networks.",
"Recently, VAE has been successfully applied to various complex machine learning tasks like image generation (Mansimov et al., 2015), machine translation (Zhang et al., 2016), sentence generation (Guu et al., 2017a) and question answering (Zhang et al., 2017).",
"Zhang et al. (2017) is closest to ours, this paper proposes a variational framework to understand the variability of human language about entity referencing.",
"In contrast, our model uses a variational framework to cope with the complex link connections in large KG.",
"Unlike the previous research in VAE, both Zhang et al. (2017) and our model uses discrete variables as the latent representation to infer the semantics of given entity pairs.",
"More specifically, we view the generation of relation as a stochastic process controlled by a latent representation, i.e. the connected multi-hop link existed in the KG.",
"Though the potential link paths are discrete and countable, its amount is still very large and poses challenges to direct optimization.",
"Therefore, we resort to variational auto-encoder as our approximation strategy.",
"Here we formally define the background of our task.",
"Let E be the set of entities and R be the set of relations.",
"Then a KG is defined as a collection of triple facts ( e s , r, e d ) , where e s , e d E and r R .",
"We are particularly interested in the problem of relation inference, which seeks to answer the question in the format of ( e s , ? , e d ) , the problem setting is slightly different from standard link prediction to answer the question of ( e s , r, ?) .",
"Next, in order to tackle this classification problem, we assume that there is a latent representation for given entity pair in the KG, i.e. the collection of linked paths, these hidden variables can reveal the underlying semantics between these two entities.",
"Therefore, the link classification problem can be decomposed into two modules acquire underlying paths (Path Finder) and infer relation from latent representation (Path Reasoner).",
"Path Finder The state-of-the-art approach (Xiong et al., 2017; Das et al., 2018) is to view this process as a Markov Decision Process (MDP).",
"A tuple < S, A, P > is defined to represent the MDP, where S denotes the current state, e.g. the current node in the knowledge graph, A is the set of available actions, e.g. all the outgoing edges from the state, while P is the transition probability describing the state transition mechanism.",
"In the knowledge graph, the transition of the state is deterministic, so we do not need to model the state transition P .",
"Path Reasoner The common approach (Lao et al., 2011; Neelakantan et al., 2015; Das et al., 2016) is to encode the path as a feature vector and use a multi-class discriminator to predict the unknown relation.",
"PRA (Lao et al., 2011) proposes to encode paths as binary features to learn a log-linear classifier, while (Das et al., 2016) applies recurrent neural network to recursively encode the paths into hidden features and uses vector similarity for classification.",
"Here we draw a schematic diagram of our model in Figure",
"1. Formally, we define the objective function for the general relation classification problem as follows: Obj = X ( e s ,r,e d ) D log p ( r | ( e s , e d )) = X ( e s ,r,e d ) D log XL p ( L | ( e s , e d )) p ( r | L ) (1) where D is the dataset, ( e s , r, e d ) is the triple contained in the dataset, and L is the latent connecting paths.",
"The evidence probability p ( r | ( e s , e d )) can be written as the marginalization of the product of two terms over the latent space.",
"However, this evidence probability is intractable since it requires summing over the whole latent link space.",
"Therefore, we propose to maximize its variational lower bound as follows: ELBO = EL q ( L | r, ( e s ,e d )) [log p ( r | L )] KL ( q ( L | r, ( e s , e d )) || p ( L | ( e s , e d ))) (2) 1825 \" ## $ #% %% Path connecting entity pair Query: ( \" ,?, $ ) Relation: Triple: ( \" ,, $ ) Posterior (| \" , $ ,) Figure 1: The probabilistic graphical model of our proposed approach.",
"Specifically, the ELBO (Kingma and Welling, 2013) is composed of three different terms likelihood p (cid:0) r | L ) , prior p (cid:0) L | ( e s , e t )) , and posterior q (cid:0) L | ( e s , e d ) , r ) .",
"In this paper, we use three neural network models to parameterize these terms and then follow (Kingma and Welling, 2013) to apply variational auto-encoder to maximize the approximate lower bound.",
"We describe these three models in details below: Path Reasoner (Likelihood).",
"Here we propose a path reasoner using Convolutional Neural Networks (CNN) (LeCun et al., 1995) and a feed-forward neural network.",
"This model takes path sequence L = { a 1 , e 1 , , a i , e i , a n , e n } to output a softmax probability over the relations set R , where a i denotes the i -th intermediate relation and e i denotes the i -th intermediate entity between the given entity pair.",
"Here we first project them into embedding space and concatenate i-th relation embedding with i -th entity embedding as a combined vector, which we denote as { f 1 , f 2 , , f n } and f i R 2 E .",
"As shown in Figure 2, we pad the embedding sequence to a length of N .",
"Then we design three convolution layers with window size of (1 2 E ) , (2 2 E ) , (3 2 E ) , input channel size 1 and filter size D .",
"After the convolution layer, we use ( N 1) , ( N 1 1) , ( N 2 1) to max pool the convolution feature map.",
"Finally, we concatenate the three vectors as a combined vector F R 3 D .",
"Finally, we use two-layered MLP with intermediate hidden size of M to output a softmax distribution over all the relations set R .",
"where f denotes the convolution and max-pooling function applied to extract reasoning path feature F , and W r , b r denote the weights and bias for the output feed-forward neural network.",
"Path Finder (Prior).",
"Here we formulate the path finder p ( L | ( e s , e d )) as an MDP problem, and recursively predict actions (an outgoing relation-entity edge ( a, e ) ) in every time step based on the previous history h t 1 as follows: c t = ReLU ( W h [ h t ; e d ] + b h ) p (( a t +1 , e t +1 ) | h t , ) = softmax ( A t c t ) (4) where the h t RH denotes the history embedding, e d RE denotes the entity embedding, A t R | A | 2 E is outgoing matrix which stacks the concatenated embeddings of all outgoing edges 1826 and | A | denotes the number of outgoing edge, we use W h and b h to represent the weight and bias of the feed-forward neural network outputting feature vector c t R 2 E .",
"The history embedding h t is obtained using an LSTM network (Hochreiter and Schmidhuber, 1997) to encode all the previous decisions as follows: h t = LST M ( h t 1 , ( a t , e t )) (5) As shown in Figure 3, the LSTM-based path finder interacts with the KG in every time step and decides which outgoing edge ( a t +1 , e t +1 ) to follow, search procedure will terminate either the target node is reached or the maximum step is reached.",
"Approximate Posterior.",
"We formulate the posterior distribution q ( L | ( e s , e d ) , r ) following the similar architecture as the prior.",
"The main difference lies in the fact that posterior approximator is aware of the relation r , therefore making more relevant decisions.",
"The posterior borrows the history vector from finder as h t , while the feed-forward neural network is distinctive in that it takes the relation embedding also into account.",
"Formally, we write its outgoing distribution as follows: u t = ReLU ( W hp [ h t ; e d ; r ] + b hp ) q (( a t +1 , e t +1 ) | h t ; ) = softmax ( A t u t ) (6) where W hp and b hp denote the weight and bias for the feed-forward neural network.",
"In order to maximize the ELBO with respect to the neural network models described above, we follow VAE (Kingma and Welling, 2013) to interpret the negative ELBO as two separate losses and minimize these them jointly using a gradient descent:",
"this loss function is motivated to reconstruct the relation R from the latent variable L sampled from approximate posterior, optimizing this loss function jointly can not only help the approximate posterior to obtain paths unique to particular relation r , but also teaches the path reasoner to reason over multiple hops and predict the correct relation.",
"this loss function is motivated to push the prior distribution towards the posterior distribution.",
"The intuition of this loss lies in the fact that an entity pair already implies their relation, therefore, we can teach the path finder to approach the approximate posterior as much as possible.",
"During test-time when we have no knowledge about relation, we use path finder to replace posterior approximator to search for high-quality paths.",
"Derivatives.",
"We show the derivatives of the loss function with respect to three different models.",
"For the approximate posterior, we re-weight the KL-diverge loss and design a joint loss function as follows: J = JR + w KLJKL (9) where w KL is the re-weight factor to combine these two losses functions together.",
"where f re ( L ) = log p + w KL log p q denotes the probability assigned by path reasoner.",
"In practice, we found that the KL-reward term log p q causes severe instability during training, so we fi-nally leave this term out by setting w KL as",
"0. For the path reasoner, we also optimize its parameters with regard to the reconstruction as follows: J R = EL q ( L ) log p ( r | L ) (11) For the path finder, we optimize its parameters with regard to the KL-divergence to teach it to infuse the relation information into the found links.",
"Train & Test During training time, in contrast to the preceding methods like Das et al. (2018); Xiong et al. (2017), we also exploit negative samples by introducing an pseudo n/a relation,",
"which indicates no-relation between two entities.",
"Therefore, we manage to decompose the data sample ( e q , r q , [ e 1 , e 2 , , e + n ]) into a series of tuples ( e q , r 0 q , e i ) , where r 0 q = r q for positive samples and r 0 q = n/a for negative samples.",
"During training, we alternatively update three submodules with SGD.",
"During test, we apply the path-finder to beam-search the top paths for all tuples and rank them based on the scores assign by path-reasoner.",
"More specifically, we demonstrate the pseudo code in Algorithm",
"1. 3.4 Discussion We here interpret the update of the posterior approximator in equation Equation 10 as a special case of REINFORCE (Williams, 1992), where we use Monte-Carlo sampling to estimate the expected return log p ( r | L ) for current posterior policy.",
"This formula is very similar to DeepPath and MINERVA (Xiong et al., 2017; Das et al., 2018) in the sense that path-finding process is described as an exploration process to maximize the pol-icy's long-term reward.",
"Unlike these two models assigning heuristic rewards to the policy, our model assigns model-based reward log p ( r | L ) , which is known to be more sophisticated and considers more implicit factors to distinguish between good and bad paths.",
"Besides, our update formula for path reasoner Equation 11 is also similar to chain-of-reasoning (Das et al., 2016), both models are aimed at maximizing the likelihood of relation given the multi-hop chain.",
"However, our model is distinctive from theirs in a sense that the obtained paths are sampled from a dynamic policy, by exposing more diverse paths to the path reasoner, it can generalize to more conditions.",
"By the active interactions and collaborations of two models, DIVA is able to comprehend more complex inference scenarios and handle more noisy environments.",
"To evaluate the performance of DIVA , we explore the standard link prediction task on two different-sized KG datasets and compare with the state-of-the-art algorithms.",
"Link prediction is to rank a list of target entities ( e 1 , e 2 , , e + n ) given a query entity e q and query relation r q .",
"The dataset is arranged in the format of ( e q , r q , [ e 1 , e 2 , , e + n ]) , and the evaluation score (Mean Averaged Precision, MAP) is based on the ranked position of the positive sample.",
"We perform experiments on two datasets, and the details of the statistics are described in Table",
"1. The samples of FB15k-237 (Toutanova et al., 2015) are sampled from FB15k (Bordes et al., 2013), here we follow DeepPath (Xiong et al., 2017) to select 20 relations including Sports, Locations, Film, etc.",
"Our NELL dataset is downloaded from the released dataset 2 , which contains 12 relations for evaluation.",
"Besides, both datasets contain negative samples obtained by using the PRA code released by Lao et al. (2011).",
"For each query r q , we remove all the triples with r q and r 1 q during reasoning.",
"During training, we set number of rollouts to 20 for each training sample and update the posterior distribution using Monte-Carlo REINFORCE (Williams, 1992) algorithm.",
"During testing, we use a beam of 5 to approximate the whole search space for path finder.",
"We follow MINERVA (Das et al., 2018) to set the maximum reasoning length to 3, which lowers the burden for the path-reasoner model.",
"For both datasets, we set the embedding size E to 200, the history embedding size H to 200, the convolution kernel feature size D to 128, we set the hidden size of MLP for both path finder and path reasoner to 400.",
"2 https://github.com/xwhan/DeepPath 1828 Dataset #Ent #R #Triples #Tasks FB15k-237 14,505 237 310,116 20 NELL-995 75,492 200 154,213 12 Table 1: Dataset statistics.",
"We mainly compare with the embedding-based algorithms (Bordes et al., 2013; Lin et al., 2015; Ji et al., 2015; Wang et al., 2014), PRA (Lao et al., 2011), MINERVA (Das et al., 2018), DeepPath (Xiong et al., 2017) and Chain-of-Reasoning (Das et al., 2016), besides, we also take our standalone CNN path-reasoner from DIVA .",
"Besides, we also try to directly maximize the marginal likelihood p ( r | e s , e d ) = PL p ( L | e s , e d ) p ( r | L ) using only the prior and likelihood model following MML (Guu et al., 2017b), which enables us to understand the superiority of introducing an approximate posterior.",
"Here we first report our results for NELL-995 in Table 2, which is known to be a simple dataset and many existing algorithms already approach very significant accuracy.",
"Then we test our methods in FB15k (Toutanova et al., 2015) and report our results in Table 3, which is much harder than NELL and arguably more relevant for real-world scenarios.",
"Besides, we also evaluate our model on FB-15k 20-relation subset with HITS@N score.",
"Since our model only deals with the relation classification problem ( e s , ? , e d ) with e d as input, so it's hard for us to directly compare with MINERVA (Das et al., 2018).",
"However, here we compare with chain-RNN (Das et al., 2016) and CNN Path-Reasoner model, the results are demonstrated as Table 4.",
"Please note that the HITS@N score is computed against relation rather than entity.",
"Result Analysis We can observe from the above tables Table 3 and Table 2 that our algorithm has significantly outperformed most of the existing algorithms and achieves a very similar result as MINERVA (Das et al., 2018) on NELL dataset and achieves state-of-the-art results on FB15k.",
"We conclude that our method is able to deal with more complex reasoning scenarios and is more robust to the adversarial examples.",
"Besides, we also observe that our CNN Path-Reasoner can outperform the RNN-Chain (Das et al., 2016) on both datasets, we speculate that it is due to the short lengths of reasoning chains, which can extract more useful information from the reasoning chain.",
"From these two pie charts in Figure 5, we can observe that in NELL-995, very few errors are coming from the path reasoner since the path length is very small.",
"A large proportion only contains a single hop.",
"In contrast, most of the failures in the FB15k dataset are coming from the path reasoner, which fails to classify the multi-hop chain into correct relation.",
"This analysis demonstrates that FB15k is much harder dataset and may be closer to real-life scenarios.",
"Here we are especially interested in studying the impact of different beam sizes in the link prediction tasks.",
"With larger beam size, the path finder can obtain more linking paths, meanwhile, more noises are introduced to pose greater challenges for the path reasoner to infer the relation.",
"With smaller beam size, the path finder will struggle to find connecting paths between positive entity 1829 0 20 40 60 80 100 1 5 10 20 30 40 50 MAPBEAM SIZEMAP VS. BEAM SIZENELL-MAP FB15k-MAP 0 0.010.020.030.040.050.060.07 1 5 10 20 30 40 50 PERCENTBEAM SIZEERROR TYPE RATIO VS. BEAM SIZE (NELL) Neg>Pos=0 (Type 1) Neg>Pos>0 (Type 2) Neg=Pos=0 (Type 3) 0 0.1 0.2 0.3 0.4 0.5 1 5 10 20 30 40 PERCENTBEAM SIZEERROR TYPE RATIO VS. BEAM SIZE (FB15K) Neg>Pos=0 (Type 1) Neg>Pos>0 (Type 2) Neg=Pos=0 (Type 3) Figure 4: MAP results varying beam size and the error type's occurrence w.r.t to beam size.",
"pairs, meanwhile eliminating many noisy links.",
"Therefore, we first mainly summarize three different types and investigate their changing curve under different beam size conditions:",
"1. No paths are found for positive samples, while paths are found for negative samples, which we denote as Neg > Pos=0.",
"2. Both positive samples and negative samples found paths, but the reasoner assigns higher scores to negative samples, which we denote as Neg > Pos",
"> 0. 3.",
"Both negative and positive samples are not able to find paths in the knowledge graph, which we denote as Neg=Pos=0.",
"We draw the curves for MAP and error ratios in Figure 4 and we can easily observe the tradeoffs, we found that using beam size of 5 can bal-ance the burden of path-finder and path-reasoner optimally, therefore we keep to this beam size for the all the experiments.",
"In order to investigate the bottleneck of DIVA , we take a subset from validation dataset to summarize the causes of different kinds of errors.",
"Roughly, we classify errors into three categories, 1) KG noise: This error is caused by the KG itself, e.g some important relations are missing; some entities are duplicate; some nodes do not have valid outgoing edges.",
"2) Path-Finder error: This error is caused by the path finder, which fails to arrive destination.",
"3) Path-Reasoner error: This error 1830 is caused by the path reasoner to assign a higher score to negative paths.",
"Here we draw two pie charts to demonstrate the sources of reasoning errors in two reasoning tasks.",
"We also show some failure samples in Table 5 to help understand where the errors are coming from.",
"We can conclude that the duplicate entity and missing entity problems are mainly caused by the knowledge graph or the dataset, and the link prediction model has limited capability to resolve that.",
"In contrast, the wrong reasoning problem is mainly caused by the reasoning model itself and can be improved with better algorithms.",
"In this paper, we propose a novel variational inference framework for knowledge graph reasoning.",
"In contrast to prior studies that use a random walk with restarts (Lao et al., 2011) and explicit reinforcement learning path finding (Xiong et al., 2017), we situate our study in the context of variational inference in latent variable probabilistic graphical models.",
"Our framework seamlessly integrates the path-finding and path-reasoning processes in a unified probabilistic framework, leveraging the strength of neural network based representation learning methods.",
"Empirically, we show that our method has achieved the state-of-the-art performances on two popular datasets.",
"The authors would like to thank the anonymous reviewers for their thoughtful comments.",
"This research was sponsored in part by the Army Research Laboratory under cooperative agreements W911NF09-2-0053 and NSF IIS 1528175.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein."
] | [
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"method",
"method",
"other",
"abstain",
"method",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other"
] |
[
"Structured document understanding has attracted considerable attention and made significant progress recently, owing to its crucial role in intelligent document processing.",
"However, most existing related models can only deal with the document data of specific language(s) (typically English) included in the pre-training collection, which is extremely limited.",
"To address this issue, we propose a simple yet effective L anguagei ndependent L ayout T ransformer ( LiLT ) for structured document understanding.",
"LiLT can be pre-trained on the structured documents of a single language and then directly fine-tuned on other languages with the corresponding off-the-shelf monolingual/multilingual pre-trained textual models.",
"Experimental results on eight languages have shown that LiLT can achieve competitive or even superior performance on diverse widely-used downstream benchmarks, which enables language-independent benefit from the pre-training of document layout structure.",
"Code and model are publicly available at https://github.com/jpWang/LiLT.",
"Structured document understanding (SDU) aims at reading and analyzing the textual and structured information contained in scanned/digital-born documents.",
"With the acceleration of the digitization process, it has been regarded as a crucial part of intelligent document processing and required by many real-world applications in various industries such as finance, medical treatment and insurance.",
"Recently, inspired by the rapid development of pre-trained language models of plain texts (Devlin et al., 2019; Liu et al., 2019b; Bao et al., 2020; Chi et al., 2021), many researches on structured document pre-training (Xu et al., 2020, 2021a,b; Li et al., 2021a,b,c; Appalaraju et al., 2021) have also Corresponding author.",
"pushed the limit of a variety of SDU tasks.",
"However, almost all of them only focus on pre-training and fine-tuning on the documents in a single language, typically English.",
"This is extremely limited for other languages, especially in the case of lacking pre-training structured document data.",
"In this regard, we consider how to make the SDU tasks enjoy language-independent benefit from the pre-training of document layout structure.",
"Here, we give an observation as shown in Figure 1.",
"When the layout structure remains unchanged, the substitution of language does not make obvious unnaturalness.",
"It fully motivates us to decouple and reuse the layout invariance among different languages.",
"Based on this inspiration, in this paper, we propose a simple yet effective L anguagei ndependent L ayout T ransformer ( LiLT ) for structured document understanding.",
"In our framework, the text and layout information are first decoupled and joint-optimized during pre-training, and then re-coupled for fine-tuning.",
"To ensure that the two modalities have sufficient language-independent interaction, we further propose a novel bi-directional attention complementation mechanism (BiACM) to enhance the cross-modality cooperation.",
"Moreover, we present the key point location (KPL) and cross-modal alignment identification (CAI) tasks, which are combined with the widely-used masked visual-7747 language modeling (MVLM) to serve as our pretraining objectives.",
"During fine-tuning, the layout flow (LiLT) can be separated and combined with the off-the-shelf pre-trained textual models (such as RoBERTa (Liu et al., 2019b), XLM-R (Conneau et al., 2020), InfoXLM (Chi et al., 2021), etc) to deal with the downstream tasks.",
"In this way, our method decouples and learns the layout knowledge from the monolingual structured documents before generalizing it to the multilingual ones.",
"To the best of our knowledge, the only preexisting multilingual SDU model is LayoutXLM (Xu et al., 2021b).",
"It scraps multilingual PDF documents of 53 languages from a web crawler and introduces extra pre-processing steps to clean the collected data, filter the low-quality documents, and classify them into different languages.",
"After this, it utilizes a heuristic distribution to sample 22 million multilingual documents, which are further combined with the 8 million sampled English ones from the IIT-CDIP (Lewis et al., 2006) dataset (11 million English documents), resulting 30 million for pre-training with the LayoutLMv2 (Xu et al., 2021a) framework.",
"However, this process is time-consuming and laborious.",
"On the contrary, LiLT can be pre-trained with only IIT-CDIP and then adapted to other languages.",
"In this respect, LiLT is the first language-independent method for structured document understanding.",
"Experimental results on eight languages have shown that LiLT can achieve competitive or even superior performance on diverse widely-used downstream benchmarks, which substantially benefits numerous real-world SDU applications.",
"Our main contributions can be summarized as follows: We introduce a simple yet effective language-independent layout Transformer called LiLT for monolingual/multilingual structured document understanding.",
"We propose BiACM to provide language-independent cross-modality interaction, along with an effective asynchronous optimization strategy for textual and non-textual flows in pre-training.",
"Moreover, we present two new pre-training objectives, namely KPL and CAI.",
"LiLT achieves competitive or even superior performance on various widely-used downstream benchmarks of different languages under different settings, which fully demonstrates its effectiveness.",
"Figure 2 shows the overall illustration of our method.",
"Given an input document image, we first use off-the-shelf OCR engines to get text bounding boxes and contents.",
"Then, the text and layout information are separately embedded and fed into the corresponding Transformer-based architecture to obtain enhanced features.",
"Bi-directional attention complementation mechanism (BiACM) is introduced to accomplish the cross-modality interaction of text and layout clues.",
"Finally, the encoded text and layout features are concatenated and additional heads are added upon them, for the self-supervised pre-training or the downstream fine-tuning.",
"The whole framework can be regarded as a parallel dual-stream Transformer.",
"The layout flow shares a similar structure as text flow, except for the reduced hidden size and intermediate size to achieve computational efficiency.",
"Following the common practice (Devlin et al., 2019; Xu et al., 2020), in the text flow, all text strings in the OCR results are first tokenized and concatenated as a sequence S t by sorting the corresponding text bounding boxes from the top-left to bottom-right.",
"Intuitively, the special tokens [CLS] and [SEP] are also added at the beginning and end of the sequence respectively.",
"After this, S t will be truncated or padded with extra [PAD] tokens until its length equals the maximum sequence length N .",
"Finally, we sum the token embedding E token of S t and the 1D positional embedding P 1D to obtain the text embedding ET RN d T as: ET = LN( E token + P 1D ) , (1) where d T is the number of text feature dimension and LN is the layer normalization (Ba et al., 2016).",
"As for the layout flow, we construct a 2D position sequence S l with the same length as the token sequence S t using the corresponding text bounding boxes.",
"To be specific, we normalize and dis-cretize all box coordinates to integers in the range [0 , 1000] , and use four embedding layers to generate x -axis, y -axis, height, and width features separately.",
"Given the normalized bounding boxes B = ( x min , x max , y min , y max , width, height ) , the 2D 7748 MatMul MatMul MaskOut SoftMax QTKTVT Transformer Layer i MatMul MatMul MaskOut SoftMax QLKLVL Transformer Layer i Token Embedding 1D Position Embedding + + + + + + + + t 1 t M t 2 t 4 t R t 7 t 6 t 8 1 3 2 4 5 7 6 8 2D Position Embedding 1D Position Embedding + + + + + + + + b 1 b 3 b R b 4 b 5 b 7 b 6 b M 1 3 2 4 5 7 6 8 + + Scale The Text Flow (RoBERTa/XLM-R/InfoXLM/...) The Layout Flow (LiLT) || Pre-training Objectives MaskedVisual-Language Modeling t 3 -t 5 t 7 --r 2 r 4 -r 8 Key Point Location Cross-modalAlignment Identification (0:Mis-aligned, 1:Aligned) -0 1 0 1 -1 3 2 4 5 7 6 8 Fine-tuning Tasks Semantic Entity Recognition (H:Header, Q:Question, A:Answer, O:Other) O O H Q A Q O A -------------0 ---0 1 -------0 0 0 --0 0 0 1 -Relation Extraction (0:None, 1:Key-Value Pair) OCR Engines Scale N l + || Concatenate Add Detach (only exists in pre-training) BiACM Figure 2: The overall illustration of our framework.",
"positional embedding P 2D RN d L (where d L is the number of layout feature dimension) is constructed as follows: P 2D = Linear (CAT( E x min , E x max , E y min , E y max ,E width , E height )) .",
"Here, the E s are embedded vectors.",
"Linear is a linear projection layer and CAT is the channel-wise concatenation operation.",
"The special tokens [CLS] , [SEP] and [PAD] are also attached with (0,0,0,0,0,0), (1000,1000,1000,1000,0,0) and (0,0,0,0,0,0) respectively.",
"It is worth mentioning that, for each token, we directly utilize the bounding box of the text string it belongs to, because the fine-grained token-level information is not always included in the results of some OCR engines.",
"Since Transformer layers are permutation-invariant, here we introduce the 1D positional embedding again.",
"The resulting layout embedding EL RN d L can be formulated as: EL = LN( P 2D + P 1D ) .",
"The text embedding ET and layout embedding EL are fed into their respective sub-models to generate high-level enhanced features.",
"However, it will considerably ignore the cross-modal interaction process if we simply combine the text and layout features at the encoder output only.",
"The network also needs to comprehensively analyse them at earlier stages.",
"In view of this, we propose a new bi-directional attention complementation mechanism (BiACM) to strengthen the cross-modality interaction across the entire encoding pipeline.",
"Experiments in Section 3.2 will further verify its effectiveness.",
"The vanilla self-attention mechanism in Transformer layers captures the correlation between query x i and key x j by projecting the two vectors and calculating the attention score as: ij = ( x i WQ )( x j WK ) d h .",
"Here, the description is for a single head in a single self-attention layer with hidden size of d h and projection metrics WQ , WK for simplicity.",
"Given Tij and Lij of the text and layout flows located in the same head of the same layer, BiACM shares them as common knowledge, which is formulated as: (cid:102) Tij = Lij + Tij , (5) (cid:102) Lij = (cid:40) Lij + DETACH( Tij ) if Pre train , Lij + Tij if Fine tune .",
"In order to maintain the ability of LiLT to cooperate with different off-the-shelf text models in fine-tuning as much as possible, we heuristically adopt the detached Tij for (cid:102) Lij , so that the textual stream will not be affected by the gradient of non-textual",
"one during pre-training, and its overall consistency can be preserved.",
"Finally, the modified attention scores are used to weight the projected value vectors for subsequent modules in both flows.",
"We conduct three self-supervised pre-training tasks to guide the model to autonomously learn joint representations with cross-modal cooperation.",
"The details are introduced below.",
"This task is originally derived from (Devlin et al., 2019).",
"MVLM randomly masks some of the input tokens and the model is asked to recover them over the whole vocabulary using the output encoded features, driven by a cross-entropy loss.",
"Meanwhile, the non-textual information remains unchanged.",
"MVLM improves model learning on the language side with cross-modality information.",
"The given layout embedding can also help the model better capture both interand intra-sentence relationships.",
"We mask 15% text tokens, among which 80% are replaced by the special token [MASK] , 10% are replaced by random tokens sampled from the whole vocabulary, and 10% remain the same.",
"We propose this task to make the model better understand layout information in the structured documents.",
"KPL equally divides the entire layout into several regions (we set 7 7=49 regions by default) and randomly masks some of the input bounding boxes.",
"The model is required to predict which regions the key points (top-left corner, bottom-right corner, and center point) of each box belong to using separate heads.",
"To deal with it, the model is required to fully understand the text content and know where to put a specific word/sentence when the surrounding ones are given.",
"We mask 15% boxes, among which 80% are replaced by (0,0,0,0,0,0), 10% are replaced by random boxes sampled from the same batch, and 10% remain the same.",
"Cross-entropy loss is adopted.",
"Since there may exist detection errors in the output of OCR engines, we let the model predict the discretized regions (as mentioned above) instead of the exact location.",
"This strategy can moderately relax the punishment criterion while improving the model performance.",
"We collect those encoded features of token-box pairs that are masked and further replaced (mis-aligned) or kept unchanged (aligned) by MVLM and KPL, and build an additional head upon them to identify whether each pair is aligned.",
"To achieve this, the model is required to learn the cross-modal perception capacity.",
"CAI is a binary classification task, and a cross-entropy loss is applied for it.",
"Utilizing a unified learning rate for all model parameters to perform the end-to-end training process is the most common optimization strategy.",
"While in our case, it will cause the layout flow to continuously update in the direction of coupling with the evolving text flow in the pre-training stage, which is harmful to the ability of LiLT to cooperate with different off-the-shelf textual models during fine-tuning.",
"Based on this consideration, we explore multiple ratios to greatly slow down the pre-training optimization of the text stream.",
"We also find that an appropriate reduction ratio is better than parameter freezing.",
"Note that, we adopt a unified learning rate for end-to-end optimization during fine-tuning.",
"The DETACH operation of BiACM is also canceled at this time, as shown in Equation 6. 3 Experiments 3.1 Pre-training Setting We pre-train LiLT on the IIT-CDIP Test Collection 1.0 (Lewis et al., 2006), which is a large-scale scanned document image dataset and contains more than 6 million documents with more than 11 million scanned document images.",
"We use TextIn API 1 to obtain the text bounding boxes and strings for this dataset.",
"In this paper, we initialize the text flow from the existing pre-trained English RoBERTa BASE (Liu et al., 2019b) for our document pre-training, and combine LiLT BASE with the pre-trained InfoXLM BASE (Chi et al., 2021)/a new pre-trained RoBERTa BASE for multilingual/monolingual fine-tuning.",
"They have an equal number of self-attention layers, attention heads and maximum sequence length, which ensures that BiACM can work normally.",
"In this BASE setting, LiLT has a 12-layer encoder with 192 hidden size, 768 feed-forward filter size and 12 attention heads, resulting 1 https://www.textin.com 7750 # Inter-modal Operation Average F1 1 CAT 0.6751 2 CAT + Co-Attention (Lu et al., 2019) 0.6276 3 CAT + BiACM 0.7963 4 CAT + BiACM DETACH in pre-training 0.7682 5 CAT + BiACM + DETACH in fine-tuning 0.7822 6 The text flow alone (InfoXLM BASE , as shown in Table 6) 0.7207",
"LiLT BASE is pre-trained using Adam optimizer (Kingma and Ba, 2015; Loshchilov and Hutter, 2018), with the learning rate 2 10 5 , weight decay 1 10 2 , and ( 1 , 2 ) = (0.9, 0.999).",
"The learning rate is linearly warmed up over the first 10% steps and then linearly decayed.",
"We set the batch size as 96 and train LiLT BASE for 5 epochs on the IIT-CDIP dataset using 4 NVIDIA A40 48GB GPUs.",
"Considering that the complete pre-training takes a relatively long time, we pre-train LiLT BASE with 2M documents randomly sampled from IIT-CDIP for 5 epochs to conduct ablation experiments, as shown in Table 1.",
"We first evaluate the effect of introducing BiACM.",
"In setting",
"(a)#1, the text and layout features are concatenated at the model output without any further interaction.",
"Compared with",
"(a)#6, Model Precision Recall F1 BERT BASE1 0.5469 0.6710 0.6026 RoBERTa BASE2 0.6349 0.6975 0.6648 UniLMv2 BASE3 0.6349 0.6975 0.6648 LayoutLM BASE4 0.7597 0.8155 0.7866 BROS BASE5 0.8056 0.8188 0.8121 SelfDoc 6 -0.8336 LayoutLMv2 BASE7 0.8029 0.8539 0.8276 StrucTexT BASE8 0.8568 0.8097 0.8309 DocFormer BASE9 0.8076 0.8609 0.8334 LayoutXLM BASE10 0.7913 0.8158 0.8034 LiLT[ EN-R 2 ] BASE 0.8721 0.8965 0.8841 LiLT[ InfoXLM 11 ] BASE 0.8467 0.8709 0.8586 Table 2: Comparison on the semantic entity recognition (SER) task of FUNSD (Jaume et al., 2019) dataset.",
"we find that such a plain design results in a much worse performance than using the text flow alone.",
"From",
"(a)#1 to",
"(a)#3, the significant improvement demonstrates that it is the novel BiACM that makes the transfer from monolingual to multilingual successful.",
"Beside this, we have also tried to replace BiACM with the co-attention mechanism (Lu et al., 2019) which is widely adopted in dual-stream Transformer architecture.",
"It can be seen as a deeper cross-modal interaction, since the keys and values from each modality are passed as input to the other modality's dot-product attention calculation.",
"However, severe drops are observed as shown in",
"(a)#2 vs",
"(a)#1#3.",
"We attribute it to the damage of such a deeper interaction to the overall consistency of the text flow in the pre-training optimization.",
"In contrast, BiACM can maintain LiLT's cross-model cooperation ability on the basis of providing cross-modal information.",
"Moreover, the necessity of DETACH in pre-training is proved in",
"(a)#4 vs",
"(a)#3.",
"Compared",
"(a)#3 to",
"(a)#5, we can also infer that removing DETACH in fine-tuning leads to a better performance.",
"Then, we compare the proposed KPL and CAI tasks.",
"As shown in Table",
"1(b), both tasks improve the model performance substantially, and the proposed CAI benefits the model more than KPL.",
"Using both tasks together is more effective than using either one alone.",
"Finally, we explore the most suitable slow-down ratio for the pre-training optimization of the text flow.",
"A ratio equal to 1 in",
"(c)#1 means there is no slow-down and a unified learning rate is adopted.",
"It can be found that the F1 scores keep rising with the growth of slow-down ratios and begin to fall when the ratio is greater than 1000.",
"Consequently, we set the slow-down ratio as 1000 by default.",
"To demonstrate the performance of LiLT, we conduct experiments on several widely-used monolingual datasets and the multilingual XFUND benchmark (Xu et al., 2021b).",
"In addition to the experiments involving typical language-specific fine-tuning, we also follow the two settings designed Model Accuracy VGG-16 1 90.97% Stacked CNN Single 2 91.11% Stacked CNN Ensemble 2 92.21% InceptionResNetV2 3 92.63% LadderNet 4 92.77% Multimodal Single 5 93.03% Multimodal Ensemble 5 93.07% BERTBASE 89.81% UniLMv2 BASE 90.06% LayoutLM BASE (w/ image) 94.42% BROSBASE 95.58% SelfDoc 93.81% TILTBASE 93.50% LayoutLMv2 BASE 95.25% DocFormer BASE 96.17% LayoutXLM BASE 95.21% LiLT[ EN-R ] BASE 95.68% LiLT[ InfoXLM ] BASE 95.62% Table 5: Comparison on the document classification (DC) task of RVL-CDIP (Harley et al., 2015) dataset.",
"in (Xu et al., 2021b) to demonstrate the ability to transfer knowledge among different languages, which are zero-shot transfer learning and multitask fine-tuning, for fair comparisons.",
"Specifically, (1) language-specific fine-tuning refers to the typical fine-tuning paradigm of fine-tuning on language X and testing on language X. (2) Zero-shot transfer learning means the models are fine-tuned on English data only and then evaluated on each target language.",
"(3) Multitask fine-tuning requires the model to fine-tune on data in all languages.",
"We first evaluate LiLT on four widely-used monolingual datasets FUNSD (Jaume et al., 2019), CORD (Park et al., 2019), EPHOIE (Wang et al., 2021a) and RVL-CDIP (Lewis et al., 2006), and the results are shown in Table 2, 3, 4 and 5. We have found that (1) LiLT is flexible since it can work with monolingual or multilingual plain text models to deal with downstream tasks.",
"(2) Although LiLT is designed for the transfer from monolingual to multilingual, it can surprisingly cooperate with monolingual textual models to achieve competitive or even superior performance (especially on the FUNSD dataset with only a few training samples available), compared with existing language-specific SDU models such as LayoutLMv2 and 7752 Task Model Pre-training Docs FUNSD XFUND Avg.",
"DocFormer.",
"(3) On these datasets which are widely adopted for monolingual evaluation, LiLT generally performs better than LayoutXLM.",
"This fully demonstrates the effectiveness of our pre-training framework and indicates that the layout and text information can be successfully decoupled in pretraining and re-coupled in fine-tuning.",
"Then we evaluate LiLT on language-specific fine-tuning tasks of FUNSD and the multilingual XFUND (Xu et al., 2021b), and the results are shown in Table 6. Compared with the plain text models (XLM-R/InfoXLM) or the LayoutXLM model pre-trained with 30M multilingual structured documents, LiLT achieves the highest F1 scores on both the SER and RE tasks of each language while using 11M monolingual data.",
"This significant improvement shows LiLT's capability to transfer language-independent knowledge from pre-training to downstream tasks.",
"The results of cross-lingual zero-shot transfer are presented in Table 7. It can be observed that the LiLT model transfers the most knowledge from English to other languages, and significantly outperforms its competitors.",
"This fully verifies that LiLT can capture the common layout invariance among different languages.",
"Moreover, LiLT has never seen non-English documents before evaluation under this setting, while the LayoutXLM model has been pre-trained with them.",
"This is to say, LiLT faces a stricter cross-lingual zero-shot transfer scenario but achieves better performance.",
"Table 8 shows the results of multitask learning.",
"In this setting, the pre-trained LiLT model is simultaneously fine-tuned with all eight languages and evaluated for each specific language.",
"We observe that this setting further improves the model performance compared to the language-specific fine-tuning, which confirms that SDU can benefit from commonalities in the layout of multilingual structured documents.",
"In addition, LiLT once again outperforms its counterparts by a large margin.",
"During the past decade, deep learning methods became the mainstream for document understanding tasks (Yang et al., 2017; Augusto Borges Oliveira et al., 2017; Siegel et al., 2018).",
"Grid-based methods (Katti et al., 2018; Denk and Reisswig, 2019; Lin et al., 2021) were proposed for 2D document representation where text pixels were encoded using character or word embeddings and classified into specific field types, using a convolutional neural network.",
"GNN-based approaches (Liu et al., 2019a; Yu et al., 2021; Tang et al., 2021) adopted multi-modal features of text segments as nodes to model the document graph, and used graph neural networks to propagate information between neighboring nodes to attain a richer representation.",
"In recent years, self-supervised pre-training has achieved great success.",
"Inspired by the development of the pre-trained language models in various NLP tasks, recent studies on structured document pre-training (Xu et al., 2020, 2021a,b; Li et al., 2021a,b,c; Appalaraju et al., 2021) have pushed the limits.",
"LayoutLM (Xu et al., 2020) modified the BERT (Devlin et al., 2019) architecture by adding 2D spatial coordinate embeddings.",
"In comparison, our LiLT can be regarded as a more powerful and flexible solution for structured document understanding.",
"LayoutLMv2 (Xu et al., 2021a) improved over LayoutLM by treating the visual fea-7753 Task Model Pre-training Docs FUNSD XFUND Avg.",
"tures as separate tokens.",
"Furthermore, additional pre-training tasks were explored to improve the utilization of unlabeled document data.",
"SelfDoc (Li et al., 2021b) established the contextualization over a block of content, while StructuralLM (Li et al., 2021a) proposed cell-level 2D position embeddings and the corresponding pre-training objective.",
"Recently, StrucTexT (Li et al., 2021c) introduced a unified solution to efficiently extract semantic features from different levels and modalities to handle the entity labeling and entity linking tasks.",
"DocFormer (Appalaraju et al., 2021) designed a novel multi-modal self-attention layer capable of fusing textual, vision and spatial features.",
"Nevertheless, the aforementioned SDU approaches mainly focus on a single language typically English, which is extremely limited with respect to multilingual application scenarios.",
"To the best of our knowledge, LayoutXLM (Xu et al., 2021b) was the only pre-existing multilingual SDU model, which adopted the multilingual textual model InfoXLM (Chi et al., 2021) as the initialization, and adapted the LayoutLMv2 (Xu et al., 2021a) framework to multilingual structured document pre-training.",
"However, it required a heavy process of multilingual data collection, cleaning and pre-training.",
"On the contrary, our LiLT can deal with the multilingual structured documents by pre-training on the monolingual IIT-CDIP Test Collection 1.0 (Lewis et al., 2006) only.",
"In this paper, we present LiLT, a language-independent layout Transformer that can learn the layout knowledge from monolingual structured documents and then generalize it to deal with multilingual ones.",
"Our framework successfully first decouples the text and layout information in pre-training and then re-couples them for fine-tuning.",
"Experimental results on eight languages under three settings (language-specific, cross-lingual zero-shot transfer, and multi-task fine-tuning) have fully illustrated its effectiveness, which substantially bridges the language gap in real-world structured document understanding applications.",
"The public availability of LiLT is also expected to promote the development of document intelligence.",
"For future research, we will continue to follow the pattern of transferring from monolingual to multilingual and further unlock the power of LiLT.",
"In addition, we will also explore the generalized rather than language-specific visual information contained in multilingual structured documents.",
"This research is (Grant No.:",
"supported in part by NSFC 61936003) and GD-NSF (No.",
"BERT-grid: Contextualized embedding for 2D document representation and understanding.",
"In Workshop on Document Intelligence at NeurIPS .",
"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.",
"2019.",
"BERT: Pre-training of deep bidirectional Transformers for language understanding.",
"In NAACL-HLT , pages 41714186.",
"ukasz Garncarek, Rafa Powalski, Tomasz Stanisawek, Bartosz Topolski, Piotr Halama, and Filip Gralinski.",
"2021.",
"LAMBERT: Layout-aware (language) modeling using BERT for information extraction.",
"In ICDAR .",
"Adam W Harley et al. 2015.",
"Evaluation of deep convolutional nets for document image classification and retrieval.",
"In ICDAR , pages 991995.",
"Teakgyu Hong, DongHyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, and Sungrae Park.",
"2020.",
"BROS: A pre-trained language model for understanding texts in document.",
"Guillaume Jaume et al. 2019.",
"FUNSD: A dataset for form understanding in noisy scanned documents.",
"In ICDAR , volume 2, pages 16.",
"Diederik P Kingma and Jimmy Ba.",
"2015.",
"Adam: A method for stochastic optimization.",
"In ICLR .",
"Guillaume Lample, Miguel Ballesteros, Sandeep Sub-ramanian, Kazuya Kawakami, and Chris Dyer.",
"2016.",
"Neural architectures for named entity recognition.",
"In NAACL-HLT , pages 260270.",
"David Lewis, Gady Agam, Shlomo Argamon, Ophir Frieder, David Grossman, and Jefferson Heard.",
"2006.",
"Building a test collection for complex document information processing.",
"In ACM SIGIR , pages 665 666.",
"Chenliang Li, Bin Bi, Ming Yan, Wei Wang, Songfang Huang, Fei Huang, and Luo Si.",
"2021a.",
"StructuralLM: Structural pre-training for form understanding.",
"In ACL .",
"Peizhao Li, Jiuxiang Gu, Jason Kuen, Vlad I Morariu, Handong Zhao, Rajiv Jain, Varun Manjunatha, and Hongfu Liu.",
"2021b.",
"SelfDoc: Self-supervised document representation learning.",
"In CVPR , pages 5652 5660.",
"2017A030312006).",
"References Muhammad Zeshan Afzal, Andreas Klsch, Sheraz Ahmed, and Marcus Liwicki.",
"2017.",
"Cutting the error by half: Investigation of very deep CNN and advanced training strategies for document image classification.",
"In ICDAR , volume 1, pages 883888.",
"Srikar Appalaraju, Bhavan Jasani, Bhargava Urala Kota, Yusheng Xie, and R Manmatha.",
"2021.",
"DocFormer: End-to-end Transformer for document understanding.",
"In ICCV .",
"Dario Augusto Borges Oliveira et al. 2017.",
"Fast CNN-based document layout analysis.",
"In ICCV Workshop , pages 11731180.",
"Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton.",
"2016.",
"Layer normalization.",
"arXiv preprint arXiv:1607.06450 .",
"Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Jianfeng Gao, Song-hao Piao, Ming Zhou, et al. 2020.",
"UniLMv2: Pseudo-masked language models for unified language model pre-training.",
"In ICML , pages 642652.",
"Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, He-Yan Huang, and Ming Zhou.",
"2021.",
"InfoXLM: An information-theoretic framework for cross-lingual language model pre-training.",
"In NAACL-HLT , pages 35763588.",
"Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmn, douard Grave, Myle Ott, Luke Zettle-moyer, and Veselin Stoyanov.",
"2020.",
"Unsupervised cross-lingual representation learning at scale.",
"In ACL , pages 84408451.",
"Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu.",
"2020.",
"Revisiting pre-trained models for Chinese natural language processing.",
"In Findings of EMNLP , pages 657668.",
"Arindam Das, Saikat Roy, Ujjwal Bhattacharya, and Swapan K Parui.",
"2018.",
"Document image classification with intra-domain transfer learning and stacked generalization of deep convolutional neural networks.",
"In ICPR , pages 31803185.",
"Tyler Dauphinee, Nikunj Patel, and Mohammad Rashidi.",
"2019.",
"Modular multimodal architecture for document classification.",
"arXiv preprint arXiv:1912.04376 ."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Current dialogue summarization systems usually encode the text with a number of general semantic features (e.g., keywords and topics) to gain more powerful dialogue modeling capabilities.",
"However, these features are obtained via open-domain toolkits that are dialog-agnostic or heavily relied on human annotations.",
"In this paper, we show how DialoGPT (Zhang et al., 2020b), a pre-trained model for conversational response generation, can be developed as an unsupervised dialogue annotator, which takes advantage of dialogue background knowledge encoded in DialoGPT.",
"We apply DialoGPT to label three types of features on two dialogue summarization datasets, SAMSum and AMI, and employ pre-trained and non pre-trained models as our summarizers.",
"Experimental results show that our proposed method can obtain remarkable improvements on both datasets and achieves new state-of-the-art performance on the SAMSum dataset 1 .",
"Dialogue summarization aims to generate a succinct summary while retaining essential information of the dialogue (Gurevych and Strube, 2004; Chen and Yang, 2020).",
"Theoretically, Peyrard (2019) point out that a good summary is intuitively related to three aspects, including Informativeness , Redundancy and Relevance .",
"To this end, previous works have taken the above three aspects into account by incorporating auxiliary annotations into the dialogue.",
"To improve informativeness, some works annotated linguistically specific words (e.g., nouns and verbs), domain terminologies and topic words in the dialogue (Riedhammer et al., 2008; Koay et al., 2020; Zhao et al., 2020).",
"To reduce redundancy, some works Corresponding author.",
"used sentence similarity-based methods to annotate redundant utterances.",
"(Zechner, 2002; Murray et al., 2005).",
"To improve relevance, some works annotated topics for the dialogue (Li et al., 2019; Liu et al., 2019; Chen and Yang, 2020).",
"However, these annotations are usually obtained via open-domain toolkits, which are not suitable for dialogues, or require manual annotations, which are labor-consuming.",
"To alleviate the above problem, we explore the pre-trained language model as an unsupervised annotator to automatically provide annotations for the dialogue.",
"Recently, some works have investigated the use of pre-trained language models in an unsupervised manner.",
"For example, Sainz and Rigau (2021) exploited pre-trained models for assigning domain labels to WordNet synsets.",
"The successful recipe is that a model is obtained extensive knowledge via pre-training on a huge volume of data.",
"When it comes to the dialogue domain, DialoGPT (Zhang et al., 2020b) is a SOTA conversational response generation model, which is pre-trained on the massive dialogue data.",
"Therefore, we draw support from DialoGPT and present our DialoGPT annotator , which can perform three dialogue annotation tasks, including keywords extraction, redundancy detection and topic segmentation, to measure informativeness, redundancy and relevance of the input dialogue, respectively.",
"Keywords Extraction aims to automatically identify important words in the dialogue (shown in Figure",
"1(a)).",
"Our DialoGPT annotator extracts unpredictable words as keywords.",
"We assume that keywords contain high information, which are difficult to be predicted considering both background knowledge encoded in the DialoGPT and contextual information of dialogue context.",
"Redundancy Detection aims to detect redundant utterances that have no core contribution to the overall meaning of the dialogue (shown in Figure",
"annotator detects utterances that are useless for dialogue context representation as redundant.",
"We assume that if adding a new utterance does not change the dialogue context representation, then this utterance has no effect on predicting the response, so it is redundant.",
"Topic Segmentation aims to divide a dialogue into topically coherent segments (shown in Figure",
"1(c)).",
"Our DialoGPT annotator inserts a topic segmentation point before one utterance if it is unpredictable.",
"We assume that if an utterance is difficult to be inferred from the dialogue context based on DialoGPT, this utterance may belong to a new topic.",
"We use our DialoGPT annotator to annotate the SAMSum (Gliwa et al., 2019) and AMI (Carletta et al., 2005) datasets.",
"Each annotation is converted into a specific identifier and we insert them into the dialogue text.",
"Then, we employ pre-traind BART (Lewis et al., 2020) and non pre-trained PGN (See et al., 2017) as our summarizers.",
"Extensive experimental results show that our method can obtain consistent and remarkable improvements over strong baselines on both datasets and achieves new state-of-the-art performance on the SAMSum dataset.",
"In this section, we will describe the task definition as well as the background of DialoGPT.",
"Given an input dialogue D , a dialogue summarizer aims to produce a condensed summary S , where D consists of |D| utterances [ u 1 , u 2 , ...u |D| ] and S consists of |S| words [ s 1 , s 2 , ...s |S| ] .",
"Each utterance u i is compose of a sequence of words [ u i, 1 , u i, 2 , ...u i, | u i | , EOS i ] , where i [1 : |D| ] and EOS i indicates the end of the utterance.",
"Besides, each utterance u i associates with a speaker p i .",
"Thus, this task can be formalized as producing the summary S given the dialogue sequence: D = [ p 1 , u 1 , 1 , ..., EOS 1 , ..., p |D| , u |D| , 1 , ..., EOS |D| ] 2.2 DialoGPT DialoGPT (Zhang et al., 2020b) is a neural conversational response generation model, which inherits from GPT-2 (Radford et al., 2019) and is trained on 147M conversation-like exchanges extracted from Reddit comment chains.",
"There are 3 different sizes of the model with total parameters of 117M, 345M and 762M respectively.",
"It achieves state-of-the-art results over various dialogue generation benchmarks.",
"Given the dialogue context u i 1 = [ u i 1 , 1 , ..., u i 1 , | u i 1 | , EOS i 1 ] , DialoGPT aims to produce the response u i = [ u i, 1 , ..., u i, | u i | , EOS i ] , which can be formalized as the conditional probability of P ( u i | u i 1 ) .",
"It first takes the context word sequence of no more than 1024 tokens and outputs the representation of the sequence h i = ( h i 1 , 1 , ..., h i 1 , | u i 1 | , h i 1 , EOS i 1 ) , where h i 1 , EOS i 1 can be viewed as the representation of dialogue context u i 1 .",
"Then, DialoGPT starts decoding the response by attending to the context token representations and partially decoded response tokens until reaching EOS .",
"The loss function is the negative log-likelihood of the response word sequence L DialoGPT = (cid:80) | u i | t =1 log p ( u i,t | u i, 1 . . . u i,t 1 , u i 1 ) .",
"It's worth noting that DialoGPT tokenizes texts with the same byte-pair encoding as GPT-2, thus either context or response tokens are tokenized into subwords.",
"In this section, we will first introduce our DialoGPT annotator.",
"The workflow consists of three steps (1) dialogue preprocessing; (2) DialoGPT forward passing; (3) annotation.",
"The overall framework is shown in Figure 2. Then, we will describe our dialogue summarizer, including BART and PGN.",
"Dialogue preprocessing aims to transform the original dialogue D = [ p 1 , u 1 , 1 , ..., EOS 1 , ..., p |D| , u |D| , 1 , ..., EOS |D| ] into the format that DialoGPT can process.",
"Specifically, we transform it into two formats.",
"The first one is context-response pairs (shown in Figure",
"2(a)).",
"Given a dialogue D , two adjacent utterances ( u i 1 , u i ) are combined into a context-response pair, where i [2 : |D| ] .",
"The second one is dialogue sequence (shown in Figure",
"2(b)).",
"All the utterances in the dialogue D are serialized into a sequence [ u 1 , 1 , ..., EOS 1 , ..., u |D| , 1 , ..., EOS |D| ] , with EOS separates each utterance.",
"Note that either for context-response pairs or the dialogue sequence, we do not take speaker information p into consideration.",
"The reason is that DialoGPT is trained on a huge volume of conversational data without speaker information.",
"Even so, Zhang et al. (2020b) proved that DialoGPT can simulate real-world dialogues in various scenes and has already learned diverse response generation patterns between the same speakers or different speakers according to the given context.",
"DialoGPT forward passing has two purposes.",
"(1) For each context-response pair, we aim to get the word-level and utterance-level predicted losses for the response (shown in Figure",
"2(c)).",
"(2) For the dialogue sequence, we aim to get the representations for each EOS (shown in Figure",
"2(d)).",
"For the first purpose, given one context-response pair ( u i 1 , u i ) , we input the context words u i 1 = [ u i 1 , 1 , u i 1 , 2 , ..., u i 1 , | u i 1 | , EOS i 1 ] into the DialoGPT and start to decode the response.",
"At each decode step t , we calculate the negative log-likelihood between the predicted distribution and the golden target from the given response.",
"loss i,t = log p ( u i,t | u i,<t , u i 1 ) loss i = 1 | u i | + 1 | u i | +1 (cid:88) t =1 loss i,t (1) where loss i,t and loss i are the predicted losses for each word and each utterance respectively 2 .",
"For the second purpose, after the single forward pass of DialoGPT over the dialogue sequence, we can get representations H for each token on the top of the DialoGPT.",
"Afterward, we extract all representations for each EOS .",
"h EOS 1 , h EOS 2 , ..., h EOS |D| = H ( EOS ) (2) where each h EOS i can be viewed as the representation for the dialogue context [ u 1 , ..., u i ] .",
"2 Note that DialoGPT uses BPE to tokenize texts, thus, losses are calculated at the sub-word level.",
"We recover the word-level predicted loss by averaging the losses of multiple sub-words.",
"Besides, since the first utterance u 1 can only be served as the context, so we do not compute loss for u 1 .",
"Motivation Considering both background knowledge encoded in the DialoGPT and contextual information of the dialogue context, if one word in the golden response is difficult to be inferred from DialoGPT, we assume that it contains high information and can be viewed as a keyword.",
"Given a dialogue D , we have loss loss i,j for each word u i,j , where i [2 : |D| ] .",
"We extract r KE percent of words with the highest loss as keywords, where r KE is a hyper-parameter 3 .",
"Moreover, the names of all speakers P mentioned in the dialogue are also added into the keywords set.",
"Finally, we append a specific tag #KEY# and the keywords to the end of the original dialogue D .",
"The new dialogue with keywords annotation is DKE = [ p 1 , u 1 , 1 , ..., (cid:124) (cid:123)(cid:122) (cid:125) D #KEY# , P , Key 1 , Key 2 , ... (cid:124) (cid:123)(cid:122) (cid:125) keywords ] .",
"4 3 We use a heuristic rule to predetermine the possible value of r KE by calculating the average of length of summaries (remove stopwords) divided by the length of dialogues in the train set.",
"We search the best r KE based on the calculated score.",
"4 In experiments, we find that the predicted loss for the first word of each utterance is extremely high, probably due to the first word in the response is the most uncertain and hard to be predicted.",
"Thus, we ignore the first word of each utterance.",
"Motivation DialoGPT inherits a decoder architecture, where one token attends to all previous tokens to aggregate information.",
"Thus, given the representation h EOS i for each EOS i , it can be viewed as the representation for the dialogue context [ u 1 , u 2 , ..., u i ] .",
"Adding a new utterance u i +1 , if the new context representation h EOS i +1 is similar to the previous h EOS i , we assume that the new utterance u i +1 brings little information and has small effects on predicting the response, thus u i +1 becomes a redundant utterance.",
"We start with the last two dialogue context representations h EOS |D| 1 and h EOS |D| , and calculate the cosine similarity between them.",
"If the similarity score exceeds the threshold t RD , the utterance u |D| is detected as redundant.",
"t RD is a hyper-parameter.",
"If the similarity score doesn't exceed the threshold t RD , we move forward one step to calculate the similarity between h EOS |D| 2 and h EOS |D| 1 , and repeat the process until reaching h EOS 1 .",
"An example is shown in Figure 3. We insert a specific tag [RD] before each redundant utterance.",
"For example, if utterance u 1 is redundant, the new dialogue with redundant utterances annotation is DRD = [ p 1 , [RD] , u 1 , 1 , ..., EOS 1 , ..., p |D| , ..., EOS |D| ] .",
"Motivation DialoGPT is skilled in generating the context-consistent response.",
"Therefore, if the response is difficult to be predicted given the context based on DialoGPT, we assume the response may belong to another topic and there is a topic segmentation between the context and response.",
"Given a dialogue D , we have loss loss i for each utterance u i , where i [2 : |D| ] .",
"We select r TS percent of utterances with the highest loss as topic segmentation points.",
"r TS is a hyper-parameter 5 .",
"Before each selected utterance, we insert a specific tag [TS] .",
"For example, if there is a segmentation point between utterance u 1 and utterance u 2 , the new dialogue with topic annotation is DTS = [ p 1 , u 1 , 1 , ..., EOS 1 , [TS] , p 2 , u 2 , 1 , ..., EOS 2 , ... ] .",
"5 We use a heuristic rule to predetermine the possible value of r TS by calculating the average of the number of summary sentences divided by the number of dialogue utterances in the train set.",
"This is based on the observation that each sentence in golden summary tends to correspond to one topic of the dialogue.",
"We search the best r TS based on the calculated score.",
"We employ two kinds of summarizer, one is BART (Lewis et al., 2020), which is a Transformer-based model and pre-trained on a huge volume of data.",
"The other one is PGN (See et al., 2017), which is a LSTM-based model.",
"Both models inherit a typical sequence-to-sequence framework, which first encodes the source dialogue D to distributed representations and then generates the target summary S with the decoder.",
"BART BART adopts the Transformer (Vaswani et al., 2017) as the backbone architecture.",
"It first map the source dialogue into distributed representations, based on which a decoder generates the target sequence: XN = ENCODER ( X 0 ) N := n =1 FFN (cid:0) ATT ( X n 1 ) (cid:1) YM = DECODER ( Y 0 , XN ) M := m =1 FFN (cid:0) ATT (cid:0) ATT ( Y m 1 ) , XN (cid:1)(cid:1) (3) where N := n =1 denotes N identical encoding layers, M := m =1 denotes M identical decoding layers, X 0 denotes the sum of the word embeddings X emb and position embeddings X pos of D , Y 0 denotes that of the shifted right S , FFN ( ) denotes a position-wise feed-forward network, and ATT ( ) denotes a multi-head attention.",
"Residual connection (He et al., 2016) and layer normalization (Ba et al., 2016) are used in each sub-layer, which are suppressed in Equation 3 for clarity.",
"Finally, the output representation YM of the decoder is projected into the vocabulary space and the decoder outputs the highest probability token.",
"PGN PGN is a hybrid model of the typical Seq2Seq Attention model (Nallapati et al., 2016) and Pointer-Network (Vinyals et al., 2015).",
"The input dialogue is fed into the LSTM encoder token by token, producing the encoder hidden states.",
"The decoder receives word embedding of the previous word and generates a distribution to decide the target token, retaining decoder hidden states.",
"PGN not only allows to generate from the fixed vocabulary, but also allows to copy from the input tokens.",
"the outputs in a parallel training corpus ( D , S ) : arg max (cid:88) ( D , S ) ( D , S ) log p ( S | D ; ) .",
"We experiment on 2 datasets (statistics in Table 1): SAMSum (Gliwa et al., 2019) is a human-generated dialogue summary dataset, which contains dialogues in various scenes of the real-life.",
"AMI (Carletta et al., 2005) is a meeting summary dataset.",
"Each meeting contains four participants and is about a remote control design project.",
"DialoGPT We initialize DialoGPT with DialoGPT-large 6 .",
"For SAMSum, we set keywords extraction ratio r KE to 15, similarity threshold t RD to 0.99 and topic segmentation ratio r TS to 15.",
"For AMI, r KE is 4, t RD is 0.95 and r TS is 5 7 .",
"BART We initialize BART with bart.large 8 .",
"For fine-tuning on SAMSum, the learning rate is set to 3e-05, the dropout rate is 0.1, the warmup is set to 400.",
"At the test process, beam size is 5, minimum decoded length is 5 and maximum length is 100.",
"PGN The word embedding size is set to 300 and initialized with the pre-trained GloVe vector.",
"The dimension of encoder and pointer decoder is set to 200.",
"The dropout is set to 0.5.",
"The learning rate is 0.001.",
"At the test process, beam size is 10, minimum decoded length is 280 and maximum length is 450 9 .",
"For SAMSum, LONGEST-3 views the first three utterances as the summary.",
"TextRank (Mihal-cea and Tarau, 2004) is a traditional graph-based method.",
"Transformer (Vaswani et al., 2017) is a seq2seq method based on full self-attention operations.",
"D-HGN (Feng et al., 2020a) incorporates commonsense knowledge to help understand dialogues.",
"TGDGA (Zhao et al., 2020) uses topic words and models graph structures for dialogues.",
"DialoGPT (Zhang et al., 2020b) means that fine-tuning DialoGPT on the SAMSum.",
"MV-BART (Chen and Yang, 2020) is a BART-based method that incorporates topic and stage information.",
"For AMI, SummaRunner (Nallapati et al., 2017) is an extractive method based on hierarchical RNN network.",
"UNS (Shang et al., 2018) is a fully unsupervised and graph-based method.",
"TopicSeg (Li et al., 2019) incorporates topics to model the meeting.",
"HMNet (Zhu et al., 2020) is a transformer-based method that incorporates POS and entity information and is pre-trained on news summarization dataset.",
"We adopt ROUGE (Lin, 2004) and BERTScore (Zhang et al., 2020a) for evaluating our models.",
"The results on SAMSum and AMI are shown in Table 2 and 3 respectively.",
"We can see that using our annotated datasets DKE , DRD and DTS , both BART and PGN can obtain improvements.",
"Furthermore, our BART( DALL ) achieves SOTA performance.",
"For SAMSum, it's worth noting that BART( DKE ) performs better compared with BART( DRD ) and BART( DTS ).",
"We attribute this to the fact that keywords can retain essential information for shorter dialogues.",
"For AMI, PGN( DRD ) contributes the most, which shows the importance of detecting redundancy in verbose meeting transcripts.",
"Although HMNet and TopicSeg achieve better scores, HMNet needs news summarization dataset to pre-train the model and TopicSeg designs complex attention mechanism to incorporate topic information.",
"In terms of new embedding-based metric BERTScore (shown in Table 4), our method BART( DALL ) and PGN( DALL ) can consistently outperform the baseline models 10 .",
"10 Evaluation details are shown in the supplementary file.",
"We conduct a human evaluation of the dialogue summary to assess its informativeness, conciseness and coverage.",
"Informativeness measures how well the summary includes key information.",
"Conciseness measures how well the summary discards the redundant information.",
"Coverage measures how well the summary covers each part of the dialogue.",
"We randomly sample 100 dialogues (SAMSum) and 10 meetings (AMI) with corresponding generated summaries to conduct the evaluation.",
"In order to reduce variance caused by humans, we have 4 human evaluators and they were asked to rate each summary on a scale of 1 to 5 (higher is better) for each metric.",
"The results are shown in Table 5. We can see that our method can achieve higher scores in all three metrics.",
"Especially, combined with DRD , our model can get the best score in conciseness.",
"Besides, combined with DTS , our model can perform better in coverage.",
"However, HMNet gets the best score in informativeness and coverage.",
"We argue this is because HMNet forces a minimum summary length of 400.",
"Due to this, it scores the worst in conciseness.",
"For the AMI, we also find there is still a gap between the scores of generated summaries and the scores of golden summaries, indicating that the AMI is more difficult.",
"Effect of DialoGPT KE .",
"To verify the effectiveness of our DialoGPT KE method, we fine-tune BART on SAMSum, which is annotated by various keywords extraction methods.",
"The results are shown in Table 6. We can see that our method achieves higher scores.",
"The results also show that entities play an important role in the summary generation.",
"Besides, combined with DialoGPT embeddings, KeyBERT can get better results.",
"To give a quantitative evaluation, we view reference summary words as golden keywords and calculate the precision, recall and F 1 scores for extracted keywords.",
"The results are shown in Table 7. Directly using entities as keywords can get the best precision score.",
"However, both TextRank and Entities perform poorly in recall.",
"Our method gets the best score in terms of F 1 and its advantage is mainly reflected in recall score, which shows our method can extract more diverse keywords.",
"Effect of DialoGPT RD .",
"To verify the effectiveness of our DialoGPT RD method, we compare it with a Rule-based method (Dinarelli et al., 2009), which annotates utterances without noun, verb and adjective as redundant.",
"The results are shown in Table 8. We can see that our method performs better.",
"Especially, our method shows more advantages for long and verbose meeting transcripts in the AMI.",
"Effect of DialoGPT TS .",
"To verify the effectiveness of our DialoGPT TS method, we compare it with the C99 algorithm (Choi, 2000), which is a sentence similarity-based segmentation method.",
"Chen and Yang (2020) enhance it with BERT (De-vlin et al., 2019) embeddings.",
"We further combine the algorithm with DialoGPT embeddings.",
"The results are shown in Table 9. We can see that our method can get comparable results with the strong baseline C99(w/ DialoGPT emb).",
"For AMI, combined with golden topic annotation, PGN can achieve the best result, which shows modeling topics is an essential task for dialogue summarization.",
"Figure 4 shows summaries generated by different models for an example dialogue in the SAMSum dataset.",
"We can see that BART (Lewis et al., 2020) tends to generate long and redundant summaries.",
"By incorporating topic and stage information, MV-BART (Chen and Yang, 2020) can generate summaries that cover main topics of the dialogue.",
"However, it still suffers from redundancy problem.",
"Our BART( DALL ) can get higher ROUGE scores while generating better summaries.",
"The generated summary can include extracted keywords and correspond to each topic of the dialogue.",
"We also find that even some redundant utterances have already been detected, our model still generate the summary contains some redundant information.",
"We Model R-1 R-2 R-L SAMSum C99 w/ BERT emb 52.80 27.78 49.50 w/ DialoGPT emb 53.33 28.04 49.39 DialoGPT TS 53.34 27.85 49.64 AMI Golden 50.28 19.73 24.45 C99 w/ BERT emb 48.53 15.84 23.63 w/ DialoGPT emb 49.22 16.79 23.88 DialoGPT TS 48.59 16.07 24.05 Table 9: Test set results on SAMSum and AMI that are annotated with topic segmentation in various methods.",
"attribute this to the fact that the small dataset leads to insufficient training of the model.",
"Dialogue Summarization Current works mainly incorporate auxiliary information to help better modeling dialogues.",
"Some works used various types of keywords to identify the core part of the dialogue, including entities (Zhu et al., 2020), domain terminologies (Koay et al., 2020) and topic words (Zhao et al., 2020).",
"Some works aimed to reduce redundancy , Zechner (2002); Murray et al. (2005) used sentence-level similarity-based methods.",
"Some works incorporate topics as a coarse-grained dialogue structure (Li et al., 2019; Liu et al., 2019; Chen and Yang, 2020).",
"Other works also explored dialogue act (Goo and Chen, 2018), dialogue discourse (Feng et al., 2020b) and commonsense knowledge (Feng et al., 2020a).",
"In this paper, we combine three types of auxiliary information to help better modeling dialogues, including keywords, redundant utterances and topics.",
"Pre-trained Language Models Pre-trained models such as BERT (Devlin et al., 2019) and GPT-3 (Brown et al., 2020) have advanced various NLP tasks.",
"On one hand, some works utilized the knowledge contained in pre-trained models by fine-tuning on supervised data of downstream tasks (Qin et al., 2019; Liu and Lapata, 2019; Qin et al., 2020).",
"On the other hand, some works examined the knowledge in an unsupervised manner (Jiang et al., 2020; Xiao et al., 2020; Lin et al., 2020).",
"Ku-Rob : Hey there , what's up ?",
"mar et al. (2020) explored pre-trained models for conditional data augmentation.",
"Wang et al. (2020) used the knowledge in pre-trained models to construct knowledge graphs.",
"In this paper, we belong to the second paradigm and propose our DialoGPT annotator that can perform three annotation tasks in an unsupervised manner.",
"We investigate to use DialoGPT as unsupervised annotators for dialogue summarization, including keywords extraction, redundancy detection and topic segmentation.",
"We conduct our DialoGPT annotator on two datasets, SAMSum and AMI.",
"Experimental results show that our method consistently obtains improvements upon pre-traind summarizer (BART) and non pre-trained summarizer (PGN) on both datasets.",
"Besides, combining all three annotations, our summarizer can achieve new state-of-the-art performance on the SAMSum dataset.",
"This work is supported by the National Key R&D Program of China via grant 2018YFB1005103 and National Natural Science Foundation of China (NSFC) via grant 61906053 and 61976073.",
"We thank all the anonymous reviewers for their insightful comments.",
"We also thank Lifu Huang and Xin-wei Geng for helpful discussion."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"method",
"result",
"objective",
"other",
"other",
"other"
] |
[
"When developing topic models, a critical question that should be asked is: How well will this model work in an applied setting?",
"Because standard performance evaluation of topic interpretability uses automated measures modeled on human evaluation tests that are dissimilar to applied usage, these models' generalizability remains in question.",
"In this paper, we probe the issue of validity in topic model evaluation and assess how informative coherence measures are for specialized collections used in an applied setting.",
"Informed by the literature, we propose four understandings of interpretability.",
"We evaluate these using a novel experimental framework reflective of varied applied settings, including human evaluations using open labeling, typical of applied research.",
"These evaluations show that for some specialized collections, standard coherence measures may not inform the most appropriate topic model or the optimal number of topics, and current interpretability performance validation methods are challenged as a means to confirm model quality in the absence of ground truth data.",
"Topic modeling has become a popular tool for applied research such as social media analysis, as it facilitates the exploration of large document-collections and yields insights that would not be accessible by manual methods (Sinnenberg et al., 2017; Karami et al., 2020).",
"However, social media data can be challenging to model as it is both sparse and noisy (Zhao et al., 2011).",
"This has resulted in increased demand for short-text topic models that can handle these challenges (Lim et al., 2013; Zuo et al., 2016; Chen et al., 2015).",
"Topic word-sets, denoted T ws , are considered to be semantically related words that represent the latent component of the underlying topic's document-collection, denoted T dc .",
"Meaning is derived from these topics through the interpretation of either the T ws (Nerghes and Lee, 2019), the corresponding T dc (Maier et al., 2018), or both (Trnberg and Trnberg, 2016).",
"Since meaning requires topics to be interpretable to humans, empirical assurance is needed to confirm a novel topic models' capacity to generate semantically interpretable topics, as well as a method to guide model selection and other parameters such as the number of topics, K .",
"This is often achieved by calculating the coherence scores for T ws (Lau and Baldwin, 2016) Recent literature contradicts previous evaluations of some short-text topic models that claim superior interpretability (Li et al., 2018; Eickhoff and Wieneke, 2018; Bhatia et al., 2017).",
"Such rethinking flows from the fact there is no agreement on the best measure of interpretability (Lau et al., 2014b; Morstatter and Liu, 2017) and is compounded by the unclear relationship between human evaluation methodologies and automated coherence scores (Lau et al., 2014b).",
"Finally, despite assurances of generalizability and applicability, topic model evaluations in machine learning are conducted in experimental settings that are not representative of typical applied use.",
"This raises questions of whether coherence measures are suitably robust to measure topic interpretability and inform model selection in applied settings, particularly with challenging datasets like that of social media.",
"Advances in topic modeling for static document-collections have produced non-parametric approaches such as HDP-LDA, which employ sophisticated hierarchical priors that allow for different prior proportions (Teh et al., 2006).",
"Non-negative matrix factorization (Zhou and Carin, 2015), the use of word embeddings, and neural network methods (Zhao et al., 2021) are a few of these other innovations.",
"To support these advances, it is crucial to establish the robustness of topic modeling interpretability measures, especially given the growing trend towards evaluating topic models using coherence measures, often in the absence of perplexity or other predictive scores ( ? ).",
"Additionally, increasingly sophisticated methods for automatic topic labeling have been developed.",
"Beginning with Lau et al. (2011), this research relies on models which generate interpretable topics.",
"While these advances enhance the technologies available to conduct applied research, they do not address the underlying question of whether topic interpretability can be adequately assessed using coherence measures.",
"In this paper, we demonstrate a research gap in topic model evaluation methods in light of their growing use in specialized settings.",
"Previously declared state-of-the-art models are under-performing in applied settings (Li et al., 2018; Arnold et al., 2016), and little work has been done to improve application relevance (Hecking and Leydesdorff, 2019).",
"Following the work of (Lau and Baldwin, 2016; Bhatia et al., 2017; Hecking and Leydesdorff, 2019), this study examines whether coherence is a valid predictor of topic model interpretability when interpretability is defined as more than just the ability to label a T ws , and as the diversity of topic models, datasets and application tasks increases.",
"Earlier research has established a correlation between novel coherence measures and human ranking of interpretability, as measured by qualitative tests (Cheng et al., 2014; Newman et al., 2010a).",
"However, since bounded experimental settings constrain these tests, they are unlikely to reliably and consistently indicate topic quality in applied research settings.",
"As a result, we ask the following question: To what extent can we rely on current coherence measures as proxies for topic model interpretability in applied settings?",
"This work has significant practical implications.",
"It signals the need to re-develop interpretability measures and reappraise best-practice for validating and evaluating topic models and their applications.",
"Our research contributes the following:",
"1. Introduces a novel human-centered qualitative framework for evaluating interpretability in model development that mimics those processes seen in applied settings.",
"2. Demonstrates that the ranking of topic quality using state-of-the-art coherence measures is inconsistent with those produced through validation tasks performed in an applied setting.",
"3. Systematically quantifies the impact of model behavior, dataset composition, and other previously reported factors (Morstatter and Liu, 2017; Lau and Baldwin, 2016), on coherence measures for many topics across four variant datasets and two topic models.",
"4. Provide evidence to show that interpretability measures for evaluating T ws and T dc for applied work in specialized contexts (e.g., Twitter) are ill-suited and may hinder model development and topic selection.",
"The remainder of this paper is organized as follows.",
"Section 2 provides a review of related work around the interpretability of topic models.",
"Section 3 describes five propositions that have informed the design of interpretable topic models and their evaluation measures.",
"This is followed by a description of the experimental framework we designed to test these propositions.",
"Section 4 provides the results of these evaluations and Section 5 contains a discussion of findings.",
"This section provides a brief overview of work related to interpretability evaluation, followed by a review of the challenges associated with coherence optimization for specialized contexts.",
"Topic model interpretability is a nebulous concept (Lipton, 2018) related to other topic model qualities, but without an agreed-upon definition.",
"Measures of semantic coherence influence how easily understood the topNT ws are (Morstatter and Liu, 2017; Lund et al., 2019; Newman et al., 2010a; Lau et al., 2014b).",
"This is also referred to as topic understandability (Rder et al., 2015; Aletras et al., 2015).",
"A coherent topic is said to be one that can be easily labeled and thus interpreted (Newman et al., 2011; Morstatter and Liu, 2017), but only if the label is meaningful (Hui, 2001; Newman et al., 2010b,a).",
"Some have modeled coherence measures based on topic meaningfulness (Lau et al., 2014a); others state that a meaningful topic is not necessarily a useful one (Boyd-Graber et al., 2015).",
"Indeed, the literature remains divided over whether usefulness is a property of an interpretable topic (Rder et al., 2015), or if interpretability is a property of a useful topic (Aletras and Stevenson, 2013; Newman et al., 2010b).",
"Such terminological disagreement suggests that there are challenges to the progression of this area of research.",
"The ease of labeling a topic is assumed to be an expression of how coherent that topic is and thus its degree of interpretability.",
"This assump-tion is challenged when annotators provide different labels for a topic.",
"Morstatter and Liu (2017) presented interpretability from the perspective of both coherence and consensus, where consensus is a measure of annotator agreement about a top-ics' representation in its T dc .",
"Alignment is how representative a topic is of its T dc and is another understanding of interpretability (Ando and Lee, 2001; Chang et al., 2009; Mimno et al., 2011; Bhatia et al., 2017; Alokaili et al., 2019; Morstatter and Liu, 2017; Lund et al., 2019).",
"However, the probabilistic nature of topic models impede this measure.",
"The ambiguity of interpretability as a performance target raises questions about how topic models are used and evaluated.",
"Following the seminal work of Chang et al. (2009), the development of coherence measures and the human evaluation tasks that guide their design has been actively pursued (Newman et al., 2010a; Bhatia et al., 2017, 2018; Morstatter and Liu, 2017; Lau and Baldwin, 2016; Lund et al., 2019; Alokaili et al., 2019).",
"Newman et al. (2010a) showed that human ratings of topic coherence (observed coherence) correlated with their coherence measure when the aggregate Pointwise Mutual Information (PMI) pairwise scores were calculated over the topNT ws .",
"In addition to the word intrusion task (Chang et al., 2009), Mimno et al. (2011) validated their coherence measure for modeling domain-specific corpora using expert ratings of topic quality.",
"The measure takes the order of the topN T ws into account using a smoothed conditional probability derived from document co-occurrence counts.",
"This performance was further improved by substituting PMI for Normalized PMI ( CNPMI ) (Aletras and Stevenson, 2013; Lau et al., 2014b).",
"Aletras and Stevenson (2013) used crowdsourced ratings of topic usefulness to evaluate distributional semantic similarity methods for automated topic coherence.",
"Rder et al. (2015) conducted an exhaustive study evaluating prior work and developing several improved coherence measures.",
"Similarly, Ramrakhiyani et al. (2017) made use of the same datasets and evaluations and presented a coherence measure which is approximated with the size of the largest cluster produced from embeddings of the topN T ws .",
"Human evaluation tasks have also been created to measure how representative a topic model is of the underlying T dc (Chang et al., 2009; Bhatia et al., 2017; Morstatter and Liu, 2017; Alokaili et al., 2019; Lund et al., 2019).",
"Within computer science, topic modeling has been used for tasks such as word-sense disambiguation (Boyd-Graber and Blei, 2007), hierarchical information retrieval (Blei et al., 2003), topic correlation (Blei and Lafferty, 2007), trend tracking (Al-Sumait and Domeniconi, 2008), and handling short-texts (Wang et al., 2018).",
"Outside of computer science, topic modeling is predominantly used to guide exploration of large datasets (Agrawal et al., 2018), often with a human-in-the-loop approach.",
"Here topics are generated before some form of qualitative method is used to gain insights into the data.",
"These methods include exploratory content analysis (Korencic et al., 2018), critical discourse analysis (Trnberg and Trnberg, 2016), digital autoethnography (Brown, 2019), grounded theory (Baumer et al., 2017), and thematic analysis (Doogan et al., 2020; Andreotta et al., 2019).",
"Qualitative techniques make use of topics in different ways.",
"Open labeling of topics by Subject Matter Experts (SME) is followed by a descriptive analysis of that topic (Kim et al., 2016; Morstatter et al., 2018; Karami et al., 2018).",
"However, this method is subjective and may fail to produce the depth of insight required.",
"Supplementing a topic analysis with samples from the T dc increases the depth of insight (Eickhoff and Wieneke, 2018; Kagashe et al., 2017; Nerghes and Lee, 2019).",
"Alternatively, the T dc alone cam be used for in-depth analysis (Trnberg and Trnberg, 2016).",
"However, human evaluation tasks that require open labeling are not generally used to validate new coherence measures (O'Callaghan et al., 2015; Korencic et al., 2018).",
"We have generated five propositions about the relationship between coherence scores, human evaluation of topic models, and the different views of interpretability to explore the research question.",
"We conduct five experiments to interrogate these propositions and re-evaluate how informative coherence measures are for topic interpretability.",
"Because we are evaluating existing coherence measures, we do not employ automatic topic labeling techniques.",
"Instead, we make use of human evaluation tasks that reflect those conducted in applied settings.",
"Proposition",
"1. If coherence scores are robust, they should correlate.",
"The battery of coherence measures for evaluating novel topic models and automated labeling approaches are inconsistent across the literature.",
"Each new measure claims superior alignment to topic model interpretability.",
"As these measures are evolutionary (Rder et al., 2015), and there is no convention for which measure should be used, particularly as a standard measure of qualitative performance (Zuo et al., 2016; Zhao et al., 2017; Zhang and Lauw, 2020), they are considered notionally interchangeable.",
"Thus, we would expect that there would be some correlation between these measures.",
"However, previous studies have not considered the impact that the data type or model has on the coherence scores.",
"Particularly for non-parametric models, these issues may be compounded by how coherence measures are presented as an aggregate, e.g., The presentation of the top-N models.",
"Indeed, studies reporting multiple coherence measures have demonstrated inconsistencies at the model-level that are obscured during reporting (Blair et al., 2020).",
"Proposition",
"2. An interpretable topic is one that can be easily labeled.",
"How easily a topic could be labeled has been evaluated on an ordinal scale where humans determined if they could hypothetically give a topic a label (Mimno et al., 2011; Morstatter and Liu, 2017).",
"However, humans are notoriously poor at estimating their performance, particularly when they are untrained and do not have feedback on their performance (Dunning et al., 2003; Morstatter and Liu, 2017).",
"Thus, a rater's perception of whether they could complete a task is actually less informative than having them complete the task.",
"Proposition",
"3. An interpretable topic has high agreement on labels.",
"Agreement on a topic label is considered a feature of interpretability by Morstatter and Liu (2017), who propose consen-sus as a measure of interpretability.",
"A high level of agreement on topic labels, particularly in crowd-sourcing tasks, is seen as a means to infer that a T ws is interpretable.",
"However, in applied tasks, a topic is described in a sense-making process resulting in one coherent label.",
"Thus, the consensus task is not necessarily a reasonable means to infer interpretability.",
"A robust way to measure agreement on a topic label is needed.",
"Inter-coder reliability (ICR) measures are an appropriate means to achieve this.",
"Proposition",
"4. An interpretable topic is one where the document-collection is easily labeled.",
"The investigation of topic document-collections is an emerging trend in the applied topic modeling literature.",
"In these studies, authors have either used a topics top documents to validate or inform the labels assigned to T ws (Kirilenko et al., 2021), or have ignored the T ws in favor of qualitative analysis of the richer T dc (Doogan et al., 2020).",
"The use of topic modeling for the exploration of document-collections requires a T dc to be coherent enough that a reader can identify intertextual links between the documents.",
"The label or description given to the T dc results from the readers' interpretation of individual documents relative to the other documents in the collection.",
"T dc that have a high degree of similarity between their documents will be easiest to interpret and therefore label.",
"The ease of labeling a T dc decreases as the documents become more dissimilar.",
"Proposition",
"5. An interpretable topic word-set is descriptive of its topic document-collection.",
"The alignment of T ws to T dc is an expected property of a good topic (Chang et al., 2009), which human evaluation tasks have been developed to assess.",
"Typically these tasks ask annotators to choose the most and/or least aligned T ws to a given document (Morstatter and Liu, 2017; Lund et al., 2019; Alokaili et al., 2019; Bhatia et al., 2018), identify an intruder topic (Chang et al., 2009; Morstatter and Liu, 2017), rate their confidence in a topic-document pair (Bhatia et al., 2017), or select appropriate documents given a category label (Aletras et al., 2017).",
"However, none of these methods address the need for the topic document-collection to be evaluated and labeled.",
"Furthermore, they generally use one document and/or are not comparable to applied tasks.",
"The Auspol-18 dataset was constructed from 1,830,423 tweets containing the hashtag #Auspol, an established Twitter forum for the discussion of Australian politics.",
"The diminutives, slang, and domain-specific content provide a realistic example of a specialized context.",
"Four versions of the dataset were constructed from a subset of 123,629 tweets; AWH (contains the 30 most frequent hash-tags), AWM (contains the 30 most frequent mentions of verified accounts), AWMH (contains the 30 most frequent hashtags and 30 most frequent mentions of verified accounts), and AP (contains neither hashtags nor mentions).",
"Pre-processing included stopword removal, POS-tagging, lemma-tization, exclusion of non-English tweets, duplicate removal, removal of tokens with a frequency n < 10 , and removal of tweets with n < 5 tokens, and standardization of slang, abbreviations (Agrawal et al., 2018; Doogan et al., 2020) 1 .",
"To investigate interpretability in an applied setting, we compare LDA to MetaLDA (Zhao et al., 2017), a recent non-parametric topic model designed to improve short-text topic modeling by leveraging the incorporation of the document and word meta-information using word embeddings as well as non-parametrically learning topic proportions.",
"Despite the many extensions to LDA, the vanilla model maintains popularity among applied researchers (Sun et al., 2016), and as the baseline model, it is necessary to compare LDA with a model purpose-built for short-text applications.",
"MetaLDA is one reasonable representative of such models and has demonstrated effectiveness on Twitter data for applied work (Doogan et al., 2020).",
"The extensive effort of human labeling in our experiments (see Section 3.4) precludes us from adding more models.",
"LDA and MetaLDA are available in the MetaLDA package 2 , which is implemented on top of Mallet (McCallum, 2002).",
"Default parameter settings were used for both LDA and MetaLDA.",
"We use Glove2Vec embeddings trained on the Wikipedia corpus (Pen-nington et al., 2014) for MetaLDA.",
"We constructed topic sets with the number of topics K = { 10 , 40 , 20 , 60 , 80 , 100 , 150 , 200 } .",
"Several coherence measures were evaluated.",
"These were C Umass (Mimno et al., 2011), CV , CP (Rder et al., 2015), CA and CNPMI (Aletras and Stevenson, 2013).",
"These were calculated for each topic using the Palmetto package 3 using the top ten most frequent words.",
"Along with the default CNPMI , which is calculated using Wikipedia, we introduced 1 Tweet IDs and pre-processing details are available at: https://github.com/wbuntine/auspoldata 2 https://github.com/ethanhezhao/MetaLDA 3 http://aksw.org/Projects/Palmetto.html CNPMI-ABC , which is calculated using a collection of 760k Australian Broadcasting Company (ABC) news articles 4 with 150 million words (enough to make the CNPMI scores stable), and CNPMI-AP calculated using the AP dataset and is used to test CNPMI but with statistics drawn from the training data.",
"We report the average scores and the standard deviations over five random runs.",
"A primary concern in machine learning research is the need to establish model performance.",
"Following the recent trend to analyze T dc , we devised qualitative tests for the assessment of whether the T ws and T dc were adequately aligned and whether current performance measures are informative of this alignment.",
"We also tested to see if there is a relationship between topic alignment and the topic diagnostic statistics; effective number of words 5 , and topic proportion, denoted D ew and D tp , respectively.",
"Topic Word-sets: Four SMEs were recruited from a multidisciplinary pool of researchers who were representative of the political-ideological spectrum and who were Australian English speakers.",
"They were shown the same topics consisting of the top-10 words ranked by term frequency that were generated by LDA and MetaLDA on AP, AWH, and AWM for K =1060 topics 6 , producing a total of 3,120 labels (780 for each SME) generated for the 390 topics (130 per model-dataset combination).",
"Their task was to provide a descriptive label for each T ws and to use NA' if they were unable to provide a label.",
"Appendix A provides an example of this task.",
"Two measures were constructed from these labels.",
"The first was the number of raters able to label the topic, a count between 04 denoted Q nbr .",
"The second was a simple ICR measure, Percentage Agreement denoted Q agr , which calculated as the number of times a set of annotators agree on a label, divided by the total number of annotations, as a percentage.",
"Topic Document-collections: Two SMEs analyzed the T dc s of the 60 topics each modeled by LDA and MetaLDA on the AP dataset, referred to hereafter as the qualitative set .",
"Samples of T dc generated by each model ( K =1060) were reviewed, and those generated from both models 60-topic sets 4 http://www.abc.net.au/news/archive 5 For word proportion vector (cid:126)p , this is e Entropy ( (cid:126)p ) .",
"were found to be of equal or higher quality than those produced by other values of K .",
"The SMEs reviewed the top-30 tweets representative of a topic and provided a label for each tweet.",
"They then inductively determined a label or phrase describing that T dc .",
"They noted any key phrases, names, or other terms that were consistent across the collection.",
"The SMEs were experienced at annotating such datasets and were familiar with the online #Auspol community.",
"The SMEs then discussed the results together and agreed on a final label for each T dc .",
"The SMEs were asked to rate on a scale of 13 how difficult it was to label each T dc , where 1 was difficult, 3 was easy, and 0 was where a label could be assigned.",
"This qualitative statistic is denoted Q dif .",
"The researchers then scored, on a scale of 15, the degree of alignment between topic labels and the labels assigned to their corresponding collections.",
"A score of 5 indicated the labels were identical, and a score of 0 indicated the T ws and/or T dc was incoherent.",
"This statistic is denoted Q aln .",
"Examples of these tasks are in Appendix A. 3.5 Statistical Tests We measure the strength of the association between variables using Pearson's r correlation coefficient in evaluation 1 (see section 4.1) and Spearman's correlation coefficient in evaluations 25 (see sections 4.2, 4.3, 4.4, and 4.5).",
"Pearson's r is used in the few papers that evaluate coherence scores over the same datasets (Rder et al., 2015; Lau et al., 2014b).",
"The practical reason for using Pearson's r for our evaluation of proposition 1 was to make valid comparisons with these studies.",
"The statistical justification for using Pearson's r (rather than Spearman's ) is that the datasets are continuous (neither is ordinal, as Spearman's requires) and believed to have a bivariate normal distribution.",
"7 Spearman's is only appropriate when the relationship between variables is monotonic, which has not been consistently demonstrated for coherence (Rder et al., 2015; Bovens and Hartmann, 2004).",
"Spearman's is appropriate to assess the association between coherence scores and human judgments in evaluations 25 8 .",
"It is a preferred method 7 We confirmed this with a Kolmogorov-Smirnov test for normality on the coherence scores.",
"8 Although Kendall's has been used for similar evaluations (Rosner et al., 2013), it is unreliable when the range of each dataset varies significantly as in these experiments (Sanderson and Soboroff, 2007).",
"for such tasks(Aletras and Stevenson, 2013; Newman et al., 2010a) as it is unaffected by variability in the range for each dataset (Lau et al., 2014b).",
"Here we detail the results of our analysis of the five propositions about interpretability evaluation.",
"As per proposition 1, coherence measures should be robust and highly correlated.",
"To test this proposition, we conducted a Pearson's correlation analysis of paired coherence measures calculated for K =10 60 for each model-dataset combination.",
"Pooling the results for K and the three datasets, we calculate the x r for LDA and MetaLDA.",
"CNPMI and CP scores were strongly correlated for all datasets.",
"Ranging from x r =0.7790.902 for LDA, and x r =0.7700.940 for MetaLDA.",
"CNPMI and CNPMI-ABC also showed a moderate-to-strong correlation for all datasets with LDA ranging from x r =0.7190.769, and MetaLDA from x r =0.6060.716.",
"CNPMI-ABC appears more sensitive to changes in K than CP .",
"No significant trends were seen between other coherence measures calculated for any dataset.",
"These results are reported in Appendix B. Methods to aggregate coherence scores may mask any differences in the models' behaviors as K increases.",
"To test this, aggregate coherence measures, typical of the empirical evaluation of topic models, were calculated per value of K .",
"These were the mean of all topics (Average), the mean for all topics weighted by the topic proportion (WeightedAverage), and the mean of the Top-N percent of ranked topics by coherence score (Top-Npcnt), where N = { 25 , 50 , 80 } .",
"Both models showed trends in aggregated coherence scores calculated on the AP dataset.",
"As shown in Figure 1, the peak for each measure varies according to different values of K and between models.",
"For instance, aggregates of both models CNPMI and CNPMI-ABC peak at 60 and 10 topics, respectively.",
"However, CV aggregate peaks are completely divergent between models, K =200 for MetaLDA and K =50 for LDA.",
"Indeed, the two models favored different coherence measures and aggregate methods.",
"Generally, MetaLDA exhibits superior performance across all aggregates for CV and CA , while LDA is superior for C Umass .",
"No-Figure 1: Comparison of LDA (triangle) and MetaLDA (circle) aggregated coherence scores for the AP dataset.",
"Scores are shown on the y-axis, and K is shown on the x-axis.",
"Individual points are averaged across five runs, where the typical sample standard is 0.005, but up to 0.010 for K =20.",
"tably, MetaLDA shows superior CNPMI , CNPMI-ABC , CNPMI-AP scores for Top20pcnt, Top50pcnt, and Top80pcnt aggregations, but is inferior when the full average of these scores is calculated.",
"Other datasets are broadly similar and shown in Appendix B. We also compare MetaLDA with LDA.",
"Pooling the results for K =10200 for each of the four datasets, we get a set of differences in the scores and compute the p -value for a one-sided student t -test to determine whether LDA has higher average coherence scores than MetaLDA.",
"MetaLDA yields significantly higher CNPMI scores calculated using the Top20pcnt ( p <0.01) and Top50pcnt of topics ( p <0.05).",
"Conversely, LDA yields significantly higher CNPMI scores for the other aggregates ( p <0.01).",
"Except for the full average, MetaLDA achieves significantly higher ( p <0.01) CNPMI-ABC , CNPMI-AP , and CV scores than LDA for the other aggregate methods.",
"Disturbingly, the best models, or optimal K varies depending on the coherence measure and the aggregate measure used to calculate it.",
"This has implications for topic model selection in applied settings, where coherence is used to inform K (Kir-ilenko et al., 2021).",
"When repeating the analysis using different K , a second trend emerges: MetaLDA significantly outperforms LDA in CNPMI for smaller K on average but loses out for larger K .",
"Results from our qualitative analysis confirmed this occurred because LDA had many less frequent topics (e.g., when K = 60 , all topics occur about 1/60 of the time), unlike MetaLDA, which mixes more and less frequent topics.",
"Proposition 2 states that if topics can be labeled they are interpretable.",
"Coherence as a measure of interpretability should then be predictive of topics that can be labeled.",
"To evaluate this proposition, a Spearman's correlation coefficient was used to assess the relationship between coherence measures and the number of raters able to label the T ws , Q nbr , for each of the 130 topics produced per model-dataset combination.",
"These results are available in Appendix C. There was no significant correlation between any coherence measure and Q nbr .",
"Interestingly, the SMEs reported several topics they could not label despite their high coherence scores.",
"For instance, the LDA modeled topic red, wear, flag, blue, gold, black, tape, tie, green, iron could not be labeled despite being the 9 th /60 highest ranked topic for CNPMI .",
"Proposition 3 states an interpretable topic is one where there is high agreement between annotators on its label.",
"As such, coherence should align to measures of consensus or agreement.",
"To evaluate this proposition, we calculate the gold-standard ICR measures, Fleiss' kappa ( ) (Fleiss, 1971) and Krippendorff's alpha ( ) (Krippendorff, 2004).",
"Both allow for multiple coders and produce a chance-corrected estimate of ICR but do not facilitate the isolation of low-agreement topics.",
"For this, we also calculated the Percentage Agreement Q agr for each topic, as shown in Appendix D. Generally, , , and Q agr improved as K increased.",
"As shown in Table 1, LDA consistently outperformed MetaLDA when K =60 across all three datasets and generally attained higher , , and Q agr scores than MetaLDA.",
"There was a moderate-to-strong agreement between SMEs, a reliable result for an open labeling task (Landis and Koch, 1977).",
"However, the performance of each model was notably affected by the datasets.",
"LDA outperformed MetaLDA on the AP dataset across all three measures except for when K =20, and for Q agr when K =10.",
"Except for when K =40, MetaLDA achieved higher or comparable scores to LDA on the AWH dataset when K =2040, but outperformed LDA only when K =1020 on the AWM dataset.",
"Kripp.",
"Fliess Pcnt.",
"Q agr LDA Meta LDA Meta LDA Meta AP 0.584 0.486 0.578 0.485 0.503 0.492 AWH 0.512 0.498 0.527 0.515 0.439 0.411 AWM 0.513 0.447 0.535 0.492 0.428 0.369 Table 1: Krippendorff's , Fleiss' , and Q agr ICR statistics for topic labeling when K =60.",
"Spearman's was calculated to measure the strength of the relationship between Q agr and the generated coherence measures.",
"As shown in Appendix D, results were random with no significant correlations.",
"As shown in Table 2, there was a statistically significant correlation between Q agr and Q nbr when K =60.",
"Coherence measures did not correlate with Q agr , and in some cases, were contradictory.",
"For example, Q agr generally increases with K (and our experts reported that labeling was often easier for smaller topics), but coherence measures such as CA and CNPMI-ABC tended to decrease (in Figure 1).",
"These results show that the two models show different sensitivities to dataset preparation and the value of K .",
"Proposition 4 states that topics that are interpretable have a T dc that is easily labeled.",
"To evaluate this proposition, a Spearman's was used to assess the relationship between coherence measures and SME ratings of T dc labeling difficulty, Q dif .",
"The full set, Top25pcnt, top50pcnt, and bottom 15% (Bot15pcnt) of ranked Q dif scores were analyzed.",
"The only notable correlation was between the Bot15pcnt of LDAT dc for CNPMI-ABC ( =-0.817, p =<0.01).",
"Interestingly, when ranked by topic diagnostic D ew , the Top25pcnt and Top50pcnt of T dc s showed moderate correlation with Q dif for MetaLDA ( =-0.764, p <0.01; =-0.630, p <0.01).",
"A repeat analysis with topic diagnostic D tp did not yield any statistically significant results.",
"However, we observed that for T dc s produced by MetaLDA, the three largest and three smallest topics could not be labeled.",
"By contrast, the LDA T dc s that were not interpretable were from the smallest 20% of topics.",
"We hypothesize that this distinction results from MetaLDA's broadly distributed D tp ( 0 . 017 0 . 155 ), which features several very large and very small topics.",
"By comparison, LDA D tp is approximately uniformly distributed ( 0 . 017 0 . 001 ).",
"Proposition 5 states that an interpretable topic is one that is descriptive of the T dc .",
"To test this proposition, we constructed an alignment score Q aln , which rate the similarity between the standardized topic label from T ws and the label from T dc .",
"Similar to the evaluation of Proposition 4, we conducted a Spearman's to test for a relationship between Q aln , coherence measures, and diagnostic scores.",
"The following illustrates a high scoring, but poorly aligned topic with a CNPMI of 0.073.",
"T ws : law, bill, power, gun, democracy, control, freedom, rule, protect, legislation was labeled Gun con-trol, but the T dc was labeled Foreign Interference Act.",
"Appendix F contains additional examples.",
"LDA showed a strong relationship between Q aln and CNPMI-ABC for the Top25pcnt of topics ( =0.825, p <0.01), but the relationship was weak for other coherence measures.",
"No coherence measures were correlated with MetaLDA Q aln scores.",
"As per section 4.4, we repeated the analysis by ranking topics by D ew .",
"MetaLDA showed a strong-to-moderate correlation between D ew and Q aln for the Top25pcnt ( =-0.776, p =<0.01), Top50pcnt ( =-0.646, p <0.01), and Bot15pcnt ( =0.693, p =0.039) of topics, making D ew a potentially useful proxy for alignment for MetaLDA.",
"We repeated the work of Zhao et al. (2017), who demonstrated that when the top-ranked topics by CNPMI are considered, MetaLDA produces higher CNPMI scores than LDA.",
"However, when CNPMI was measured using alternative aggregate methods, we discovered that LDA outperformed MetaLDA.",
"This is likely to be because the smaller topics in MetaLDA can be effectively ignored or scrapped, while in LDA, all topics are of comparable size and are used by the model.",
"Other non-parametric topic models are belived to behave similarly.",
"While MetaLDA generated higher CNPMI-ABC scores than LDA for all aggregates, it was highly dependent on dataset heterogeneity and the value of K .",
"This should indicate that MetaLDA is more adaptive to specialized language, an effect expected in other topic models supported by word embeddings.",
"The comparative performance of coherence measures can vary significantly depending on the aggregate calculation method used and the way the data has been prepared.",
"This latter point has been well established in the literature, most notably for Twitter data (Symeonidis et al., 2018), but is often overlooked when evaluating novel topic models.",
"This is a cause for concern, given the growing re-liance on coherence measures to select the optimal model or K in applied settings (Xue et al., 2020; Lyu and Luli, 2021).",
"Propositions 2 and 3 addressed T ws interpretability.",
"We have demonstrated the difference between comprehending a topic and providing a topic label that is both informative and reliable.",
"However, coherence measures may not be informative of these qualities.",
"Propositions 4 and 5 addressed T dc interpretability.",
"We have demonstrated that the ease of labeling a T dc and the alignment between T ws and T dc does not correlate with coherence measures.",
"Additionally, we identified several areas for future research into the use of diagnostic statistics in applied settings.",
"We observed unexpected behaviors in the distributions of D ew and D tp after a comparative analysis between LDA and the non-parametric model MetaLDA, affecting the interpretability of both T ws and T dc .",
"Correlations between Q dif /Q aln and D ew /D tp for MetaLDA, for example, indicate that these topic diagnostics could assist in evaluating T d c interpretability.",
"We have shown that coherence measures can be unreliable for evaluating topic models for specialized collections like Twitter data.",
"We claim this is because the target of interpretability is ambiguous, compromising the validity of both automatic and human evaluation methods 9 .",
"Due to the advancements in topic models, coherence measures designed for older models and more general datasets may be incompatible with newer models and more specific datasets.",
"Our experiments show that non-parametric models, such as MetaLDA, which employs embeddings to improve support for short-texts, behaves differently to LDA for these performance and diagnostic measures.",
"This is critical because recent research has focused on sophisticated deep neural topic models (Zhao et al., 2021), which make tracing and predicting behaviors more challenging.",
"Abstractly, we may compare the use of coherence measures in topic modeling to the use of BLEU in machine translation.",
"Both lack the finesse necessary for a complete evaluation, as is now the case with BLEU (Song et al., 2013).",
"Additionally, our study demonstrated that an examination of the T dc could provide greater insights into topic model behaviors and explained many of the observed problems.",
"We argue for the representation of topics as a combination of thematically related T dc and T ws , and the further adoption of empirical evaluation using specialized datasets and consideration of T dc interpretability.",
"To date, few papers have attempted this combination (Korencic et al., 2018).",
"However, we believe coherence measures and automated labeling techniques will continue to play a critical role in applied topic modeling.",
"Contextually relevant measures like CNPMI-ABC and topic diagnostics like D ew can be key indicators of interpretability.",
"Aside from the empirical evaluation of novel topic models, new automated labeling techniques, having proven themselves useful for labeling T tw , should be extended for T dc .",
"We thank Callum Waugh, Elliot Freeman and Elizabeth Daniels for conducting the expert annotations.",
"We also thank Henry Linger for providing feedback on earlier drafts.",
"The first author discloses the following financial support for the research: an Australian Government Research Training Program (RTP) Stipend and RTP Fee-Offset Scholarship, and an Australian Government Defence Science and Technology Group Research scholarship.",
"9 Specifically, construct validity, which confirms if the operational definition of a variable (interpretability) reflects the true theoretical meaning of a concept (O'Leary-Kelly and Vokurka, 1998).",
"This project has been reviewed and approved by the Monash University Human Research Committee (Project ID: 18167), subject to abidance with legislated data use and protection protocols.",
"In particular, the Twitter Inc. developers policy prohibits the further distribution of collected tweets and associated metadata by the authors group, with the exception of tweet IDs which may be distributed and re-hydrated.",
"The subject matter of the tweets collected is Australian Politics.",
"We have forgone the use of material included in the paper that would be offensive or problematic to marginalized groups in the Australian political context."
] | [
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"result",
"result",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Automatically extracting key information from scientific documents has the potential to help scientists work more efficiently and accelerate the pace of scientific progress.",
"Prior work has considered extracting document-level entity clusters and relations end-to-end from raw scientific text, which can improve literature search and help identify methods and materials for a given problem.",
"Despite the importance of this task, most existing works on scientific information extraction (SciIE) consider extraction solely based on the content of an individual paper, without considering the paper's place in the broader literature.",
"In contrast to prior work, we augment our text representations by leveraging a complementary source of document context: the citation graph of referential links between citing and cited papers.",
"On a test set of English-language scientific documents, we show that simple ways of utilizing the structure and content of the citation graph can each lead to significant gains in different scientific information extraction tasks.",
"When these tasks are combined, we observe a sizable improvement in end-to-end information extraction over the state-of-the-art, suggesting the potential for future work along this direction.",
"We release software tools to facilitate citation-aware SciIE development.",
"1 1 Introduction The rapid expansion in published scientific knowledge has enormous potential for good, if it can only be harnessed correctly.",
"For example, during the first five months of the global COVID-19 pandemic, at least 11000 papers were published online about the novel disease (Hallenbeck, 2020), with each representing a potential faster end to a global pandemic and saved lives.",
"Despite the value of this quantity of focused research, it is infeasible 1 https://github.com/viswavi/ScigraphIE Speech Papers Vision Papers NLP Papers Citation Graph MLPapers [...] The very deep convolutional networks are inspired by the VGGNet architecture introduced in [16] for the 2014 ImageNet classification challenge , with the central idea to replace large convolutional kernels by small 33",
"for the scientific community to read this many papers in a time-critical situation, and make accurate judgements to help separate signal from the noise.",
"To this end, how can machines help researchers quickly identify relevant papers?",
"One step in this direction is to automatically extract and organize scientific information (e.g. important concepts and their relations) from a collection of research articles, which could help researchers identify new methods or materials for a given task.",
"Scientific information extraction (SciIE) (Gupta and Manning, 2011; Yogatama et al., 2011), which aims to extract structured information from scientific articles, has seen growing interest recently, as reflected in the rapid evolution of systems and datasets (Luan et al., 2018; Gabor et al., 2018; Jain et al., 2020).",
"Existing works on SciIE revolve around extraction solely based on the content of different parts of an individual paper, such as the abstract or conclusion (Augenstein et al., 2017; Luan et al., 2019).",
"However, scientific papers do not exist in a vacuum they are part of a larger ecosystem of papers, related to each other through different conceptual relations.",
"In this paper, we claim a better understanding of a research article relies not only on its content but also on its relations with associated works, using both the content of related papers and the paper's position in the larger citation network.",
"We use a concrete example to motivate how information from the citation graph helps with SciIE, considering the task of identifying key entities in a long document (known as salient entity classifica-tion) in Figure 1.",
"In this example, we see a paper describing a speech recognition system (Saon et al., 2016).",
"Focusing on two specific entities in the paper (Ima-geNet classification challenge and Switchboard task), we are tasked with classifying whether each is critical to the paper.",
"This task requires reasoning about each entity in relation to the central topic of the paper, which is a daunting task for NLP considering that this paper contains over 3000 words across 11 sections.",
"An existing state-of-the-art model (Jain et al., 2020) mistakenly predicts the non-salient entity ImageNet classification challenge as salient due to the limited contextual information.",
"However, this problem is more approachable when informed of the structure of the citation graph that conveys how this paper correlates with other research works.",
"Examining this example paper's position in the surrounding citation network suggests it is concerned with speech processing, which makes it unlikely that ImageNet is salient.",
"2 The clear goal of incorporating inter-article information, however, is hindered by a resource challenge: existing SciIE datasets that annotate papers with rich entity and relation information fail to include their references in a fine-grained, machine-readable way.",
"To overcome this difficulty, we build on top of an existing SciIE dataset and align it with a source of citation graph information, which finally allows us to explore citation-aware SciIE.",
"Architecturally, we adopt the neural multi-task model introduced by Jain et al. (2020), and establish a proof of concept by comparing simple ways of incorporating the network structure and textual content of the citation graph into this model.",
"Experimentally, we rigorously evaluate our methods, which we call CitationIE , on three tasks: mention identification, salient entity classification, and document-level relation extraction.",
"We find that leveraging citation graph information provides significant improvements in the latter two tasks, in-2 Our proposed method actually makes correct predictions on both these samples, where the baseline model fails on both.",
"cluding a 10 point improvement on F1 score for relation extraction.",
"This leads to a sizable increase in the performance of the end-to-end CitationIE system relative to the current state-of-the-art, Jain et al. (2020).",
"We offer qualitative analysis of why our methods may work in 5.3.",
"We consider the task of extracting document-level",
"relations from scientific texts.",
"Most work on scientific information extraction has used annotated datasets of scientific abstracts, such as those provided for SemEval 2017 and SemEval 2018 shared tasks (Augenstein et al., 2017; Gabor et al., 2018), the SciERC dataset (Luan et al., 2018), and the BioCreative V Chemical Disease Relation dataset (Wei et al., 2016).",
"We focus on the task of open-domain document-level relation extraction from long, full-text documents.",
"This is in contrast to the above methods that only use paper abstracts.",
"Our setting is also different from works that consider a fixed set of candidate relations (Hou et al., 2019; Kardas et al., 2020) or those that only consider IE tasks other than relation extraction, such as entity recognition (Verspoor et al., 2011).",
"We base our task definition and baseline models on the recently released SciREX dataset (Jain et al., 2020), which contains 438 annotated papers, 3 all related to machine learning research.",
"Each document consists of sections D = { S 1 , . . . , SN } , where each section contains a sequence of words S i = { w i, 1 , . . . , w i,N i } .",
"Each document comes with annotations of entities, coreference clusters, cluster-level saliency labels, and 4-ary document-level relations.",
"We break down the end-to-end information extraction process as a sequence of these four related tasks, with each task taking the output of the preceding tasks as input.",
"Mention Identification For each span of text within a section, this task aims to recognize if the span describes a Task , Dataset , Method , or Metric entity, if any.",
"Coreference This task requires clustering all entity mentions in a document such that, in each cluster, every mention refers to the same entity (Varkel and Globerson, 2020).",
"The SciREX dataset 3 The dataset contains 306 documents for training, 66 for validation, and 66 for testing.",
"Salient Entity Classification Given a cluster of mentions corresponding to the same entity, the model must predict whether the entity is key to the work described in a paper.",
"We follow the definition from the SciREX dataset (Jain et al., 2020), where an entity in a paper is deemed salient if it plays a role in the paper's evaluation.",
"Relation Extraction The ultimate task in our IE pipeline is relation extraction.",
"We consider relations as 4-ary tuples of typed entities ( E Task , E Dataset , E Method , E Metric ) , which are required to be salient entities.",
"Given a set of candidate relations, we must determine which relations are contained in the main result of the paper.",
"We base our work on top of the model of Jain et al. (2020), which was introduced as a strong baseline accompanying the SciREX dataset.",
"We refer the reader to their paper for full architectural details, and briefly summarize their model here.",
"This multi-task model performs three of our tasks (mention identification, saliency classification, and relation extraction) in a sequence, treating coreference resolution as an external black box.",
"While word and span representations are shared across all tasks and updated to minimize multi-task loss, the model trains each task on gold input.",
"Figure 2 summarizes the baseline model's end-to-end architecture, and highlights the places where we propose improvements for our CitationIE model.",
"Feature Extraction The model extracts features from raw text in two stages.",
"First, contextualized word embeddings are obtained for each section by running SciBERT (Beltagy et al., 2019) on that section of text (up to 512 tokens).",
"Then, the embeddings from all words over all sections are passed through a bidirectional LSTM (Graves et al., 2005) to contextualize each word's representation with those from other sections.",
"Mention Identification The baseline model treats this named entity recognition task as an IOBES sequence tagging problem (Reimers and Gurevych, 2017).",
"The tagger takes the SciBERT-BiLSTM (Beltagy et al., 2019; Graves et al., 2005) word embeddings (as shown in the Figure 2), feeds them through two feedforward networks (not shown in Figure 2), and produces tag potentials at each word.",
"These are then passed to a CRF (Lafferty et al., 2001) which predicts discrete tags.",
"Span Embeddings For a given mention span, its span embedding is produced via additive attention (Bahdanau et al., 2014) over the tokens in the span.",
"Coreference Using an external model, pairwise coreference predictions are made for all entity mentions, forming coreference clusters.",
"Salient Entity Classification Saliency is a property of entity clusters, but it is first predicted at the entity mention level.",
"Each entity mention's span embedding is simply passed through two feedforward networks, giving a binary saliency prediction.",
"To turn these mention-level predictions into cluster-level predictions, the predicted saliency scores are max-pooled over all mentions in a coreference cluster to give cluster-level saliency scores.",
"Relation Extraction The model treats relation extraction as binary classification, taking as input a set of 4 typed salient entity clusters.",
"For each entity cluster in the relation, per-section entity cluster representations are computed by taking the set of that entity's mentions in a given section, and max-pooling over the span embeddings of these mentions.",
"The four entity-section embeddings (one for each entity in the relation) are then concatenated and passed through a feedforward network to produce a relation-section embedding.",
"Then, the relation-section embeddings are averaged over all sections and passed through another feedforward network which returns a binary prediction.",
"Although citation network information has been shown to be effective in other tasks, few works have recently tried using it in SciIE systems.",
"One potential reason is the lack of a suitable dataset.",
"Thus, as a first contribution of this paper, we address this bottleneck by constructing a SciIE dataset that is annotated with citation graph information.",
"4 Specifically, we combine the rich annotations of SciREX with a source of citation graph information, S2ORC (Lo et al., 2020).",
"For each paper, S2ORC includes parsed metadata about which other papers cite this paper, which other papers are 4 We have released code to construct this dataset: https: //github.com/viswavi/ScigraphIE BiLSTM Title/Abstract TheIBM ... recurrentneuralnetworks Input Document Mention Identification Salient Entity Classification Coreference Clustering Relation Extraction Intro.",
"cited by this paper, and locations in the body text where reference markers are embedded.",
"To merge SciREX with S2ORC, we link records using metadata obtained via the Semantic Scholar API: 5 paper title, DOI string, arXiv ID, and Semantic Scholar Paper ID.",
"For each document in SciREX, we check against all 81M documents in S2ORC for exact matches on any of these identi-fiers, yielding S2ORC entries for 433 out of 438 documents in SciREX.",
"The final mapping is included in our repository for the community to use.",
"Though our work only used the SciREX dataset, our methods can be readily extended to other SciIE datasets (including those mentioned in 2.1) using our released software.",
"Statistics Examining the distribution of citations for all documents in the SciREX dataset (in Figure 3), we observe a long-tailed distribution of citations per paper, and a bell-shaped distribution of references per paper.",
"5 https://www.semanticscholar.org/ In addition to the 5 documents we could not match to the S2ORC citation graph, 7 were incorrectly recorded as containing no references and 5 others were incorrectly recorded as having no citations.",
"These errors are due to data issues in the S2ORC dataset, which relies on PDF parsers to extract information (Lo et al., 2020).",
"We now describe our citation-aware scientific IE architecture, which incorporates citation information into mention identification, salient entity classification, and relation extraction.",
"For each task, we consider two types of citation graph information, either separately or together: (1) structural information from the graph network topology and (2) textual information from the content of citing and cited documents.",
"The structure of the citation graph can contextualize a document within the greater body of work.",
"Prior works in scientific information extraction have predominantly used the citation graph only to analyze the content of citing papers, such as Cite-TextRank (Das Gollapalli and Caragea, 2014) and Citation TF-IDF (Caragea et al., 2014), which is described in detail in 4.2.2.",
"However, the citation graph can be used to discover relationships between non-adjacent documents in the citation graph; prior works struggle to capture these relationships.",
"Arnold and Cohen (2009) are the only prior work, to our knowledge, to explicitly use the citation graph's structure for scientific IE.",
"They predict key entities related to a paper via random walks on a combined knowledge-and-citation-graph consisting of papers and entities, without considering a document's content.",
"This approach is simple but cannot generalize to new or unseen entities.",
"A rich direction of recent work has studied learned representations of networks, such as social networks (Perozzi et al., 2014) and citation graphs (Sen et al., 2008; Yang et al., 2015; Bui et al., 2018; Khosla et al., 2021).",
"In this paper, we show citation graph embeddings can improve scientific information extraction.",
"Construction of Citation Graph To construct our citation graph, we found all nodes in the S2ORC citation graph within 2 undirected edges of any document in the SciREX dataset, including all edges between those documents.",
"This process took 10 hours on one machine due to the massive size of the full S2ORC graph, resulting in a graph with 1.1M nodes and 5M edges.",
"Network Representation Learning We learn representations for each node (paper) using DeepWalk 6 (Perozzi et al., 2014) via the GraphVite library (Zhu et al., 2019), resulting in a 128-dimensional graph embedding for each document in our dataset.",
"For each task, we incorporate the document-level graph embedding into that task's model component, by simply concatenating the document's graph embedding with the hidden state in that component.",
"We do not update the graph embedding values during training.",
"our CitationIE system culminates in a pair of feedforward networks.",
"Figure 4 describes this general 6 An empirical comparison by Khosla et al. (2021) found DeepWalk to be quite competitive on two citation graph node classification datasets, despite its speed and simplicity.",
"architecture, though the input to these networks varies from task to task (SciBERT-BiLSTM embeddings for mention identification, span embeddings for salient entity classification, and per-section relation embeddings for relation extraction).",
"This architecture gives two options for where to concatenate the graph embedding into the hidden state Stage 1 or Stage 2 marked with a light blue block in Figure",
"4. Intuitively, concatenating the graph embedding in a later stage feeds it more directly into the final prediction.",
"We find Stage 1 is superior for relation extraction, and both perform comparably for salient entity classification and mention identification.",
"We give details on this experiment in Appendix A.3.",
"Most prior work using the citation graph for SciIE has focused on using the text of citing papers.",
"We examine how to use two varieties of textual information related to citations.",
"Citation sentences, also known as citances (Nakov et al., 2004), provide an additional source of textual context about a paper.",
"They have seen use in automatic summarization (Yasunaga et al., 2019), but not in neural information extraction.",
"In our work, we augment each document in our training set with its citances, treating each citance as a new section in the document.",
"In this way, we incorporate citances into our CitationIE model through the shared text representations used by each task in our system, as shown in Figure",
"5. If our document has many citations, we randomly sample 25 to use.",
"For each citing document, we select citances centered on the sentence containing the first reference marker pointing to our document of interest, and include the subsequent and consequent sentences if they are both in the same section.",
"We ensure the mention identification step does not predict entities in citance sections, which would lead to false positive entities in downstream tasks.",
"Citation TF-IDF (Caragea et al., 2014), is a feature representing the TF-IDF value (Jones, 1972) of a given token in its document's citances.",
"We consider a variant of this feature: for each token in a document, we compute the TF-IDF of that token in each citance of the document, and average the per-citance TF-IDF values over all citances.",
"We imple-BiLSTM TheIBM ... recurrentneuralnetworks SciBERT [CITE]used recurrent Citation Sentence#1 Paper Content ... introducedby[CITE] Citation Sentence#25 Figure 5: Incorporating citances into the text representation extractor.",
"mented this feature only for saliency classification, as it explicitly reasons about the significance of a token in citing texts.",
"As a local token-level feature, it also does not apply naturally to relation extraction, which operates on entire clusters of spans.",
"We lastly consider using graph embeddings and citances together in a single model for each task.",
"We do this naively by including citances with the document's input text when first computing shared text features, and then concatenating graph embeddings into downstream task-specific components.",
"The ultimate product of our work is an end-to-end document-level relation extraction system, but we also measure each component of our system in isolation, giving end-to-end and per-task metrics.",
"All metrics, except where stated otherwise, are the same as described by Jain et al. (2020).",
"Mention Identification We evaluate mention identification with the average F1 score of classifying entities of each span type.",
"Salient Entity Classification Similar to Jain et al. (2020) we evaluate this task at the mention level and cluster level.",
"We evaluate both metrics on gold standard entity recognition inputs.",
"Relation Extraction This is the ultimate task in our pipeline.",
"We use its output and metrics to evaluate the end-to-end system, but also evaluate relation extraction separately from upstream components to isolate its performance.",
"We specifically consider two types of metrics: (1) Document-level : For each document, given a set of ground truth 4-ary relations, we evaluate a set of predicted 4-ary relations as a sequence of binary predictions (where a matching relation is a true positive).",
"We then compute precision, recall, and F1 scores for each document, and average each over all documents.",
"We refer to this metric as the document-level relation metric.",
"To compare with Jain et al. (2020), this is the primary metric to measure the full system.",
"(2) Corpus-level : When evaluating the relation extraction component in isolation, we are also able to use a more standard corpus-level binary classification evaluation, where each candidate relation from each document is treated as a separate sample.",
"We also run both these metrics on a binary relation extraction setup, by flattening each set of 4-ary relations into a set of binary relations and evaluating these predictions as an intermediate metric.",
"For each task, we compare against Jain et al. (2020), whose architecture our system is built on.",
"No other model to our knowledge performs all the tasks we consider on full documents.",
"For the 4-ary relation extraction task, we also compare against the DocTAET model (Hou et al., 2019), which is considered as state-of-the-art for full-text scientific relation extraction (Jain et al., 2020; Hou et al., 2019).",
"Significance To improve the rigor of our evaluation, we run significance tests for each of our proposed methods against its associated baseline, via paired bootstrap sampling (Koehn, 2004).",
"In experiments where we trained multiple models with different seeds, we perform a hierarchical bootstrap procedure where we first sample a seed for each model and then sample a randomized test set.",
"We build our proposed CitationIE methods on top of the SciREX repository 7 (Jain et al., 2020) in the AllenNLP framework (Gardner et al., 2018).",
"For each task, we first train that component in isolation from the rest of the system to minimize 7 https://github.com/allenai/SciREX Model F1 P R Salient Mention Evaluation Baseline (reported) 57.9 57.5 58.4 Baseline (reimpl.) 57.5 50.5 66.8 CitationIE w/ Citation-TF-IDF 57.1 50.2 66.1 w/ Citances 58.7 51.4 68.5 w/ Graph Embeddings 59.2 53.5 66.3 w/ Graph + Citance 58.4 51.3 67.8 Salient Entity Cluster Evaluation Baseline (reimpl.) 39.1 28.5 75.8 CitationIE w/ Citation-TF-IDF 38.6 28.4 74.3 w/ Citances 38.7 28.2 74.8 w/ Graph Embeddings 40.3 29.8 74.5 Table 1: Salient entity classification results.",
"the task-specific loss.",
"We then take the best performing modifications and use them to train end-to-end IE models to minimize the sum of losses from all tasks.",
"We train each model on a single GPU with batch size 4 for up to 20 epochs.",
"We include detailed training configuration information in Appendix A.1.",
"For saliency classification and relation extraction, we trained the baseline and the strongest proposed models three times, 8 to improve reliability of our results.",
"For mention identification, we did not retrain models, as the first set of results strongly suggested our proposed methods were not helpful.",
"Mention Identification For mention identification, we observe no major performance difference from using citation graphs, and include full results in Appendix A.2.",
"the results of our CitationIE methods.",
"We observe: (1) Using citation graph embeddings significantly improves the system with respect to the salient mention metric.",
"(2) Graph embeddings do not improve cluster evaluation significantly (at 95%) due to the small test 8 See Appendix A.1 for exact seeds used 9 Reported as Component-wise Binary and 4-ary Relations in Jain et al. (2020) size 10 (66 samples) and inter-model variation.",
"(3) Incorporating graph embeddings and citances simultaneously is no better than using either.",
"(4) Our reimplemented baseline differs from the results reported by Jain et al. (2020) despite using their published code to train their model.",
"This may be because we use a batch size of 4 (due to compute limits) while they reported a batch size of 50.",
"Relation Extraction Table 2 shows that using graph embeddings here gives an 11.5 point improvement in document-level F1 over the reported baseline, 11 and statistically significant gains on both corpus-level F1 metrics.",
"Despite seemingly large gains on the document-level F1 metric, these are not statistically significant due to significant inter-model variability and small test set size, despite the graph embedding model performing best at every seed we tried.",
"End-to-End Model From Table 3, we observe: (1) Using graph embeddings appears to have a positive effect on the main task of 4-ary relation extraction.",
"However, these gains are not statistically significant ( p = 0 . 235 ) despite our proposed method outperforming the baseline at every seed, for the same reasons as mentioned above.",
"(2) On binary relation evaluation, we observe smaller improvements which had a lower p-value ( p = 0 . 099 ) due to lower inter-model variation.",
"(3) Using citances instead of graph embeddings still appears to outperform the baseline (though by a smaller margin than the graph embeddings).",
"Do papers with few citations benefit from citation graph information?",
"Our test set only contains two documents with zero citations, so we cannot characterize performance on such documents.",
"However, Figure 6 shows that the gains provided by the proposed CitationIE model with graph embeddings counterintuitively shrink as the number of citations of a paper increases.",
"We also observe 10 The limited size of this test set is an area of concern when using the SciREX dataset, and improving statistical power in SciIE evaluation is a crucial area for future work.",
"11 The large gap between reimplemented and reported baselines is likely due to our reproduced results averaging over 3 random seeds.",
"When using the same seed used by Jain et al. (2020), the baseline's document-level test F1 score is almost 20 points better than with two other random seeds.",
"this with citances, to a lesser extent.",
"This suggests more work needs to be done to represent citation graph nodes with many edges.",
"relation extraction?",
"With relation extraction, we found citation graph information provides strongest gains when classifying relations between distant entities in a document, seen in Figure 7.",
"For each relation in the test set, we computed the average distance between pairs of entity mentions in that relation, normalized by total document length.",
"We find models with graph embeddings or citances perform markedly better when these relations span ( 0 , 70 ) ( 70 , 450 ) ( 450 , 12 K ) 0.0 0.2 0.4 0.6 0.8 1.0 D o c u m e n t -L e v e l F 1 Baseline w/ Graph ( 0 , 70 ) ( 70 , 450 ) ( 450 , 12 K ) Baseline w/ Citances Figure 6: Document-level relation extraction F1 score of CitationIE models with graph embeddings (left) and citances (right), compared with the baseline (red) on documents grouped by number of citations.",
"large swaths of text.",
"This is particularly useful since neural models still struggle to model long-range dependencies effectively (Brown et al., 2020).",
"contextualize important terms?",
"Going back to our motivating example of a speech paper referring to ImageNet in passing 1, we hypothesized that adding context from citations helps deal with terms that are important in general, but not for a given document.",
"To measure this, we grouped all entities in our test dataset by their global saliency rate measured on the test set: given a span, what is the probability that this span is salient in any given occurrence?",
"In Figure 8, we observe that most of the improvement from graph embeddings and citances comes at terms which are labeled as salient in at least 20% ( 0 . 0 , 0 . 29 ) ( 0 . 29 , 0 . 37 ) ( 0 . 37 , 0 . 4 ) ( 0 . 4 , 0 . 54 ) 0.0 0.2 0.4 0.6 0.8 1.0 C o r p u s L e v e l F 1 Baseline w/ Graph ( 0 . 0 , 0 . 29 )( 0 . 29 , 0 . 37 )( 0 . 37 , 0 . 4 )( 0 . 4 , 0 . 54 ) Baseline w/ Citances Figure 7: Corpus-Level F1 of relation extraction models, bucketed by the average distance between entity mentions in each relation.",
"of their training-set mentions.",
"This suggests that citation graph information yields improvements with reasoning about important terms, without negatively interfering with less-important terms.",
"We explore the use of citation graph information in neural scientific information extraction with CitationIE , a model that can leverage either the structure of the citation graph or the content of citing or cited documents.",
"We find that this information, combined with document text, leads to particularly strong improvements for salient entity classification and relation extraction, and provides an increase in end-to-end IE system performance over a strong baseline.",
"Our proposed methods reflect some of the simplest ways of incorporating citation graph information into a neural SciIE system.",
"As such, these results can be considered a proof of concept.",
"In the future we will explore ways to extract richer information from the graph using more sophisticated techniques, hopefully better capturing the interplay between citation graph structure and content.",
"Finally, we evaluated our proof of concept here on a single dataset in the machine learning domain.",
"While our methods are not domain-specific, verifying that these methods generalize to other scientific domains is important future work.",
"The authors thank Sarthak Jain for assisting with reproducing baseline results, Bharadwaj Ramachan-dran for giving advice on figures, and Siddhant Arora and Rishabh Joshi for providing suggestions on the paper.",
"The authors also thank the anonymous reviewers for their helpful comments.",
"This work was supported by the Air Force Research Laboratory under agreement number FA8750-19-2-0200.",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other"
] |
[
"We propose a novel constituency parsing model that casts the parsing problem into a series of pointing tasks.",
"Specifically, our model estimates the likelihood of a span being a legitimate tree constituent via the pointing score corresponding to the boundary words of the span.",
"Our parsing model supports efficient top-down decoding and our learning objective is able to enforce structural consistency without resorting to the expensive CKY inference.",
"The experiments on the standard English Penn Treebank parsing task show that our method achieves 92.78 F1 without using pre-trained models, which is higher than all the existing methods with similar time complexity.",
"Using pre-trained BERT, our model achieves 95.48 F1, which is competitive with the state-of-the-art while being faster.",
"Our approach also establishes new state-of-the-art in Basque and Swedish in the SPMRL shared tasks on multilingual constituency parsing.",
"Constituency or phrase structure parsing is a core task in natural language processing (NLP) with myriad downstream applications.",
"Therefore, devising effective and efficient algorithms for parsing has been a key focus in NLP.",
"With the advancements in neural approaches, various neural architectures have been proposed for constituency parsing as they are able to effectively encode the input tokens into dense vector representations while modeling the structural dependencies between tokens in a sentence.",
"These include recurrent networks (Dyer et al., 2016; Stern et al., 2017b) and more recently self-attentive networks (Kitaev and Klein, 2018).",
"The parsing methods can be broadly distinguished based on whether they employ a greedy transition-based algorithm or a globally optimized S She 1 VP enjoys 2 S-VP playing 3 tennis 4 .",
"Pointing Representation P ( T ) = { ( 1 (cid:41) 5 ,S), ( 2 (cid:41) 5 , ), ( 3 (cid:41) 4 ,S-VP), ( 4 (cid:41) 2 ,VP), ( 5 (cid:41) 1 ,S)",
"chart parsing algorithm.",
"The transition-based parsers (Dyer et al., 2016; Cross and Huang, 2016; Liu and Zhang, 2017) generate trees au-toregressively as a form of shift-reduce decisions.",
"Though computationally attractive, the local decisions made at each step may propagate errors to subsequent steps which would suffer from exposure bias.",
"Chart parsing methods, on the other hand, learn scoring functions for subtrees and perform global search over all possible trees to find the most probable tree for a sentence (Durrett and Klein, 2015; Gaddy et al., 2018; Kitaev and Klein, 2018; Kitaev et al., 2019).",
"In this way, these methods can ensure consistency in predicting structured output.",
"The limitation, however, is that they run slowly at O ( n 3 ) or higher time complexity.",
"In this paper, we propose a novel parsing approach that casts constituency parsing into a series of pointing problems (Figure 1).",
"Specifically, our parsing model estimates the pointing score from one word to another in the input sentence, which represents the likelihood of the span covering those words being a legitimate phrase structure ( i.e., a subtree in the constituency tree).",
"During training, the likelihoods of legitimate spans are maximized using the cross entropy loss.",
"This enables our model to enforce structural consistency, while avoiding the use of structured loss that requires expensive O ( n 3 ) CKY inference (Gaddy et al., 2018; Kitaev and Klein, 2018).",
"The training in our model can be fully parallelized without requiring structured inference as in (Shen et al., 2018; Gomez and Vilares, 2018).",
"Our pointing mechanism also allows efficient top-down decoding with a best and worse case running time of O ( n log n ) and O ( n 2 ) , respectively.",
"In the experiments with English Penn Treebank parsing, our model without any pre-training achieves 92.78 F1, outperforming all existing methods with similar time complexity.",
"With pre-trained BERT (Devlin et al., 2019), our model pushes the F1 score to 95.48, which is on par with the state-of-the-art (Kitaev et al., 2019), while supporting faster decoding.",
"Our model also performs competitively on the multilingual parsing tasks in the SPMRL 2013/2014 shared tasks and establishes new state-of-the-art in Basque and Swedish.",
"We will release our code at https://ntunlpsg.github.io/project/parser/ptr-constituency-parser 2 Model Similar to Stern et al. (2017a), we view constituency parsing as the problem of finding a set of labeled spans over the input sentence.",
"Let S ( T ) denote the set of labeled spans for a parse tree T .",
"Formally, S ( T ) can be expressed as S ( T ) := { (( i t , j t ) , l t ) } |S ( T ) | t =1 for i t < j t (1) where |S ( T ) | is the number of spans in the tree.",
"Figure 1 shows an example constituency tree and its corresponding labeled span representation.",
"Following the standard practice in parsing (Gaddy et al., 2018; Shen et al., 2018), we convert the n -ary tree into a binary form and introduce a dummy label to spans that are not constituents in the original tree but created as a result of bina-rization.",
"Similarly, the labels in unary chains corresponding to nested labeled spans are collapsed into unique atomic labels, such as S-VP in Fig. 1.",
"Although our method shares the same span-based view with that of Stern et al. (2017a), our approach diverges significantly from their framework in the way we treat the whole parsing problem, and the representation and modeling of the spans, as we describe below.",
"In contrast to previous approaches, we cast parsing as a series of pointing decisions.",
"For each index i in the input sequence, the parsing model points it to another index p i in order to identify the tree span ( i, p i ) , where i (cid:54) = p i .",
"Similar to Pointer Networks (Vinyals et al., 2015a), each pointing mechanism is modeled as a multinomial distribution over the indices of the input tokens (or encoder states).",
"However, unlike the original pointer network where a decoder state points to an encoder state, in our approach, every encoder state h i points to another encoder state h p i .",
"In this paper, we generally use x (cid:41) y to mean x points to y .",
"We will refer to the pointing operation either as a function of the encoder states ( e.g., h i (cid:41) h p i ) or simply the corresponding indices ( e.g., i (cid:41) p i ).",
"They both mean the same operation where the pointing function takes the encoder state h i as the query vector and points to h p i by computing an attention distribution over all the encoder states.",
"Let P ( T ) denote the set of pointing decisions derived from a tree T by a transformation H , i.e., H : T P ( T ) .",
"For the parsing process to be valid, the transformation H and its inverse H (cid:48) which transforms P ( T ) back to T , should both have a one-to-one mapping property.",
"Otherwise, the parsing model may confuse two different parse trees with the same pointing representation.",
"In this paper, we propose a novel transformation that satisfies this property, as defined by the following proposition (proof provided in the Appendix).",
"Proposition 1 Given a binary constituency tree T for a sentence containing n tokens, the transformation H converts it into a set of pointing decisions P ( T ) = { ( i (cid:41) p i , l i ) : i = 1 , . . . , n 1; i (cid:54) = p i } such that (min( i, p i ) , max( i, p i )) is the largest span that starts or ends at i , and l i is the label of the nonterminal associated with the span.",
"To elaborate further, each pointing decision in P ( T ) represents a specific span in S ( T ) .",
"The pointing i (cid:41) p i is directional, while the span that it represents ( i (cid:48) , j (cid:48) ) is non-directional.",
"In other words, there may exist position i such that i > p i , Algorithm 1 Convert binary tree to Pointing Input: Binary tree T and its span representation S ( T ) Output: Pointing representation P ( T ) P ( T ) = [] (cid:46) Empty pointing list for each leaf i in T do node leaf i ( x, y ) ( i, i ) (cid:46) Initialize current span, x y l i (cid:46) Initialize label of current span while x = i or y = i do p i x + y i l i node.label (cid:46) The span's label node node.parent ( x, y ) node.span (cid:46) Span covered by node end while (cid:46) Until i is no longer start/end point push( P ( T ) , ( i (cid:41) p i , l i )) end for return P ( T ) while i (cid:48) < j (cid:48) i (cid:48) , j (cid:48) [1 , n ] .",
"In fact, it is easy to see that if the token at index i is a left-child of a subtree, the largest span involving i starts at i , and in this case i < p i and i (cid:48) = i, j (cid:48) = p i .",
"On the other hand, if the token is a right-child of a subtree, the respective largest span ends at position i , in which case i > p i and i (cid:48) = p i , j (cid:48) = i ( e.g., see 4 (cid:41) 2 in Figure 1).",
"In addition, as the spans in S ( T ) are unique, it can be shown that the pointing decisions in P ( T ) are also distinct from one another (see Appendix for a proof by contradiction).",
"Given such pointing formulation, for every constituency tree, there exists a trivial case (1 (cid:41) n, l 1 ) where p 1 = n and l 1 is generally S'.",
"Thus, to make our formulation more general with n inputs and n outputs and convenient for the method description discussed later on, we add another trivial case ( n (cid:41) 1 , l 1 ) .",
"With this generalization, we can represent the pointing decisions of any binary constituency tree T as: P ( T ) = { ( i (cid:41) p i , l i ) : i = 1 , . . . , n ; i (cid:54) = p i } (2) The pointing representation of the tree in Figure 1 is given at the bottom of the figure.",
"To illustrate, in the parse tree, the largest phrase that starts or ends at token 2 (enjoys') is the subtree rooted at ', which spans from 2 to 5.",
"In this case, the span starts at token 2.",
"Similarly, the largest phrase that starts or ends at token 4 (tennis') is the span enjoys playing tennis, which is rooted at VP'.",
"In this case, the span ends at token 4.",
"Algorithm 1 describes the procedure to convert a binary tree to its corresponding pointing representation.",
"Specifically, from each leaf token i , the algorithm traverses upward along the hierarchy until the non-terminal node that does not start or end with i .",
"In this way, the largest span starting or ending with i can be identified.",
"In the previous section, we described how to convert a constituency tree T into a sequence of pointing decisions P ( T ) .",
"We use this transformation to train the parsing model (described in detail in Sections 2.3 2.4).",
"During inference, given a sentence to parse, our decoder with the help of the parsing model predicts P ( T ) , from which we can construct the tree T .",
"However, not all sets of point-ings P ( T ) guarantee the generation of a valid tree.",
"For example, for a sentence with four (4) tokens, the pointing P ( T ) = { (1 (cid:41) 4 , l 1 ) , (2 (cid:41) 3 , l 2 ) , (3 (cid:41) 4 , l 3 ) , (4 (cid:41) 1 , l 1 ) } does not generate a valid tree because token 3' cannot belong to both spans (2 , 3) and (3 , 4) .",
"In other words, simply taking the arg max over the pointing distributions may not generate a valid tree.",
"Our approach to decoding is inspired by the span-based approach of Stern et al. (2017a).",
"In particular, to reduce the search space, we score for span identification (given by the pointing function) and label assignment separately.",
"where ( k (cid:41) i ) and ( k +1 (cid:41) j ) are the pointing scores (probabilities) for spans ( i, k ) and ( k +1 , j ) , respectively.",
"Note that the pointing scores are asymmetric , meaning that ( i (cid:41) j ) may not be equal to ( j (cid:41) i ) , because pointing from i to j is different from pointing from j to i .",
"This is different from previous approaches, where the score of a span is defined to be symmetric.",
"We build a tree for the input sentence by computing Eq.",
"3 recursively starting from the full sentence span (1 , n ) .",
"In the general case when i < k < j 1 , our pointing-based parsing model should learn to assign high scores to the two spans ( i, k ) and ( k + 1 , j ) , or equivalently the pointing decisions k (cid:41) i and k +1 (cid:41) j .",
"However, the pointing formulation described so far omits the trivial self-pointing decisions, which represent the singleton spans .",
"A singleton span is only created when the splitting decision splits an n -size span into a single-token span (singleton span) and a sub-span of size n 1 , i.e., when k = i or k = j 1 .",
"For instance, for the parsing process in Figure 2a, the splitting decision at the root span (1 , 5) results in a singleton span (1 , 1) and a general span (2 , 5) .",
"For this splitting decision, Eq.",
"3 requires the scores of (1 , 1) and (2 , 5) .",
"However, the set of pointing decisions P ( T ) does not cover the pointing for (1 , 1) .",
"This discrepancy can be resolved by modeling the singleton spans separately.",
"To achieve that, we rede-fine Eq.",
"3 as follows: s split ( i, k, j ) = sp ( i (cid:41) i ) + gp ( i +1 (cid:41) j ) if k = i gp ( j 1 (cid:41) i ) + sp ( j (cid:41) j ) if k = j 1 gp ( k (cid:41) i ) + gp ( k +1 (cid:41) j ) otherwise (5) where sp and gp respectively represent the scores for the singleton and general pointing functions (to be defined formally in Section 2.3).",
"Remark on structural consistency.",
"It is important to note that since the pointing functions are defined to have a global structural property ( i.e., the largest span that starts/ends with i ), our model inherently enforces structural consistency.",
"The pointing formulation of the parsing problem also makes the training process simple and efficient; it allows us to train the model effectively with simple cross entropy loss (see Section 2.4).",
"Label Assignment.",
"Label assignment of spans is performed after every split decision.",
"Specifi-cally, as we split a span ( i, j ) into two sub-spans ( i, k ) and ( k +1 , j ) which corresponds to the pointing functions of k (cid:41) i and k +1 (cid:41) j , we perform the label assignments for the two new sub-spans as l k = arg max l L gc ( l | k ) l k +1 = arg max l L gc ( l | k + 1) (6) where gc is the label classifier for any general (non-unary) span and L is the set of possible nonterminal labels.",
"Following Shen et al. (2018), we use a separate classifier uc for determining the labels of the unary spans, e.g., the first layer of labels NP, , . . . , NP, ) in Figure 2.",
"Also, note that the label assignment is done based on only the query vector (the encoder state that is used to point).",
"Figure 2 illustrates the top-down parsing process for our running example.",
"It consists of a sequence of pointing decisions (Figure 2a, top to bottom), which are then trivially converted to the parse tree (Figure 2b).",
"We also provide the pseu-docode in Algorithm 2.",
"Specifically, the algorithm finds the best split for the current span ( i, j ) using the pointing scores and pushes the newly created sub-spans into the FIFO queue Q .",
"The process terminates when there are no more spans to be split.",
"Similar to Stern et al. (2017a), our parsing algorithm has the worst and best case time complexities of O ( n 2 ) and O ( n log n ) , respectively.",
"We now describe the architecture of our parsing model: the sentence encoder, the pointing model and the labeling model.",
"Sentence Encoder.",
"Given an input sequence of n words X = ( x 1 , . . . , x n ) , we first embed each word x i to its respective vector representation e i as: e i = e char i + e word i + e pos i (7) where e char i , e word i , e pos i are respectively the character, word, and part-of-speech (POS) embeddings of the word x i .",
"Following Kitaev and Klein (2018), we use a character LSTM to compute the character embedding of a word.",
"We experiment with both randomly initialized and",
"pre-(a) Execution of pointing parsing algorithm",
"trained word embeddings.",
"If pretrained embeddings are used, the word embedding e word i is the summation of the word's randomly-initialized embedding and the pretrained embedding.",
"The POS embeddings ( e pos i ) are randomly initialized.",
"The word representations ( e i ) are then passed to a neural network based sequence encoder to obtain their hidden representations.",
"Since our method does not require any specific encoder, one may use any encoder model, such as Bi-LSTM (Hochreiter and Schmidhuber, 1997) or self-attentive encoder (Kitaev and Klein, 2018).",
"In this paper, unless otherwise specified, we use the self-attentive encoder model as our main sequence encoder because of its efficiency with parallel computation.",
"The model is factorized into content and position information in both the self-attention sub-layer and the feed-forward layer.",
"Details about this factorization process is provided in Kitaev and Klein (2018).",
"Pointing and Labeling Models.",
"The results of the aforementioned sequence encoding process are used to compute the pointing and labeling scores.",
"More formally, the encoder network produces a sequence of n latent vectors H = ( h 1 , . . . , h n ) for the input sequence X = ( x 1 , . . . , x n ) .",
"After that, we apply four (4) separate position-wise two-layer Feed-Forward Networks (FFN), formulated as FFN ( x ) = ReLU ( xW 1 + b 1 ) W 2 + b 2 , to transform H into task-specific latent representations for the respective pointing and labeling tasks.",
"Note that there is no parameter sharing between FFN gp , FFN sp , FFN gc and FFN uc .",
"The pointing functions are then modeled as the multinomial (or attention) distributions over the input indices for each input position i as follows.",
"gp ( i, k ) = exp( h gpi ( h gpk ) T ) (cid:80) n k =1 exp( h gpi ( h gpk ) T ) (10) sp ( i, k ) = exp( h spi ( h spk ) T ) (cid:80) nk =1 exp( h spi ( h spk ) T ) (11) For label assignment functions, we simply feed the label representations H gc = ( h gc 1 , . . . , h gcn ) and H uc = ( h uc 1 , . . . , h ucn ) into the respective softmax classification layers as follows.",
"gc ( l | i ) = exp( h gci w gcl ) (cid:80) | L g | l =1 exp( h gci w gcl ) (12) uc ( l | i ) = exp( h uci w ucl ) (cid:80) | L u | l =1 exp( h uci w ucl ) (13) where L g and L u are the set of possible labels for the general and unary spans respectively, w gcl and w ucl are the class-specific trainable weight vectors.",
"We train our parsing model by minimizing the total loss L total ( ) defined as:",
"where each individual loss is a cross entropy loss computed for the corresponding labeling or pointing task, and = { e , gp , sp , gc , uc } represents the overall model parameters; specifically, e denotes the encoder parameters shared by all components, while gp , sp , gc and uc denote the separate parameters catering for the four pointing and labeling functions, gp, sp, gc and uc , respectively.",
"To show the effectiveness of our approach, we conduct experiments on English and Multilingual parsing tasks.",
"For English, we use the standard Wall Street Journal (WSJ) part of the Penn Treebank (PTB) (Marcus et al., 1993), whereas for multilingual, we experiment with seven (7) different languages from the SPMRL 2013-2014 shared task (Seddah et al., 2013): Basque, French, German, Hungarian, Korean, Polish and Swedish.",
"For evaluation on PTB, we report the standard labeled precision (LP), labeled recall (LR), and labelled F1 computed by evalb 1 .",
"For the SPMRL datasets, we report labeled F1 and use the same setup in evalb as Kitaev and Klein (2018).",
"Setup.",
"We follow the standard train/valid/test split, which uses sections 2-21 for training, section 22 for development and section 23 for evaluation.",
"This gives 45K sentences for training, 1,700 sentences for development, and 2,416 sentences for testing.",
"Following previous studies, our model uses POS tags predicted by the Stanford tagger (Toutanova et al., 2003).",
"For our model, we adopt the self-attention encoder with similar hyperparameter details proposed by Kitaev and Klein (2018).",
"The character embeddings are of 64 dimensions.",
"For general 1 http://nlp.cs.nyu.edu/evalb/ Model LR LP F1 Top-Down Inference Stern et al. (2017a) 93.20 90.30 91.80 Shen et al. (2018) 92.00 91.70 91.80 Our Model 92.81 92.75 92.78 CKY/Chart Inference Gaddy et al. (2018) -92.10 Kitaev and Klein (2018) 93.20 93.90 93.55 Other Approaches Gomez and Vilares (2018) -90.7 Liu and Zhang (2017) -91.8 Stern et al. (2017b) 92.57 92.56 92.56 Zhou and Zhao (2019) 93.64 93.92 93.78 Table 1: Results for single models (no pre-training) on the PTB WSJ test set, Section 23.",
"and unary label classifiers ( gc and uc ), the hidden dimension of the specific position-wise feed-forward networks is 250, while those for pointing functions ( gp and sp ) have hidden dimensions of 1024 .",
"Our model is trained using the Adam optimizer (Kingma and Ba, 2015) with a batch size of 100 sentences.",
"Additionally, we use 100 warm-up steps, within which we linearly increase the learning rate from 0 to the base learning rate of 0 .",
"008 .",
"Model selection for testing is performed based on the labeled F1 score on the validation set.",
"Results for Single Models.",
"The experimental results on PTB for the models without pre-training are shown in Table 1.",
"As it can be seen, our model achieves an F1 of 92 .",
"78 , the highest among the models using top-down inference strategies.",
"Specifically, our method outperforms Stern et al. (2017a) and Shen et al. (2018) by about 1 .",
"0 point in F1-score.",
"Notably, our model with LSTM encoder achieves an F1 of 92.26, which is still better than all the top-down parser methods.",
"On the other hand, while Kitaev and Klein (2018) and Zhou and Zhao (2019) achieve higher F1 score, their inference speed is significantly slower than ours because of the use of CKY based algorithms, which run at O ( n 3 ) time complexity for Kitaev and Klein (2018) and O ( n 5 ) for Zhou and Zhao (2019).",
"Furthermore, their training objectives involve the use of structural hinge loss, which requires online CKY inference during training.",
"This makes their training time considerably slower than that of our method, which is trained Model F1 Our model BERT BASE-uncased 95.34 Our model BERT LARGE-cased 95.48 Kitaev and Klein (2018) ELMO 95.13 Kitaev et al. (2019) BERT LARGE-cased 95.59 Table 2: Restuls on PTB WSJ test set with pretraining.",
"directly with span-wise cross entropy loss.",
"In addition, Zhou and Zhao (2019) uses external supervision ( head information) from the dependency parsing task.",
"Dependency parsing models, in fact, have a strong resemblance to the pointing mechanism that our model employs (Ma et al., 2018).",
"As such, integrating dependency parsing information into our model may also be beneficial.",
"We leave this for future work.",
"Results with Pre-training Similar to Kitaev and Klein (2018) and Kitaev et al. (2019), we also evaluate our models with BERT (Devlin et al., 2019) embeddings .",
"Following them in the inclusion of contextualized token representations, we adjust the number of self-attentive layers to 2 and the base learning rate to 0 .",
"00005 .",
"As shown in Table 2, our model achieves an F1 score of 95.48, which is on par with the state-of-the-art models.",
"However, the advantage of our method is that it is faster than those methods.",
"Specifically, our model runs at O ( n 2 ) worst-case time complexity, while that of Kitaev et al. (2019) is O ( n 3 ) .",
"Comparison on parsing speed is discussed in the following section.",
"Parsing Speed Comparison.",
"In addition to parsing performance in F1 scores, we also compare our parser against the previous neural approaches in terms of parsing speed.",
"We record the parsing timing over 2416 sentences of the PTB test set with batch size of 1, on a machine with NVIDIA GeForce GTX 1080Ti GPU and Intel(R) Xeon(R) Gold 6152 CPU.",
"This setup is comparable to the setup of Shen et al. (2018).",
"As shown in Table 3, our parser outperforms Shen et al. (2018) by 19 more sentences per second, despite the fact that our parsing algorithm runs at O ( n 2 ) worse-case time complexity while the one used by Shen et al. (2018) can theoretically run at O ( n log n ) time complexity.",
"To elaborate further, the algorithm presented in Shen et al.",
"(2018) can only run at O ( n 2 ) complexity.",
"To achieve O ( n log n ) complexity, it needs to sort the list of syntactic distances, which the provided code 2 does not implement.",
"In addition, the speed up for our method can be attributed to the fact that our algorithm (see Algorithm 2) uses a while loop , while the algorithm of Shen et al. (2018) has many recursive function calls.",
"Recursive algorithms tend to be less empirically efficient than their equivalent while/for loops in handling low-level memory allocations and function call stacks.",
"Setup.",
"Similar to the English PTB experiments, we use the predicted POS tags from external taggers (provided in the SPMRL datasets).",
"The train/valid/test split is reported in Table 6.",
"For single model evaluation, we use the identical hyper-parameters and optimizer setups as in English PTB.",
"For experiments with pre-trained models, we use the multilingual BERT (Devlin et al., 2019), which was trained jointly on 104 languages.",
"Results.",
"The results for the single models are reported in Table 4.",
"We see that our model achieves the highest F1 score in Basque and Swedish, which are higher than the baselines by 0 .",
"52 and 1 .",
"37 respective in F1.",
"Our method also performs competitively with the previous state-of-the-art methods on other languages.",
"Table 5 reports the performance of the models using pre-trained BERT.",
"Evidently, our method achieves state-of-the-art results in Basque and Swedish, and performs on par with the previous best method by Kitaev et al. (2019) in the other five languages.",
"Again, note that our method is considerably faster and easier to train than the 2 https://github.com/hantek/ distance-parser Model Basque French German Hebrew Hungarian Korean Polish Swedish (Anders Bjorkelund and Szanto, 2014) 88.24 82.53 81.66 89.80 91.72 83.81 90.50 85.50 (Coavoux and Crabbe, 2017) 88.81 82.49 85.34 89.87 92.34 86.04 93.64 84.0 (Kitaev and Klein, 2018) 89.71 84.06 87.69 90.35 92.69 86.59 93.69 84.45 Our Model 90.23 82.20 84.91 90.63 91.07 85.36 93.99 86.87 Table 4: SPMRL experiment single model test.",
"method of Kitaev et al. (2019).",
"Prior to the neural tsunami in NLP, parsing methods typically model correlations in the output space through probabilistic context-free grammars (PCFGs) on top of sparse (and discrete) input representations either in a generative regime (Klein and Manning, 2003) or a discriminative regime (Finkel et al., 2008) or a combination of both (Charniak and Johnson, 2005).",
"Beside the chart parser approach, there is also a long tradition of transition-based parsers (Sagae and Lavie, 2005) Recently, however, with the advent of powerful neural encoders such as LSTMs (Hochre-iter and Schmidhuber, 1997), the focus has been switched more towards effective modeling of correlations in the input's latent space, as the output structures are nothing but a function of the input (Gaddy et al., 2018).",
"Various neural network models have been proposed to effectively encode the dense input representations and correlations, and have achieved state-of-the-art parsing results.",
"To enforce the structural consistency, existing neural parsing methods either employ a transition-based algorithm (Dyer et al., 2016; Liu and Zhang, 2017; Kitaev and Klein, 2019) or a globally optimized chart-parsing algorithm (Gaddy et al., 2018; Kitaev and Klein, 2018).",
"Meanwhile, researchers also attempt to convert the constituency parsing problem into tasks that can be solved in alternative ways.",
"For instance, Fernandez-Gonzalez and Martins (2015) transform the phrase structure into a special form of dependency structure.",
"Such a dependency structure, however, requires certain corrections while converting back to the corresponding constituency tree.",
"Gomez and Vilares (2018) and Shen et al. (2018) propose to map the constituency tree for a sentence of n tokens into a sequence of n 1 labels or scalars based on the depth or height of the lowest common ancestors between pairs of consecutive tokens.",
"In addition, methods like (Vinyals et al., 2015b; Vaswani et al., 2017) apply the sequence-to-sequence framework to trans-late a sentence into the linearized form of its constituency tree.",
"While being trivial and simple, parsers of this type do not guarantee structural correctness, because the syntax of the linearized form is not constrained during tree decoding.",
"Our approach differs from previous work in that it represents the constituency structure as a series of pointing representations and has a relatively simpler cross entropy based learning objective.",
"The pointing representations can be computed in parallel, and can be efficiently converted into a full constituency tree using a top-down algorithm.",
"Our pointing mechanism shares certain similarities with the Pointer Network (Vinyals et al., 2015a), but is distinct from it in that our method points a word to another word within the same encoded sequence.",
"We have presented a novel constituency parsing method that is based on a pointing mechanism.",
"Our method utilizes an efficient top-down decoding algorithm that uses pointing functions for scoring possible spans.",
"The pointing formulation inherently captures global structural properties and allows efficient training with cross entropy loss.",
"With experiments we have shown that our method outperforms all existing top-down methods on the English Penn Treebank parsing task.",
"Our method with pre-training rivals the state-of-the-art method, while being faster than it.",
"On multilingual constituency parsing, it also establishes new state-of-the-art in Basque and Swedish.",
"We would like to express our gratitude to the anonymous reviewers for their insightful feedback on our paper.",
"Shafiq Joty would like to thank the funding support from his Start-up Grant (M4082038.020)."
] | [
"objective",
"abstain",
"objective",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other"
] |
[
"Automatic metrics are fundamental for the development and evaluation of machine translation systems.",
"Judging whether, and to what extent, automatic metrics concur with the gold standard of human evaluation is not a straightforward problem.",
"We show that current methods for judging metrics are highly sensitive to the translations used for assessment, particularly the presence of outliers, which often leads to falsely confident conclusions about a metric's efficacy.",
"Finally, we turn to pairwise system ranking, developing a method for thresholding performance improvement under an automatic metric against human judgements, which allows quantification of type I versus type II errors incurred, i.e., insignificant human differences in system quality that are accepted, and significant human differences that are rejected.",
"Together, these findings suggest improvements to the protocols for metric evaluation and system performance evaluation in machine translation.",
"Automatic metrics are an indispensable part of machine translation (MT) evaluation, serving as a proxy to human evaluation which is considerably more expensive and time-consuming.",
"They provide immediate feedback during MT system development and serve as the primary metric to report the quality of MT systems.",
"Accordingly, the reliability of metrics is critical to progress in MT research.",
"A particularly worrying finding was made in the most recent Conference on Machine Translation (WMT), as part of their annual competition findings to benchmark progress in translation and translation evaluation.",
"WMT has established a method based on Pearson's correlation coefficient for measuring how well automatic metrics match with human judgements of translation quality, which is used to rank metrics and to justify their widespread use in lieu of human evaluation.",
"Their findings (Ma et al., 2019) showed that if the correlation is computed for metrics using a large cohort of translation systems, typically very high correlations were found between leading metrics and humans (as high as r = 0 . 9 ).",
"However, if considering only the few best systems, the correlation reduced markedly.",
"This is in contrast to findings at sentence-level evaluation, where metrics are better at distinguishing between high-quality translations compared to low-quality translations (Fomicheva and Specia, 2019).",
"When considering only the four best systems, the automatic metrics were shown to exhibit negative correlations in some instances.",
"It would appear that metrics can only be relied upon for making coarse distinctions between poor and good translation outputs, but not for assessing similar quality outputs, i.e., the most common application faced when assessing incremental empirical improvements.",
"Overall these findings raise important questions as to the reliability of the accepted best-practises for ranking metrics, and more fundamentally, cast doubt over these metrics' utility for tuning high-quality systems, and making architecture choices or publication decisions for empirical research.",
"In this paper, we take a closer look into this problem, using the metrics data from recent years of WMT to answer the following questions: 1. Are the above problems identified with Pearson's correlation evident in other settings besides small collections of strong MT systems?",
"To test this we consider a range of system quality levels, including random samples of systems, and show that the problem is widely apparent.",
"2. What is the effect of outlier systems in the reported correlations?",
"Systems that are considerably worse than all others can have a disproportionate effect on the computed correlation, despite offering very little insight into the evaluation problem.",
"We identify a robust method for identifying outliers, and demonstrate their effect on correlation, which for some metrics can result in radically different conclusions about their utility.",
"3. Given these questions about metrics' utility, can they be relied upon for comparing two systems?",
"More concretely, we seek to quantify the extent of improvement required under an automatic metric such that the ranking reliably reflects human assessment.",
"In doing so, we consider both type I and II errors, which correspond to accepting negative or insignificant differences as judged by humans, versus rejecting human significant differences; both types of errors have the potential to stunt progress in the field.",
"Overall we find that current metric evaluation methodology can lend false confidence to the utility of a metric, and that leading metrics require either untenably large improvements to serve a gatekeeping role, or overly permissive usage to ensure good ideas are not rejected out of hand.",
"Perhaps unsurprisingly, we conclude that metrics are inadequate as a substitute for human evaluations in MT research.",
"1 2 Related work Since 2007, the Conference on Machine Translation (WMT) has organized an annual shared task on automatic metrics, where metrics are evaluated based on correlation with human judgements over a range of MT systems that were submitted to the translation task.",
"Methods for both human evaluation and meta evaluation of metrics have evolved over the years.",
"In early iterations, the official evaluation measure was the Spearman's rank correlation of metric scores with human scores (Callison-Burch and Osborne, 2006).",
"However, many MT system pairs have very small score differences, and evaluating with Spearman's correlation harshly penalises metrics that have a different ordering for these systems.",
"This was replaced by the Pearson correlation in 2014 (Bojar et al., 2014).",
"To test whether the difference in the performance of two metrics is statis-1 Code, data and additional analysis available at https://github.com/nitikam/tangled tically significant, the William's test for dependent correlations is used (Graham and Baldwin, 2014), which takes into account the correlation between the two metrics.",
"Metrics that are not outperformed by any other metric are declared as the winners for that language pair.",
"Pearson's r is highly sensitive to outliers (Os-borne and Overbay, 2004): even a single outlier can have a drastic impact on the value of the correlation coefficient; and in the extreme case, outliers can give the illusion of a strong correlation when there is none, or mask the presence of a true relationship.",
"More generally, very different underlying relationships between the two variables can have the same value of the correlation coefficient (Anscombe, 1973).",
"2 The correlation of metrics with human scores is highly dependent on the underlying systems used.",
"BLEU (Papineni et al., 2002a) has remained mostly unchanged since it was proposed in 2002, but its correlation with human scores has changed each year over ten years of evaluation (2006 to 2016) on the EnglishGerman and GermanEnglish language pairs at WMT (Reiter, 2018).",
"The low correlation for most of 20062012 is possibly due to the presence of strong rule-based systems that tend to receive low BLEU scores (Callison-Burch and Osborne, 2006).",
"By 2016, however, there were only a few submissions of rule-based systems, and these were mostly outperformed by statistical systems according to human judgements (Bojar et al., 2016).",
"The majority of the systems in the last three years have been neural models, for which most metrics have a high correlation with human judgements.",
"BLEU has been surpassed by various other metrics at every iteration of the WMT metrics shared task.",
"Despite this, and extensive analytical evidence of the limitations of BLEU in particular and automatic metrics in general (Stent et al., 2005; Callison-Burch and Osborne, 2006; Smith et al., 2016), the metric remains the de facto standard of evaluating research hypotheses.",
"2 https://janhove.github.io/teaching/ 2016/11/21/what-correlations-look-like contains examples that clearly illustrate the extent of this phenomenon 3 Data 3.1 Direct Assessment (DA) Following Ma et al. (2019), we use direct assessment (DA) scores (Graham et al., 2017) collected as part of the human evaluation at WMT 2019.",
"Annotators are asked to rate the adequacy of a set of translations compared to the corresponding source/reference sentence on a slider which maps to a continuous scale between 0 and 100.",
"Bad quality annotations are filtered out based on quality control items included in the annotation task.",
"Each annotator's scores are standardised to account for different scales.",
"The score of an MT system is computed as the mean of the standardised score of all its translations.",
"In WMT 19, typically around 15002500 annotations were collected per system for language pairs where annotator availability was not a problem.",
"To assess whether the difference in scores between two systems is not just chance, the Wilcoxon rank-sum test is used to test for statistical significance.",
"Automatic metrics compute the quality of an MT output (or set of translations) by comparing it with a reference translation by a human translator.",
"For the WMT 19 metrics task, participants were also invited to submit metrics that rely on the source instead of the reference (QE .",
"In this paper, we focus on the following metrics that were included in evaluation at the metrics task at WMT 2019: Baseline metrics BLEU (Papineni et al., 2002b) is the precision of n -grams of the MT output compared to the reference, weighted by a brevity penalty to punish overly short translations.",
"BLEU has high variance across different hyper-parameters and pre-processing strategies, in response to which sacreBLEU (Post, 2018) was introduced to create a standard implementation for all researchers to use; we use this version in our analysis.",
"TER (Snover et al., 2006) measures the number of edits (insertions, deletions, shifts and substitutions) required to transform the MT output to the reference.",
"CHRF (Popovi c, 2015) uses character n -grams instead of word n -grams to compare the MT output with the reference.",
"This helps with matching morphological variants of words.",
"YISI -1 (Lo, 2019) computes the semantic similarity of phrases in the MT output with the reference, using contextual word embeddings (BERT: Devlin et al. (2019)).",
"ESIM (Chen et al., 2017; Mathur et al., 2019) is a trained neural model that first computes sentence representations from BERT embeddings, then computes the similarity between the two strings.",
"3 Source-based metric YISI -2 (Lo, 2019) is the same as YISI -1, except that it uses cross-lingual embeddings to compute the similarity of the MT output with the source.",
"The baseline metrics, particularly BLEU, were designed to use multiple references.",
"However, in practice, they have only have been used with a single reference in recent years.",
"4 Re-examining conclusions of Metrics Task 2019 4.1 Are metrics unreliable when evaluating high-quality MT systems?",
"In general, the correlation of reference-based metrics with human scores is greater than r = 0 .",
"8 for all language pairs.",
"However, the correlation is dependent on the systems that are being evaluated, and as the quality of MT increases, we want to be sure that the metrics evaluating these systems stay reliable.",
"To estimate the validity of the metrics for high-quality MT systems, Ma et al. (2019) sorted the systems based on their Direct Assessment scores, and plotted the correlation of the top N systems, with N ranging from all systems to the best four systems.",
"They found that for seven out of 18 language pairs, the correlation between metric and human scores decreases as we decrease N , and tends towards zero or even negative when N = 4 .",
"There are four language pairs (GermanEnglish, EnglishGerman, EnglishRussian, and English Chinese) where the quality of the best MT systems is close to human performance (Barrault et al., 2019).",
"If metrics are unreliable for strong MT systems, we would expect to see a sharp degradation in correlation for these language pairs.",
"But as 3 ESIM's submission to WMT shared task does not include scores for the language pairs en-cs and en-gu.",
"In this paper, we use scores obtained from the same trained model that was used in the original submission.",
"we look at the top N systems, the correlation decreases for GermanEnglish and EnglishGerman, stays the same for EnglishRussian, and actually increases for EnglishChinese.",
"On the other hand, we observe this phenomenon with EnglishKazakh, where the top systems are far from the quality of human translation.",
"Is there another explanation for these results?",
"Pearson's r between metrics and DA scores is unstable for small samples, particularly when the systems are very close in terms of quality.",
"The low correlation over topN systems (when N is small) could be an artefact of this instability.",
"To understand this effect, we instead visualise the correlation of a rolling window of systems, starting with the worst N systems, and moving forward by one system until we reach the top N systems.",
"The number of systems stays constant for all points in these graphs, which makes for a more valid comparison than the original setting where the sample size varies.",
"If the metrics are indeed less reliable for strong systems, we should see the same pattern as with the top N systems.",
"For the GermanEnglish language pair (Figure 1",
"b), the correlation of most metrics is very unstable when N = 4 .",
"Both BLEU and CHRF perfectly correlate with human scores for systems ranked 25, which then drops to 1 for the top 4 systems.",
"On the other hand, ESIM exhibits the opposite behaviour, even though it shows an upward trend when looking at the topN systems.",
"Even worse, for EnglishGerman, YISI -2 obtains a perfect correlation at some values of N , when in fact its correlation with human scores is negligible once outliers are removed (Section 4.2).",
"We observe similar behaviour across all language pairs: the correlation is more stable as N increases, but there is no consistent trend in the correlation that depends on the quality of the systems in the sample.",
"If we are to trust Pearson's r at small sample sizes, then the reliability of metrics doesn't really depend on the quality of the MT systems.",
"Given that the sample size is small to begin with (typically 1015 MT systems per language pair), we believe that we do not have enough data to use this method to assess whether metric reliability decreases with the quality of MT systems.",
"A possible explanation for the low correlation of subsets of MT systems is that it depends on how close these systems are in terms of quality.",
"In the extreme case, the difference between the DA scores of all the systems in the subset can be statistically insignificant, so metric correlation over these systems can be attributed to chance.",
"An outlier is defined as an observation (or subset of observations) which appears to be inconsistent with the remainder of the dataset (Barnett and Lewis, 1974).",
"Pearson's r is particularly sensitive to outliers in the observations.",
"When there are systems that are generally much worse (or much better) than the rest of the systems, metrics are usually able to correctly assign low (or high) scores to these systems.",
"In this case, the Pearson correlation can over-estimate metric reliability, irrespective of the relationship between human and metric scores of other systems.",
"Based on a visual inspection, we can see there are two outlier systems in the EnglishGerman language pair.",
"To illustrate the influence of these systems on Pearson's r , we repeatedly subsample ten systems from the 22 system submissions (see Figure 2).",
"When the most extreme outlier ( en-de-task ) is present in the sample, the correlation of all metrics is greater than 0.97.",
"The selection of systems has a higher influence on the correlation when neither outlier is present, and we can see that YISI -1 and ESIM usually correlate much higher than BLEU.",
"One method of dealing with outliers is to calculate the correlation of the rest of the points (called the skipped correlation: Wilcox (2004)).",
"Most of these apply methods to detect multivariate outliers in the joint distribution of the two variables: the 0 .",
"metric and human scores in our case.",
"However, multivariate outliers could be system pairs that indicate metric errors, and should not be removed because they provide important data about the metric.",
"Thus, we only look towards detecting univariate outliers based on human ratings.",
"One common method is to simply standardise the scores, and remove systems with scores that are too high or too low.",
"However, standardising depends on the mean and standard deviation, which are themselves affected by outliers.",
"Instead, we use the median and the Median Absolute Deviation (MAD) which are more robust (Iglewicz and Hoaglin, 1993; Rousseeuw and Hubert, 2011; Leys et al., 2013).",
"For MT systems with human scores s , we use the following steps to detect outlier systems: 1. Compute MAD, which is the median of all absolute deviations from the median MAD = 1 .",
"483 median ( | s median ( s ) | ) 2. compute robust scores: z = ( s median ( s )) / MAD 3. discard systems where the magnitude of z exceeds a cutoff (we use 2.5) Tables 1 and 2 show Pearson's r with and without outliers for the language pairs that contain outliers.",
"Some interesting observations, are as follows: 0 .",
"for language pairs like LithuanianEnglish and EnglishFinnish, the correlation between the reference based metrics and DA is high irrespective of the presence of the outlier; the correlation of BLEU with DA drops sharply from 0.85 to 0.58 for EnglishKazakh when outliers are removed; for EnglishGerman, the correlation of BLEU and TER appears to be almost as high as that of YISI -1 and ESIM.",
"However, when we remove the two outliers, there is a much wider gap between the metrics.",
"if metrics wrongly assign a higher score to an outlier (e.g. most metrics in GujaratEnglish), removing these systems increases correlation, and reporting only the skipped correlation is not ideal.",
"To illustrate the severity of the problem, we show examples from the metrics task data where outliers present the illusion of high correlation when the metric scores are actually independent of the human scores without the outlier.",
"For English German, the source-based metric YISI -2 correctly assigns a low score to the outlier en-de-task .",
"When this system is removed, the correlation is near zero.",
"At the other extreme, YISI -2 incorrectly assigns a very high score to a low-quality outlier in the EnglishRussian language pair, resulting in a strongly negative correlation.",
"When we remove this system, we find there is no association between metric and human scores.",
"In practice, researchers use metric scores to compare pairs of MT systems, for instance when claiming a new state of the art, evaluating different model architectures, or even in deciding whether to publish.",
"Basing these judgements on metric score alone runs the risk of making wrong decisions with respect to the true gold standard of human judgements.",
"That is, while a change may result in a significant improvement in BLEU, this may not be judged to be an improvement by human assessors.",
"Thus, we examine whether metrics agree with DA on all the MT systems pairs across all languages used in WMT 19.",
"Following Graham et al. (2014), we use statisti0 .",
"For human scores, we apply the Wilcoxon rank-sum test which is used by WMT when ranking systems.",
"We use the bootstrap method (Koehn, 2004) to test for statistical significance of the difference in BLEU between two systems.",
"YISI -1 and ESIM compute the system score as the average of sentence scores, so we use the paired t-test to compute significance.",
"Although CHRF is technically the macro-average of n -gram statistics over the entire test set, we treat this as a micro-average when computing significance such that we can use the more powerful paired t-test over sentence scores.",
"Figure 4 visualises the agreement between metric score differences and differences in human DA scores.",
"Ideally, only differences judged as truly significant would give rise to significant and large magnitude differences under the metrics; and when metrics judge differences to be insignificant, ideally very few instances would be truly significant.",
"However, this is not the case: there are substantial numbers of insignificant differences even for very high metric differences (cyan, for higher range bins); moreover, the NS category denoting an insignificant difference in metric score includes many human significant pairs (red and green, top bin).",
"Considering BLEU (top plot in Figure 2), for insignificant BLEU differences, humans judge one system to be better than the other for half of these system pairs.",
"This corresponds to a Type I error.",
"It is of concern that BLEU cannot detect these differences.",
"Worse, the difference in human scores has a very wide range.",
"Conversely, when the BLEU score is significant but in the range 03, more than half of these systems are judged to be insignificantly different in quality (corresponding to a Type II error).",
"For higher BLEU deltas, these errors diminish, however, even for a BLEU difference between 3 and 5 points, about a quarter of these system pairs are of similar quality.",
"This paints a dour picture for the utility of BLEU as a tool for gatekeeping (i.e., to define a minimum publishable unit' in deciding paper acceptance on empirical grounds, through bounding the risk of Type II errors), as the unit would need to be whoppingly large to ensure only meaningful improvements are accepted.",
"Were we seek to minimise Type I errors in the interests of nurturing good ideas, the thresh-BLEU TER chrF YiSi-1 ESIM BLEU TER chrF YiSi-1 ESIM 326 59 88 99 127 68 335 89 92 125 72 64 310 63 97 63 47 43 290 74 82 71 68 65 281 0 50 100 150 200 250 Figure 5: The agreement between metric errors over all 1362 system comparisons.",
"old would need to be so low as to be meaningless, effectively below the level required for acceptance of the bootstrap significance test.",
"The systems evaluated consist of a mix of systems submitted by researchers (mostly neural models) and anonymous online systems (where the MT system type is unknown).",
"Even when we restrict the set of systems to only neural models submitted by researchers, the patterns of Type 1 and Type 2 errors remain the same (figure omitted for space reasons).",
"TER makes similar errors: TER scores can wrongly show that a system is much better than another when humans have judged them similar, or even worse, drawn the opposite conclusion.",
"CHRF, YISI -1 and ESIM have fewer errors compared to BLEU and TER.",
"When these metrics mistakenly fail to detect a difference between systems, the human score difference is considerably lower than for BLEU.",
"Accordingly, they should be used in place of BLEU.",
"However the above argument is likely to still hold true as to their utility for gatekeeping or nurturing progress, in that the thresholds would still be particularly punitive or permissive, for the two roles, respectively.",
"Finally, Figure 5 looks at agreement between metric decisions when comparing MT systems.",
"As expected, when BLEU or TER disagree with CHRF, ESIM, or YISI -1, the former are more likely to be wrong.",
"BLEU and TER have an 80% overlap in errors.",
"The decisions of ESIM, a trained neural model, diverge a little more from the other metrics.",
"Overall, despite the variety of approaches towards the task, all five metrics have common biases: over half of all erroneous decisions made by a particular metric are made in common with all other metrics.",
"In this paper, we revisited the findings of the metrics task at WMT 2019, which flagged potential problems in the current best practises for assessment of evaluation metrics.",
"Pearson's correlation coefficient is known to be unstable for small sample sizes, particularly when the systems in consideration are very close in quality.",
"This goes some way to explaining the findings whereby strong correlations between metric scores and human judgements evaporate when considering small numbers of strong systems.",
"We show that the same can be true for any small set of similar quality systems, not just the top systems.",
"This effect can partly be attributed to noise due to the small sample size, rather than true shortcomings in the metrics themselves.",
"We need better methods to empirically test whether our metrics are less reliable when evaluating high quality MT systems.",
"A more serious problem, however, is outlier systems, i.e. those systems whose quality is much higher or lower than the rest of the systems.",
"We found that such systems can have a disproportionate effect on the computed correlation of metrics.",
"The resulting high values of correlation can then lead to to false confidence in the reliability of metrics.",
"Once the outliers are removed, the gap between correlation of BLEU and other metrics (e.g. CHRF, YISI -1 and ESIM) becomes wider.",
"In the worst case scenario, outliers introduce a high correlation when there is no association between metric and human scores for the rest of the systems.",
"Thus, future evaluations should also measure correlations after removing outlier systems.",
"Finally, the same value of correlation coefficient can describe different patterns of errors.",
"Any single number is not adequate to describe the data, and visualising metric scores against human scores is the best way to gain insights into metric reliability.",
"This could be done with scatter plots (e.g. Figure 3a) for each language pair, or Figure 5, which compresses this information into one graph.",
"the real meaning encoded by a difference in metric score, in terms of what this indicates about human judgements of the two systems.",
"Most published work report BLEU differences of 1-2 points, however at this level we show this magnitude of difference only corresponds to true improvements in quality as judged by humans about half the time.",
"Although our analysis assumes the Direct Assessment human evaluation method to be a gold standard despite its shortcomings, our analysis does suggest that the current rule of thumb for publishing empirical improvements based on small BLEU differences has little meaning.",
"Overall, this paper adds to the case for retiring BLEU as the de facto standard metric, and instead using other metrics such as CHRF, YISI -1, or ESIM in its place.",
"They are more powerful in assessing empirical improvements.",
"However, human evaluation must always be the gold standard, and for continuing improvement in translation, to establish significant improvements over prior work, all automatic metrics make for inadequate substitutes.",
"To summarise, our key recommendations are: When evaluating metrics, use the technique outlined in Section 4.2 to remove outliers before computing Pearson's r .",
"When evaluating MT systems, stop using BLEU or TER for evaluation of MT, and instead use CHRF, YISI -1, or ESIM; Stop using small changes in evaluation metrics as the sole basis to draw important empirical conclusions, and make sure these are supported by manual evaluation.",
"We are grateful to the anonymous reviewers for their comments and valuable suggestions.",
"This work was supported in part by the Australian Research Council."
] | [
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"objective",
"objective",
"objective",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other"
] |
[
"Learning high-quality embeddings for rare words is a hard problem because of sparse context information.",
"Mimicking (Pinter et al., 2017) has been proposed as a solution: given embeddings learned by a standard algorithm, a model is first trained to reproduce embeddings of frequent words from their surface form and then used to compute embeddings for rare words.",
"In this paper, we introduce attentive mimicking : the mimicking model is given access not only to a word's surface form, but also to all available contexts and learns to attend to the most informative and reliable contexts for computing an embedding.",
"In an evaluation on four tasks, we show that attentive mimicking outperforms previous work for both rare and medium-frequency words.",
"Thus, compared to previous work, attentive mimicking improves embeddings for a much larger part of the vocabulary, including the medium-frequency range.",
"Word embeddings have led to large performance gains in natural language processing (NLP).",
"However, embedding methods generally need many observations of a word to learn a good representation for it.",
"One way to overcome this limitation and improve embeddings of infrequent words is to incorporate surface-form information into learning.",
"This can either be done directly (Wieting et al., 2016; Bojanowski et al., 2017; Salle and Villavicencio, 2018), or a two-step process is employed: first, an embedding model is trained on the word level and then, surface-form information is used either to fine-tune embeddings (Cotterell et al., 2016; Vulic et al., 2017) or to completely recompute them.",
"The latter can be achieved using a model trained to reproduce (or mimic ) the original embeddings (Pinter et al., 2017).",
"However, these methods only work if a word's meaning can at least partially be predicted from its form.",
"A closely related line of research is embedding learning for novel words , where the goal is to obtain embeddings for previously unseen words from at most a handful of observations.",
"While most contemporary approaches exclusively use context information for this task (e.g. Herbelot and Baroni, 2017; Khodak et al., 2018), Schick and Schutze (2019) recently introduced the form-context model and showed that joint learning from both surface form and context leads to better performance.",
"The problem we address in this paper is that often, only few of a word's contexts provide valuable information about its meaning.",
"Nonetheless, the current state of the art treats all contexts the same.",
"We address this issue by introducing a more intelligent mechanism of incorporating context into mimicking: instead of using all contexts, we learn by way of self-attention to pick a subset of especially informative and reliable contexts.",
"This mechanism is based on the observation that in many cases, reliable contexts for a given word tend to resemble each other.",
"We call our proposed architecture attentive mimicking (AM).",
"Our contributions are as follows:",
"(i) We introduce the attentive mimicking model.",
"It produces high-quality embeddings for rare and medium-frequency words by attending to the most informative contexts.",
"(ii) We propose a novel evaluation method based on VecMap (Artetxe et al., 2018) that allows us to easily evaluate the embedding quality of lowand medium-frequency words.",
"(iii) We show that attentive mimicking improves word embeddings on various datasets.",
"Methods to train surface-form models to mimic word embeddings include those of Luong et al.",
"(2013) (morpheme-based) and Pinter et al. (2017) (character-level).",
"In the area of fine-tuning methods, Cotterell et al. (2016) introduce a Gaussian graphical model that incorporates morphological information into word embeddings.",
"Vulic et al. (2017) retrofit embeddings using a set of language-specific rules.",
"Models that directly incorporate surface-form information into embedding learning include fastText (Bojanowski et al., 2017), LexVec (Salle and Villavicencio, 2018) and Charagram (Wieting et al., 2016).",
"While many approaches to learning embeddings for novel words exclusively make use of context information (Lazaridou et al., 2017; Herbelot and Baroni, 2017; Khodak et al., 2018), Schick and Schutze (2019)'s form-context model combines surface-form and context information.",
"Ling et al. (2015) also use attention in embedding learning, but their attention is within a context (picking words), not across contexts (picking con-texts).",
"Also, their attention is based only on word type and distance, not on the more complex factors available in our attentive mimicking model, e.g., the interaction with the word's surface form.",
"We briefly review the architecture of the form-context model (FCM), see Schick and Schutze (2019) for more details.",
"FCM requires an embedding space of dimensionality d that assigns high-quality embeddings v R d to frequent words.",
"Given an infrequent or novel word w and a set of contexts C in which it occurs, FCM can then be used to infer an embedding v ( w, C ) for w that is appropriate for the given embedding space.",
"This is achieved by first computing two distinct embeddings, one of which exclusively uses surface-form information and the other context information.",
"The surface-form embedding, denoted v form ( w, C ) , is obtained from averaging over a set of n -gram embeddings learned by the model; the context embedding v context ( w, C ) is obtained from averaging over all embeddings of context words in C .",
"The weighing coefficient is a function of both embeddings, modeled as = ( u (cid:62) [ v context ( w, C ) ; v form ( w, C ) ] + b ) with u R 2 d , b R being learnable parameters and denoting the sigmoid function.",
"FCM pays equal attention to all contexts of a word but often, only few contexts are actually suitable for inferring the word's meaning.",
"We introduce attentive mimicking (AM) to address this problem: we allow our model to assign different weights to contexts based on some measure of their reliabil-ity.",
"To this end, let C = { C 1 , . . . , C m } where each C i is a multiset of words.",
"We replace the context-embedding of FCM with a weighted embedding v context ( w, C ) = m (cid:88) i =1 ( C i , C ) v C i where v C i is the average of the embeddings of words in C i and measures context reliability.",
"To obtain a meaningful measure of reliability, our key observation is that reliable contexts typically agree with many other contexts.",
"Consider a word w for which six out of ten contexts contain words referring to sports.",
"Due to this high inter-context agreement, it is then reasonable to assume that w is from the same domain and, consequently, that the four contexts not related to sports are less informative.",
"To formalize this idea, we first define the similarity between two contexts as s ( C 1 , C 2 ) = ( Mv C 1 ) ( Mv C 2 ) (cid:62) d with M R d d a learnable parameter, inspired by Vaswani et al. (2017)'s scaled dot-product attention.",
"We then define the reliability of a context as ( C, C ) = 1 Z m (cid:88) i =1 s ( C, C i ) where Z = (cid:80) mi =1 (cid:80) mj =1 s ( C i , C j ) is a normalization constant, ensuring that all weights sum to one.",
"The model is trained by randomly sampling words w and contexts C from a large corpus and mimicking the original embedding of w , i.e., minimizing the squared distance between the original embedding and v ( w, C ) .",
"For our experiments, we follow the setup of Schick and Schutze (2019) and use the Westbury Wikipedia Corpus (WWC) (Shaoul and Westbury, 2010) for training of all embedding models.",
"To obtain training instances ( w, C ) for both FCM and AM, we sample words and contexts from the WWC based on their frequency, using only words that occur at least 100 times.",
"We always train FCM and AM on skipgram embeddings (Mikolov et al., 2013) obtained using Gensim ( Rehurek and Sojka, 2010).",
"Our experimental setup differs from that of Schick and Schutze (2019) in two respects:",
"(i) Instead of using a fixed number of contexts for C , we randomly sample between 1 and 64 contexts and",
"(ii) we fix the number of training epochs to 5.",
"The rationale behind our first modification is that we want our model to produce high-quality embeddings both when we only have a few contexts available and when there is a large number of contexts to pick from.",
"We fix the number of epochs simply because our evaluation tasks come without development sets on which it may be optimized.",
"To evaluate our model, we apply a novel, intrinsic evaluation method that compares embedding spaces by transforming them into a common space (4.1).",
"We also test our model on three word-level downstream tasks (4.2, 4.3, 4.4) to demonstrate its versatile applicability.",
"We introduce a novel evaluation method that explicitly evaluates embeddings for rare and medium-frequency words by downsampling frequent words from the WWC to a fixed number of occurrences.",
"1 We then compare gold skipgram embeddings obtained from the original corpus with embeddings learned by some model trained on the downsampled corpus.",
"To this end, we transform the two embedding spaces into a common space using VecMap (Artetxe et al., 2018), where we provide all but the downsampled words as a mapping dictionary.",
"Intuitively, the better a model is at inferring an embedding from few observations, the more similar its embeddings must be to the gold embeddings in this common space.",
"We thus measure the quality of a model by computing 1 The VecMap dataset is publicly available at https:// github.com/timoschick/form-context-model number of occurrences model 1 2 4 8 16 32 64 128 skipgram 8.7 18.2 30.9 42.3 52.3 59.5 66.7 71.2 fastText 45.4 44.3 45.7 50.0 55.9 56.7 62.6 67.7 Mimick 10.7 11.7 12.1 11.0 12.5 11.0 10.6 9.2 FCM 37.9 45.3 49.1 53.4 58.3 55.4 59.9 58.8 AM 38.0 45.1 49.6 53.7 58.3 55.6 60.2 58.9 FCM 32.3 36.9 41.9 49.1 57.4 59.9 67.3 70.1 AM 32.8 37.8 42.8 49.8 57.7 60.5 67.6 70.4 Table 1: Average cosine similarities for the VecMap evaluation, scaled by a factor of 100.",
"the average cosine similarity between its embeddings",
"embeddings and the gold embeddings.",
"As baselines, we train skipgram and fastText on the downsampled corpus.",
"We then train Mimick (Pinter et al., 2017) as well as both FCM and AM on the skipgram embeddings.",
"We also try a variant where the downsampled words are included in the training set (i.e., the mimicking models explicitly learn to reproduce their skipgram embeddings).",
"This allows the model to learn representations of those words not completely from scratch, but to also make use of their original embeddings.",
"Accordingly, we expect this variant to only be helpful if a word is not too rare, i.e. its original embedding is already of decent quality.",
"Table 1 shows that for words with a frequency below 32, FCM and AM infer much better embeddings than all baselines.",
"The comparably poor performance of Mimick is consistent with the observation of Pinter et al. (2017) that this method captures mostly syntactic information.",
"Given four or more contexts, AM leads to consistent improvements over FCM.",
"The variants that include downsampled words during training ( ) still outperform skipgram for 32 and more observations, but perform worse than the default models for less frequent words.",
"We follow the experimental setup of Rothe et al. (2016) and fuse Opinion lexicon (Hu and Liu,",
"2004) and the NRC Emotion lexicons (Moham-mad and Turney, 2013) to obtain a training set of words with binary sentiment labels.",
"On that data, we train a logistic regression model to classify words based on their embeddings.",
"For our evaluation, we then use SemEval2015 Task 10E where words are assigned a sentiment rating between 0 (completely negative) and 1 (completely positive) and use Spearman's as a measure of similarity between gold and predicted ratings.",
"We train logistic regression models on both skipgram and fastText embeddings and, for testing, replace skipgram embeddings by embeddings inferred from the mimicking models.",
"Table 2 shows that for rare and medium-frequency words, AM again outperforms all other models.",
"We use Yaghoobzadeh et al. (2018)'s name typing dataset for the task of predicting the fine-grained named entity types of a word, e.g., PRESIDENT and LOCATION for Washington.",
"We train a logistic regression model using the same setup as in 4.2 and evaluate on all words from the test set that occur 100 times in WWC.",
"Based on results in 4.1, where AM only improved representations for words occurring fewer than 32 times, we also try the variant AM+skip that, in testing, replaces v ( w, C ) with the linear combination v w = ( f w ) v ( w, C ) + (1 ( f w )) v w where v w is the skipgram embedding of w , f w is the frequency of w and ( f w ) scales linearly from 1 for f w = 0 to 0 for f w = 32 .",
"Table 3 gives accuracy and micro F1 for several word frequency ranges.",
"In accordance with results from previous experiments, AM performs drastically better than the baselines for up to 16 occurrences.",
"Notably, the linear combination of skipgram and AM achieves by far the best overall results.",
"The Chimeras (CHIMERA) dataset (Lazaridou et al., 2017) consists of similarity scores for pairs of made-up words and regular words.",
"CHIMERA provides only six contexts for each made-up word, so it is not ideal for evaluating our model.",
"Nonetheless, we can still use it to analyze the difference of FCM (no attention) and AM (using attention).",
"As the surface-form of the made-up words was constructed randomly and thus carries no meaning at all, we restrict ourselves to the context parts of FCM and AM (referred to as FCM-ctx and AM-ctx).",
"We use the test set of Herbelot and Baroni (2017) and compare the given similarity scores with the cosine similarities of the corresponding word embeddings, using FCM-ctx and AM-ctx to obtain embeddings for the made-up words.",
"Table 4 gives Spearman's for our model and various baselines; baseline results are adopted from Khodak et al. (2018).",
"We do not report results for Mimick as its representations for novel words are entirely based on their surface form.",
"While AM performs worse than previous methods for 24 sentences, it drastically improves over the best result currently published for 6 sentences.",
"Again, context attention consistently improves results: AM-ctx performs better than FCM-ctx, regardless of the number of contexts.",
"Since A La Carte (Khodak et al., 2018), the method performing best for 24 contexts, is conceptually similar to FCM, it most likely would similarly benefit from context attention.",
"While the effect of context attention is more pronounced when there are many contexts available, we still perform a quantitative analysis of one exemplary instance of CHIMERA to better understand what AM learns; we consider the made-up word petfel, a combination of saxophone and harmonica, whose occurrences are shown in Table 5.",
"The model attends most to sentences model 2 sent.",
"(2) and (4); consistently, the embeddings obtained from those sentences are very similar.",
"Furthermore, of all four sentences, these two are the ones best suited for a simple averaging model as they contain informative, frequent words like instru-ment, chimes and music.",
"We have introduced attentive mimicking (AM) and showed that attending to informative and reliable contexts improves representations of rare and medium-frequency words for a diverse set of evaluations.",
"In future work, one might investigate whether attention mechanisms on the word level (cf. Ling et al., 2015) can further improve the model's performance.",
"Furthermore, it would be interesting to investigate whether the proposed architecture is also beneficial for languages typologically different from English, e.g., morphologically rich languages.",
"This work was funded by the European Research Council (ERC #740516).",
"We would like to thank the anonymous reviewers for their helpful comments."
] | [
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other"
] |
[
"Part-of-Speech (POS) tags routinely appear as features in morphological tasks.",
"POS taggers are often one of the first NLP tools developed for low-resource languages.",
"However, as NLP expands to new languages it cannot assume that POS tags will be available to train a POS tagger.",
"This paper empirically examines the impact of POS tags on two morphological tasks with the Transformer architecture.",
"Each task is run twice, once with and once without POS tags, on otherwise identical data from ten well-described languages and five under-documented languages.",
"We find that the presence or absence of POS tags does not have a significant bearing on the performance of either task.",
"In joint segmentation and glossing, the largest average difference is an .09 improvement in F 1 -scores by removing POS tags.",
"In reinflection, the greatest average difference is 1.2% in accuracy for published data and 5% for unpublished data.",
"These results are indicators that NLP and documentary linguistics may benefit each other even when a POS tag set does not yet exist for a language.",
"Parts of speech (POS), also known as word classes or lexical categories, communicate information about a word, its morphological structure and inflectional paradigm, and its potential grammatical role in a clause.",
"POS tagging is a well-studied problem in NLP.",
"It is one of the first tasks undertaken for a new data set and a POS tagger is often one of the first NLP resources built for low-resource languages (Yarowsky and Ngai, 2001; Cox, 2010; De Pauw, 2012; Baldridge and Gar-rette, 2013; Duong, 2017; Anastasopoulos, 2019; Millour and Fort, 2019; Eskander et al., 2020b).",
"Although this priority on early POS tagging may be simply due to the relative ease of building a POS tagger, it seems to reflect an assumption that POS Figure 1: Average F 1 -scores on joint segmentation and glossing on interlinear glossed texts from fieldwork in five languages found that POS-tags have little and irregular impact.",
"tags simplify or improve other NLP tasks (Krauwer, 2003).",
"As far as we are aware, this assumption has not been methodically tested.",
"This paper examines the impact of POS tags on morphological learning, an important area for low-resource languages, many of which are more morphologically complex than English, Mandarin, or other large-resource languages.",
"Morphological learning can help reduce the out-of-vocabulary problem in morphologically complex languages, especially in low-resource settings.",
"Morphological learning also holds high priority in documentary and descriptive linguistics as a necessary foundation for further descriptive work.",
"We focus on two related tasks that involve morphological learning: joint morpheme segmentation/glossing and morphological reinflection.",
"Joint segmentation and glossing segments a word into its component morphemes and glosses the segments.",
"Reinflection gen-Figure 2: During reinflection generation of four interlinear field corpora and four cleaned versions of those corpora the presence or absence of POS tags does not make a significant or consistent difference in accuracy of inflected forms.",
"erates unseen inflected word forms from morphological features based on a language's inflectional patterns.",
"Since lexical categories (POS) are iden-tified partly by morphological structure, it seems reasonable to assume the reverse that knowing a word's part of speech makes it easier for a model to analyze its morphological structure.",
"For example, knowing that a word is a noun in English makes it extremely unlikely that a final substring (e)n could be a participial affix (e.g. oven -NOUN ; cf. driven -VERB ).",
"On the other hand, POS tags may be providing redundant information when, for example, an affix that marks a morphosyntactic feature is identical across all categories where that feature appears (e.g. the Russian morpheme /-i/ PL ' is identical for for plural nouns and plural verb agreement).",
"However, these hypotheses must be tested before claiming either one.",
"The impact of (not) having POS tags has perhaps not been examined closely in part because it seems safe to assume that POS tags or a POS tagger will be available.",
"However, as NLP expands its reach to new languages, POS tags may not be readily available.",
"In fact, the lexical categories present in the language may not even be described yet when data becomes available.",
"In documentary and descriptive linguistics, the description and tagging of lexical categories takes a relatively low priority compared to its place in NLP (cf.",
"Bird and Chiang (2012)'s workflow).",
"Yet interlinear glossed texts (IGT) are often the largest available annotated resource for a low-resource languageand sometimes the only available resource.",
"The impact of POS tags on computational morphology may hold implications for linguistic theory as well.",
"The nature of lexical categories (Rauh, 2010), the criteria for identifying them (Croft, 2000), and even their very reality as a universal property of language (Gil, 2005) are not entirely settled among linguists.",
"If the morphological structure of unseen words can be analyzed and generated without reference to lexical categories, then perhaps such categories should not be considered an inherent property of the lexicon (Rauh et al., 2016).",
"This paper describes experiments that were run on corpora differing only in the presence or absence of POS tags.",
"The results, which are generalized in Figures 1 and 2, indicate that POS tags do not have significant impact on computational morphological learning.",
"Section 2 presents related work in lexical categories, POS-tagging, segmentation and glossing, and (re)inflection.",
"Sections 3 and 4 describe the corpora and the NLP architecture used.",
"The segmentation and glossing task and results are presented in Section 5. The reinflection task and results are presented in Section 6. Implications of both experiments are discussed in Section 7. 2 Related Work Work on POS tagging has led to the development of several related resources in NLP and linguistics including numerous methods for automatic tagging (e.g. Kupiec (1992); Toutanova and Johnson (2008)) as well as tag sets.",
"The most popular tag set for English was developed by the Penn Treebank Project (Taylor et al., 2003).",
"A universal POS tag set was proposed by Petrov et al. (2012) and has been widely adopted.",
"It closely follows traditional linguistic conventions for common lexical categories as can be seen by comparing to the Leipzig Glossing Rules (Institute, 2008) which also has recommended tags for less common categories.",
"Many NLP models have been applied to segmentation and glossing of low-resource languages but they often tackle just one of the two tasks, e.g. segmentation only (Ruokolainen et al., 2014; Wang et al., 2016; Kann et al., 2018; Mager et al., 2020; Sorokin, 2019; Eskander et al., 2020a).",
"Automatic morpheme segmentation was introduced by Harris (1970) and much earlier segmentation research implemented unsupervised learning (Gold-smith, 2001; Creutz and Lagus, 2002; Poon et al., 2009).",
"Published linguistic descriptive data is used as training data usually after some preprocessing.",
"Glossing-only experiments make the assumption that data is already segmented into morphemes.",
"For example, McMillan-Major (2020) trained a conditional random field (CRF) model to produce a gloss line for several high-resource languages and three low-resource languages.",
"The low-resource language data came from interlinearized data that was polished for publication.",
"McMillan-Major (2020) and some other experiments such as Samardzic et al. (2015) use information from lines of interlinearized texts such as translation and POS tags.",
"Computational approaches to morphological inflection or reinflection have been developed by Durrett and DeNero (2013); Nicolai et al. (2015); Liu and Mao (2016); Cotterell et al. (2017); Kann and Schutze (2016); Aharoni and Goldberg (2017), etc.",
"Some of the work was developed as part of the SIGMORPHON Shared Tasks.",
"1 Our work partly replicates the CoNLL-SIGMORPHON reinflection shared tasks (Cotterell et al., 2016, 2017, 2018a).",
"Sequence-to-sequence neural network models have been very successful at handling the morphological (re)inflection task, even in low-resource conditions with model improvement designed to tackle the situation (Kann et al., 2017; Silfverberg et al., 2017; Sharma et al., 2018; Makarov and Clematide, 2018; Anastasopoulos and Neubig, 2019; Wu and Cotterell, 2019; Liu, 2021).",
"The Transformer (Vaswani et al., 2017a) is the model architecture which produces the current state-of-the-art performance on this task (Vylomova et al., 2020; Wu et al., 2020; Liu and Hulden, 2020b,a).",
"Therefore, we use the Transformer for all the experiments in this paper.",
"This paper is an expansion of a section in Moeller et al. (2020).",
"The experimental setup and SIGMORPHON languages are the same as that work, but it does not look at what happens when POS tags are available in the field data.",
"We expanded the re-inflection task to field corpora.",
"we also ran the SIGMORPHON experiments 5 times instead of one time.",
"The addition of the segmentation and glossing was inspired by Moeller and Hulden (2021).",
"We use published data in ten languages and unpublished data in five low-resource languages.",
"The published and unpublished data is used for the mor-1 https://sigmorphon.github.io/ sharedtasks/ Language POS Adyghe N, ADJ Arabic N, V, ADJ Basque V Finnish N, V, ADJ German N, ADJ Persian V Russian N, V, ADJ Spanish N, V Swahili N, V, ADJ Turkish N, V, ADJ Table 1: SIGMORPHON languages and the lexical categories found in the data.",
"phological reinflection but only the unpublished data for segmentation and glossing.",
"For the morphological reinflection task we use datasets that were released for the CoNLL-SIGMORPHON 2018 shared task 1 (Cotterell et al., 2018b).",
"We selected 10 languages that belong to different families and are typologically diverse with regards to morphology.",
"The languages and the inflected lexical categories available for the shared task are listed in Table 1. The language family and morphological typology for each language is available on the UniMorph official website.",
"2 Only the listed lexical categories were POS-tagged.",
"The manually-annotated interlinear glossed texts (IGT) were created in documentary and descriptive projects for five low-resource and under-documented languages.",
"The corpora represent a range of documentary field projects rather than a range of language typology, although they do represent three different language families on four continents.",
"It is difficult to find corpora of under-documented languages with (enough) POS tags to conduct our POS experiments precisely because of the low priority of POS-tagging in documentary and descriptive linguistics.",
"We were unable to use half of the field corpora available to us for this reason.",
"However, because we are interested in leveraging NLP for fieldwork, we felt it is important to work with the noisy field data, rather than use (often morphologically simpler) high-resource 2 https://unimorph.github.io Language Tokens POS-tagged Inflected Alas 4.5k 3845 86% 623 Lamkang 101k 46,557 46% n/a Lezgi 14k 13,636 96% 843 Manipuri 12k 2067 17% 3,260 Natugu 16.5k 10,994 66% 1,954 Table 2: The approximate total number token counts in the field data does not include multiple-word-expressions (when parsed as such) and ignores personal nouns and digits.",
"The corpora were compiled during projects that each had their own priorities and workflow and this resulted in the differing amounts of annotation shown in Table 2. 4 Only the tokens that were segmented, glossed, and POS-tagged could be used.",
"The POS tags were provided by the annotators.",
"For the reinflection task, the data was further limited to inflected forms.",
"The collection of inflected forms was automatically extracted and grouped based on the gloss of the root morpheme (noisy version).",
"We happened to have cleaned versions for the reinflection task and include those for the sake of completeness.",
"The cleaned versions were created from the noisy versions that had been checked by language experts.",
"5 It is worth noting that the Lamkang (used only for the segmentation and glossing study), Manipuri, and Natugu corpora are the result of many years of work and these extended projects eventually led to significant POS tagging.",
"Two other large and completely segmented/glossed corpora could not be included because the lexical categories had not been tagged.",
"The Lezgi project used POS tags at an early stage because the research was focused on verb tenses (Donet, 2014).",
"All POS tags in the smaller Alas corpus, and many in the Lezgi corpus, were added specifically for our research.",
"3 We investigated the Online Database of Interlinear Text (ODIN) since the AGGREGATION project at University of Washington has projected POS tags from English, but as yet, we have not found a corpus of comparative size to the smallest field corpus.",
"Perhaps because we focused on finding more polysynthetic languages in order to balance the diversity of morphological types and because preprocessing the ODIN format is time-consuming.",
"4 Rights holders gave informed consent to use the data for this research and links are provide to the corpora that are publicly available.",
"5 Inflection data available at: https://github.com/ LINGuistLIU/IGT Alas [btz] (Alas-Kluet, Batak Alas, Batak Alas-Kluet) is an Austronesian language spoken by 200,000 people on the Indonesian island of Sumatra (Eberhard et al., 2020).",
"Its morphology features reduplication, infixation, and circumfixation.",
"The POS set in the corpus is: ADJ , ADV , AUX , CARDNUM , CLF , CONJ , COP , DEM , DISTRNUM , EXISTMRKR , INTERJ , N , NPROP , ORDNUM , PREP , PRO , PRT , QUANT , REFL , RELPRO , V , VD , VI , VT .",
"6 Lamkang [lmk] is a Northern Kuki-Chin language of the Tibeto-Burman family with an estimated 4 to 10 thousand speakers primarily in Manipur, India but also in Burma (Thounaojam and Chelliah, 2007).",
"Its morphology tends toward agglutination with many stem-stem patterns to signal syntactic categories.",
"The corpus is accessible through the Computational Resources for South Asian Languages (CoRSAL) digital archive at the University of North Texas.",
"7 The POS tag set is: ADN , ADVL , DEM , CONN , COORDCONN , COP , INTERJ , N , NPR , NUM , ORDNUM , POSTP , PRON , PTC , QUANT , SUBO , UNK , V , VC , VI , VT .",
"Lezgi [lez] (Lezgian) is a highly agglutinative language belonging to the Lezgic branch of the Nakh-Daghestanian (Northeast Caucasian) family.",
"It is spoken by over 400,000 speakers in Russia and Azerbaijan (Eberhard et al., 2020).",
"It features overwhelmingly suffixing agglutinative morphology.",
"The POS tag set is: ADJ , ADV , CARDNUM , CONN , COORDCONN , DEM , DET , INDFPRO , INTERJ , INTERROG , MSD , MULTIPNUM , N , NPROP , NUM , ORDNUM , PERS , POSS , POST , PREP , PRO , PROFORM , PRT , PTCP , RECP , SUBORDCONN , V , 6 All POS were used for the segmentation and glossing task.",
"Tags in boldface indicate POS that are inflected and were therefore used in the reinflection task.",
"Manipuri [mni] (Meitei, Meetei) is a Tibeto-Burman language spoken by nearly two million people, primarily in the state of Manipur, and is one of India's official languages.",
"It nonetheless has been classified as vulnerable to extinction (Mose-ley, 2010).",
"It is a tonal language with weakly suffixing, agglutinative morphology (Chelliah, 1997).",
"The corpus is at CoRSAL.",
"8 The POS set is: ADV , INTERJ , N , PROFORM , UNK , V .",
"Natugu [ntu] belongs to the Reefs-Santa Cruz group in the Austronesian family and is spoken by about 4,000 people in the Temotu Province of the Solomon Islands.",
"It has mainly agglutinative morphology with complex verb structures (Nss and Boerger, 2008).",
"The corpus is stored at SIL Language & Culture Archives.",
"9 The POS tags set is: A-D-P 2 , ADJ , ADV , CLAUSE , CONJ , DEM , DET , GEN , GERUND , INTERROG , INTJ , N , N",
".( KX . CL ) , NCOMP , NEG , NOM 1 , NP , NP ( COMP ), NPROP , NUM , ORD , PARTICLE , PCLF , PERSPRO , PHRASE , PN , POSSPRO , PREP , PRO , RPRN , SUBR , UNK , V , VI , VP , VT , Z-GERUND .",
"For simple comparisons, we chose a single neural model architecture for both tasks.",
"The tasks were trained with the Transformer (Vaswani et al., 2017b), the current state-of-the-art neural model architecture for morphological tasks (Vylomova et al., 2020; Liu and Hulden, 2020b).",
"We used the implementation of the Transformer model in the Fairseq toolkit (Ott et al., 2019) 10 with character-level transduction (Wu et al., 2020) for morphology learning in low-resource settings.",
"Following (Wu et al., 2020), we employ N = 4 layers for the encoder and the decoder, each with 4 self-attention heads.",
"The embedding size for the encoder and decoder is 256, and the hidden layer size is 1024.",
"We use a dropout rate of 0.3 for encoding and beam search with a width of 5 at decoding time.",
"The Adam algorithm (Kingma and Ba, 2014) ( 1 = 0 . 9 , 2 = 0 . 98 ) is used to optimize the cross entropy loss with label smoothing (Szegedy et al., 2016) of 0.1.",
"All models have been trained on an NVIDIA 8 https://digital.library.unt.edu/ explore/collections/MDR 9 https://www.sil.org/resources/search/ language/ntu 10 https://fairseq.readthedocs.io/en/ latest/ GP102 [TITAN Xp] GPU for 10k maximum updates with a batch size of 400.",
"The first study asks whether POS tags makes a significant impact on automated morpheme segmenting and glossing.",
"The experiment tests and compares two models on data that is identical except for the presence/lack of POS tags.",
"We chose morpheme segmentation and glossing because it is a high-priority and early step in documenting and describing new languages.",
"Segmenting words into morphemes and glossing (strictly translating) them is usually the first task undertaken after new data has been transcribed.",
"Therefore, it is important to study how to provide and improve automated assistance for field linguists.",
"Automatic systems could greatly benefit the analysis of endangered languages and combat the annotation bottleneck caused by current manual methods (Si-mons and Lewis, 2013; Holton et al., 2017; Seifart et al., 2018).",
"Although adding POS tagging as a high-priority task would add to that bottleneck, if the tags have a significant and positive impact on automating segmentation and glossing, then linguists may receive long-term benefits from the addition to their workflow.",
"Therefore, we explore the impact of POS tags at very low-resource settings and the impact of POS tags when a new field project takes time to tag some, but not all, tokens.",
"This is also why we chose noisy field corpora, rather than published, polished corpora which are not like the data that linguists typically work with.",
"We are interested in how POS tags influence segmentation and glossing in the earliest work with a new language.",
"Three Transformer models were trained.",
"The English example in (1) shows the input and output of models 1, 2, and 3. Model 1, shown in (1a), has no POS tags.",
"Models 2 and 3 have a POS tags, as shown in (1b).",
"Model 2 has POS tags on every word but Model 3 includes POS tags only for some words, simulating projects unable to complete POS-tagging.",
"(1)",
"a. INPUT 1: t a x e s",
"b. INPUT 2/3 : t a x e s N",
"c. OUTPUT: tax#levy -es# PL Language 1% 3% 6.5% 10% 20% 30% 40% 100% Alas .00 .02 .02 .03 .05 .05 .04 -.09",
"All three models are trained on all the available training data.",
"Models 1 and 2 are also trained on different proportions of training data in order to simulate very small corpora.",
"These proportions of training data start at 1% and are gradually increased to 40% of available training data.",
"Even when POS tags are included in interlinear field data, it is rarely completed as Table 2 clearly indicates.In order to simulate this reality Model 3 was trained on all the available training data but the proportion of inputs with POS tags was gradually and randomly increased.",
"The training/development/test split is 8/1/1.",
"All models are trained and evaluated on a 10-fold cross-validation.",
"The folds were trained twice, once with and once without POS tags; no other changes were made to the data.",
"All folds were evaluated on a single, consistent held-out test set.",
"Since we wanted to simulate a realistic field situation where the system is segmenting and glossing newly transcribed but unannotated text, the test inputs do not include POS tags.",
"POS tags have no consistent positive or negative effect on automated segmentation and glossing in low-resource settings.",
"The overall impact of POS tags is not significant.",
"Table 3 shows the differences when F 1 -scores without POS tags are subtracted from the F 1 -scores with POS tags, with various amounts of training data.",
"The largest difference is just under .1 points.",
"A few interesting observations can be made that should be explored with more languages.",
"Manipuri shows the smallest differences overall; it also has the fewest POS-tagged words and the smallest tag set.",
"The largest differences are seen in the Alas and Lamkang corpora.",
"Alas also has a relatively small amount of POS-tagged words, but it has quite a large tag set.",
"As the size of the Alas training data increases, the impact of POS tags becomes more pronounced, suggesting that perhaps a relatively large POS tag set may have a greater effect on results in medium settings.",
"Lamkang has the largest amount of POS-tagged words, but of those, a significant number were tagged as UNK .",
"It is not clear whether the UNK tag is limited to categories that have not been fully analyzed or if it is a default tag that covers a diverse set of words.",
"The difference made by adding POS tags all but disappears when all the Lamkang data is trained, suggesting that a smaller data set is more impacted by a large tag set or inconsistent annotations.",
"Overall, increasing the number of POS tags in the training data has minimal impact.",
"Table 4 shows the F 1 -scores when the amount of POS tags in the data is gradually increased.",
"For example, at 30%, one of three random training instances have a POS tag.",
"In most cases, having incomplete POS-tagged data hurts performance compared to have POS tags on all words or none at all.",
"The system either performs worse, or, in the case of Lezgi, makes very small improvement (.0063 points).",
"Except for Lezgi, as more POS tags are added, the system tends to improve slightly but never matches the best scores.",
"The second study asks whether POS tags make a significant impact on learning inflectional patterns and generating unseen inflected forms.",
"We chose the morphological re-inflection task because it is easy to reproduce and to compare with the original SIGMORPHON shared task.",
"Eliciting and analyzing a language's inflectional patterns is a recommended next step after morpheme segmentation and glossing (Bird and Chiang, 2012).",
"The inflectional pattern of a lexeme or a lexical category is also known as a morphological paradigm.",
"Learning morphological paradigms can be viewed in terms of filling in, or generating, the missing forms of a paradigm table by generalizing over inflectional patterns (Ackerman et al., 2009; Ahlberg et al., 2014, 2015; Liu and Hulden, 2017; Malouf, 2017; Silfverberg et al., 2018; Silfverberg and Hulden, 2018).",
"The experiments in this section partly replicates the CoNLL-SIGMORPHON 2018 shared task 1 of morphological reinflection.",
"Reinflection consists of generating unknown inflected forms, given a related inflected form f ( (cid:96),(cid:126)t 1 ) and a target morphological feature vector (cid:126)t 2 .",
"Thus, it corresponds to learning the mapping f : T .",
"The goal is then to produce the inflected form f ( (cid:96),(cid:126)t 2 ) .",
"An inflected form is generated when the model is given a related inflected form and the target morphological features (which are essentially glosses of affixes) of the inflected form to be generated.",
"In previous work, POS tags have been included by default as part of the morphological features.",
"That is, they have been assumed to be helpful and to be available.",
"The models were trained on individual languages in three different data sets.",
"The first data set is the published Unimorph inflectional data in ten languages.",
"The second data set is inflected word forms extracted from unpublished IGT in four languages; the third is the clean, or corrected, versions of the second data set.",
"The Unimorph data was extracted from published data and is the clean-est.",
"Its inflected forms and morphological features were double-checked and the forms provided were selected to provide a balanced picture of the language's morphological structure.",
"The inflected forms extracted from the IGT contains only inflected forms attested in original texts which are transcribed samples of natural oral speech.",
"The noisy version was automatically grouped into paradigms based on the assumption that identical glosses of root morphemes signified the same lemma, and therefore the same morphological paradigm.",
"The clean data was made by asking language experts to examine the noisy data and regroup paradigms when root morphemes were incorrectly glossed.",
"They also corrected typos and morphological features that were incorrectly glossed.",
"For the Unimorph data, the original SIGMORPHON training/validation/test splits were kept.",
"The prepared medium setting of 1,000 training examples was used.",
"This setting was chosen because of the three possible settings (100, 1k and 10k), it is the closest in size to number of inflected word forms extracted from the four IGT corpora, which provided between 600 and 3,000 training examples.",
"An 8/1/1 training/development/tests split was used for the IGT data.",
"Five reinflection models with random seeds were trained on each data set.",
"All models were trained twice, once with and once without POS tags on the input.",
"Crosswise pairs were compared by subtracting the results with POS tags from the results without POS, giving 25 accuracy scores per language.",
"Figures 3 and 4 show the average and range of differences between the two.",
"The range of differences shows that POS tags do not have a consistently positive or negative impact.",
"Only two languages show a clear tendency to be impact in one way.",
"In Natugu, POS tags improve accuracy while in Adyghe, they decreases accuracy.",
"The average difference in accuracy on any data set is rarely more than 1 percentage point.",
"As the data becomes less polished, the impact of POS tags increases slightly and the range of differences grows noticeably.",
"The largest average difference ( 5 percentage points) seen in the noisy data from field IGT.",
"This indicates that time invested in polishing existing IGT data may give a better return than time spent on POS-tagging.",
"For the SIGMORPHON languages, the largest mean difference is barely over 2 points and for the clean IGT-extracted data the largest mean difference is about 3 points.",
"The number of language we used is not large but a few general observations can be made.",
"For both tasks the impact made by the presence or absence of POS tags is minimal.",
"Still, the best results with a small corpus are achieved when either all or no tokens are POS-tagged, at least for segmentation and glossing.",
"This suggests that having a completely tagged corpus is better than an incompletely tagged corpus, so perhaps limited annotation time might be better spent on more segmentation and glossing.",
"The size or specificity of the tag set may make a difference in the impact of POS tags.",
"When comparing the tag sets in the CoNLL-SIGMORPHON 2018 shared task data and the IGT from fieldwork, the difference in the number of lexical categories is significant.",
"The CoNLL-SIGMORPHON 2018 shared task data sets have at most three: noun ( N ), verb ( V ), and adjective ( ADJ ).",
"The IGT corpora have larger tag sets; for example, they may have tags for both finite verb form ( VF ) and non-finite forms ( VNF ).",
"The smallest IGT tag set has six categories (Manipuri).",
"That is twice as many POS tags as the SIGMORPHON languages, but still much smaller than the other corpora, which have over 20 unique tags.",
"However, the difference in results cannot be definitely attributed to tag set size.",
"The IGT tag sets are larger because the goal of descriptive work is to discover fine-grained categories, whereas the Unimorph data use more general categories which are common for language learning material or general dictionaries.",
"Similar fine-grained distinctions appear in the Penn Treebank tag set and are presumably useful for NLP tasks.",
"Future work could re-tag IGT with more general categories to test how the size and specificity of POS tags on small corpora impact these tasks.",
"This could be fruitful area of research because it might help us predict the usefulness of another linguistic category: the category of morphemes.",
"Morpheme-level categories are similar to POS tags but tagged for individual morphemes.",
"Interestingly, morpheme categories generally take higher priority than word-level tags in documentary and descriptive linguists and are therefore more often available in field data.",
"Consistency of annotation may be significant.",
"It is likely that the POS tags in the UNIMORPH data were added carefully and correctly, but the field data were likely tagged as the lexical categories were being discovered and described.",
"The differences in results between the two data sets may be due to these factors, but the differences are not huge.",
"So it seems possible that the effect of POS tags may be similar no matter how the POS tags are added.",
"A different approach to POS-tagging, such as training with context might affect results.",
"This possibility points to many future useful experiments.",
"We believe there may be many unresolved issues related to the way the POS tags were added or which POS tags were used.",
"One auxiliary task would be to project POS tags from the target language of the translated sentences that are usually available in IGT even before morpheme segmentation and glosses.",
"Also, metrics for annotation quality could be devised so that its impact is better understood.",
"Linguists need to know as they start annotation how best to perform their earliest analysis and annotation so that they gain optimal benefit from automated help later.",
"Finally, although a consistent impact by POS tags cannot be seen on morphological learning across all corpora, some corpora did show a more or less consistent impact from the presence or absence of POS tags.",
"Sometimes better results were achieved by removing POS tags, sometimes by adding them.",
"Reinflection in Adyghe and the clean version of Lezgi data tend to improve when POS tags are removed while Persian, Russian, and the noisy version of Natugu generally have more accurate results when POS tags are available.",
"In segmentation and glossing, Alas and Lamkang show in some settings nearly .1 points difference when POS tags are added and removed, respectively.",
"With these trends, a more interesting question for these corpora becomes When are POS tags helpful? and this should be explored further.",
"We conclude that the presence or absence of POS tags does not have a significant impact on two morphological learning tasks: segmentation and glossing, or reinflection.",
"No clear advantage is gained or lost from POS-tagging on low-resource data.",
"In segmentation and glossing, the greatest average difference is a loss of .09 F 1 -score when a large POS tag set is added to a small field corpus.",
"In reinflection, the overall tendency, though slight, is that accuracy decreases when POS tags are added.",
"The greatest average difference is 1.2 percentage points of accuracy for published data, 2.2 points for unpublished clean data, and 5 points for unpublished noisy data.",
"We hypothesize that POS tags do not have a significant impact on these tasks because the information provided by POS tags is implicitly learned.",
"These are, of course, not the only two tasks where POS tags could be leveraged for low-resource languages so we cannot make a definitive statement regarding the impact of POS tags in other NLP tasks with low-resource languages, particularly ones that more syntactic or semantic in nature.",
"Further methodical research needs to be done in order to produce a definitive analysis.",
"However, it does bring into question whether the development of POS taggers and POS tagging should be prioritized less.",
"Future work should explore how other tasks are impacted by POS tags.",
"The results might influence workflow priorities for documentary and descriptive linguists who want to receive benefit from, or give it to, NLP.",
"When a sophisticated POS tag set and POS taggers are available for a language, leveraging POS tags is trivial.",
"However, as NLP expands into a broader range of languages, the usefulness of POS tags may become an important question because documentary and descriptive linguistics does not currently place a high priority on lexical categories.",
"Discovering a language's lexical categories requires a detailed understanding of the language's syntaxsomething linguists do not always possess in the early stages of describing a new language."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Crowdsourcing has emerged as a popular approach for collecting annotated data to train supervised machine learning models.",
"However, annotator bias can lead to defective annotations.",
"Though there are a few works investigating individual annotator bias, the group effects in annotators are largely overlooked.",
"In this work, we reveal that annotators within the same demographic group tend to show consistent group bias in annotation tasks and thus we conduct an initial study on annotator group bias.",
"We first empirically verify the existence of annotator group bias in various real-world crowdsourcing datasets.",
"Then, we develop a novel probabilistic graphical framework GroupAnno to capture annotator group bias with an extended Expectation Maximization (EM) algorithm.",
"We conduct experiments on both synthetic and real-world datasets.",
"Experimental results demonstrate the effectiveness of our model in modeling annotator group bias in label aggregation and model learning over competitive baselines.",
"The performance of supervised machine learning algorithms heavily relies on the quality of the annotated training data.",
"Due to the heavy workload of annotation tasks, researchers and practitioners typically take advantage of crowdsourcing platforms to obtain cost-effective annotation data (Snow et al., 2008; Buhrmester et al., 2016).",
"However, the labels collected from multiple crowdsourcing annotators could be not consistent, since the expertise and reliability of the annotators are uncertain, and the task itself could be subjective and difficult.",
"In recent years, a lot of efforts from the machine learning community have been conducted to mitigate the effect of these noisy crowdsourcing labels (Zheng et al., 2017).",
"Various approaches have been proposed to model the quality (Liu et al., 2012; Aydin et al., 2014), confidence (Joglekar et al., 2013), Corresponding author.",
"expertise (Ma et al., 2015; Zheng et al., 2016), reliability (Li et al., 2019) of annotators; or model the difficulty of the tasks (Whitehill et al., 2009; Ma et al., 2015).",
"With such information, we can infer the truth label from the noisy labels more accurately and correspondingly train a more desirable model.",
"In terms of annotator modeling, existing studies mainly concentrated on factors like quality, confidence, expertise, etc., which could affect the annotation results.",
"Besides, the bias held by the annotators can also lead to defective annotations (Sap et al., 2019), which is, however, rarely studied.",
"In addition, studies in social science (Eagly, 2013) suggest that people from different demographic groups tend to apply different standards to evaluate the same thing due to their different experiences, which causes group bias.",
"We observe that annotators in different demographic groups tend to show different bias in annotation tasks.",
"For example, in a preliminary study, we examine the instances annotated by both two groups of annotators in the Wikipedia Toxicity dataset (Wulczyn et al., 2017).",
"We observe that native speakers of English rate 5 .",
"1% more comments as toxic than non-native speakers.",
"Similarly, annotators over 30 years old rate 2 .",
"5% more comments as toxic than younger annotators.",
"More details of the preliminary study can be found in Section 2.",
"Thus, a thorough investigation of such annotator group bias is desired.",
"Similar to existing studies, by considering the effect of annotator group bias, we have the potential to achieve a more accurate inference of true labels and train a better model.",
"Meanwhile, it is often hard to estimate the individual bias of one annotator with limited annotation data.",
"With annotator group bias as the prior knowledge, we can estimate the bias more effectively based on the demographic groups the annotator belongs to.",
"Thus, annotator group bias could mitigate the cold-start problem in modeling the annotator individual bias.",
"In this paper, we aim to study how to detect annotator group bias under text classification tasks, and how to mitigate the detrimental effects of annotator group bias on model training.",
"We face several challenges.",
"First, given noisy annotated data without the true labels, how should we detect the annotator bias?",
"We first make a comparison of the annotation results from different groups of annotators and find that there is a significant gap between them.",
"Then, we use two metrics sensitivity and specificity to measure the annotator bias, and conduct an analysis of variance (ANOVA) which demonstrates that the bias of each individual annotator shows obvious group effects in terms of its demographic attributes.",
"Second, how can we estimate the annotator group bias, and perform label aggregation and model training with the knowledge of annotator group bias?",
"Following the traditional probabilistic approaches for label aggregation (Raykar et al., 2010; Rodrigues and Pereira, 2018; Li et al., 2019), we propose a novel framework GroupAnno that models the production of annotations as a stochastic process via a novel probabilistic graphical model (PGM).",
"Inspired by the results of ANOVA, we assume that the bias of an annotator can be viewed as a superposition of the effects of annotator group bias and its individual bias.",
"We thereby extend the original PGM for label aggregation with additional variables representing annotator group bias.",
"By learning the PGM, we estimate the annotator group bias, infer the true labels, and optimize our classification model simultaneously.",
"Third, how can we learn this PGM effectively?",
"With the unknown true label as the latent variable, typical maximum likelihood estimation (MLE) method cannot be directly applied to estimate the parameters.",
"To address this challenge, we propose an extended EM algorithm for GroupAnno to effectively learn all the parameters in it, including the parameters of the classifier and the newly introduced variables for modeling annotator group bias.",
"We summarize our contributions in this paper as follows.",
"First, we propose metrics to measure the annotator group bias and verify its existence in real NLP datasets via an empirical study.",
"Second, we propose a novel framework GroupAnno to model the annotation process by considering the annotator group bias.",
"Third, we propose a novel extended EM algorithm for GroupAnno where we estimate the annotator group bias, infer the true labels, and optimize the text classification model simultaneously.",
"Finally, we conduct experiments on synthetic and real data.",
"The experimental results show that GroupAnno can accurately estimate the annotator group bias.",
"Also, compared with competitive baselines, GroupAnno can infer the true label more accurately, and learn better classification models.",
"In this section, we perform an empirical study to get a rudimentary understanding of annotator group bias.",
"We investigate the group annotator bias on three datasets that involve various text classification tasks.",
"These datasets are released in the Wikipedia Detox project (Wulczyn et al., 2017): Personal Attack Corpus, Aggression Corpus, and Toxicity Corpus where each instance is labeled by multiple annotators from the Crowdflower platform 1 .",
"For all the datasets, the demographic attributes of the annotators are collected.",
"The data statistics of the three Wikipedia Detox datasets, i.e. Personal Attack, Aggression, and Toxicity are shown in Table 1, where #Instances indicates the total number of instances in a dataset; and #Annotators denotes the total number of annotators.",
"The Personal Attack dataset and the Aggression dataset contain the same comments collected from English Wikipedia.",
"Each comment is labeled by around 10 annotators on two tasks, respectively.",
"The task of the former dataset is to determine whether the comment contains any form of personal attack, while the task of the latter dataset is to judge whether the comment is aggressive or not.",
"For each annotator, four demographic categories are collected: gender , age , language , and education .",
"Although the original dataset provides more fine-grained partitions, for simplicity, we divide the annotators into only two groups in terms of 1 https://www.crowdflower.com/ 1798 each demographic category 2 .",
"We consider two groups: male and female for gender , under 30 and over 30 for age , below bachelor and above bachelor (including bachelor) for education , and native and non-native speaker of English for language .",
"The toxicity dataset contains comments collected from the same source.",
"Similarly, each comment is labeled by around 10 annotators on whether it is toxic or not.",
"The toxicity dataset includes the same demographic information of the annotators as the former two datasets.",
"To investigate whether the annotators from different groups behave differently in annotation tasks, we first perform a comparison of the annotation results from different annotator groups.",
"For each demographic category, we collect the instances which are labeled by annotators from both groups, and report the proportion of instances that are classified as positive.",
"The results are shown in Table 2.",
"First, we note that there are obvious gaps between the annotations given by different annotator groups.",
"Second, given that the tasks of the three datasets are similar (i.e., all of them are related to detecting inappropriate speech), the annotation tendency of each annotator group is the same.",
"For example, young and non-native speaker annotators are less likely to annotate a comment as attacking, aggressive, or toxic.",
"Third, in terms of different demographic categories, the gaps between the annotations from the two groups are different.",
"For example, compared with other group pairs, the annotations provided by native speakers and non-native speakers are more different.",
"Analysis of Variance.",
"The results in Table 2 suggest that annotators show group bias in the annotation tasks, which is manifested in that different groups hold different evaluation criteria in the same task.",
"Specifically for classification tasks, different annotators are unevenly likely to label instances belonging from one class to another class.",
"In this paper, we only consider binary classification tasks for simplicity 3 .",
"Thus, we use sensitivity (true positive rate) and specificity (1 false positive rate) (Yerushalmy, 1947) to describe the bias of an individual annotator.",
"2 Based on our experiments, when considering more fine-grained groups, e.g. 18-30, 30-45 and 45-60 for age , the bias is also significant.",
"3 All our findings and the proposed framework can be trivially extended to the case of multi-way classification.",
"Next, we seek to verify the existence of annotator group bias.",
"We are interested in whether the demographic category of an individual annotator has a significant impact on its bias.",
"Thus, we first estimate the bias (i.e., sensitivity and specificity) of each individual annotator from its annotation data.",
"Since we don't have the true labels, we use majority vote labels as the true labels to approximately estimate the bias of each annotator.",
"Then, we perform an ANOVA (Scheffe, 1999) with the demographic category as the factors, the groups as the treatments, and the bias of an annotator as the response variable, to analyze the significance of the annotator's demographic groups against its own bias.",
"The corresponding statistical model can be expressed as: r = u + 1 ,g 1 r + + P,g Pr + r (1) where r indicates the bias of an individual annotator r ; u is the average bias of all annotators; p,g pr is the effect of the group g pr in terms of category p ; and r is the random error which follows a normal distribution with the mean value as 0.",
"To test whether category p has a significant impact on , we consider the null hypothesis H 0 p : p, 0 = p, 1 , which indicates that the demographic category p has no significant effect on the annotator bias.",
"In other words, there is no significant difference between the annotation behaviors of the two groups in terms of category p .",
"The results are shown in Table 3.",
"In the table, we report the inter-group sum of squares, which represent the deviation of the average group bias from the overall average bias.",
"We also use to denote the significance of the hypothesis tests.",
"We observe that in categories of gender, age and language, the two opposing groups show obvious different sensitivity and specificity in most cases.",
"Moreover, the ANOVA suggests that we are confident to reject the null hypotheses in these cases, which means that the above three demographic categories can affect the annotator bias significantly in different datasets.",
"Based on our observations, we conclude that the demographic attribute of an annotator can have a significant impact on its annotation behavior, and thereby, annotator group bias does exist.",
"In this section, we discuss our approaches for annotator group bias estimation, as well as bias-aware",
"label aggregation and model training.",
"We first introduce the metrics for measuring annotator group bias, and then present the problem statement.",
"Next, we detail GroupAnno , the probabilistic graphical model for modeling the production of annotations.",
"Finally, we describe our extended EM algorithm for learning the proposed model.",
"To measure the annotator bias in terms of demographic groups, we extend the definitions of sensitivity and specificity to the group scenario.",
"Formally, we define group sensitivity and group specificity of a group g in terms of category p as follows p,g = P r ( z = 1 | y = 1 , g pr = g ) p,g = P r ( z = 0 | y = 0 , g pr = g ) where y is the true label and z is the annotated label.",
"g pr = g represents that the annotator r belongs to group g in terms of demographic category p .",
"We use p = ( p, 0 , p, 1 , p, 0 , p, 1 ) to denote the bias parameters of demographic category p .",
"The bias parameters of all the P categories are denoted as = { p } Pp =1 .",
"Suppose that we have a dataset D = { x i , z 1 i , , z R i i } Ni =1 which contains N instances.",
"Each instance x i is annotated by R i different annotators, which results in labels z 1 i , , z R i i .",
"We also have an annotator set A = { ( g 1 r , , g P r ) } Rr =1 that records the demographic groups of a total of R annotators.",
"Here, g pr { 0 , 1 } indicates the group that the r -th annotator belongs to in terms of the p -th demographic category.",
"We consider P demographic categories for each annotator, and we have two groups (i.e., 0 and",
"1) for each category.",
"Given D and A , we seek to (1) estimate the annotator group bias ; (2) estimate the true label y i of each instance x i ; and (3) learn a classifier P w ( y | x ) which is parameterized by w .",
"Next, we introduce our GroupAnno to model the annotation process, and propose an extended EM algorithm to estimate the parameters = { w , } .",
"As shown in Figure 1, GroupAnno models the generation procedure of annotations as follows.",
"Given an instance x , its true label y is determined by an underlying distribution P w ( | x ) .",
"The distribution is expressed via a classifier with parameters w that we will learn.",
"Given the true label y , the annotated label z r from an annotator r is determined by its bias r = ( r , r ) .",
"For simplicity, in the following formulations, we use r to represent r or r .",
"In Section 2.2, we show that the annotator bias can be modeled by a superposition of the effects of annotator group bias with a random variable reflecting the annotator individual bias.",
"Thus, following Eq 1, we assume that the annotator bias of annotator r can be decomposed as r = u + 1 ,g 1 r + + P,g Pr + r To sum up, the parameters we introduced to model annotator bias are = { u } { p } Pp =1 { r } Rr =1 .",
"To estimate the parameters = { w , } , one way is to use maximum likelihood estimation.",
"Under the assumption that instances are sampled 1800 x y z r g 1 r w Classifier True Label Instance Annotated Label Annotator Bias g 2 r g Pr 1 2 P Annotator Group Bias Annotator Groups u r r Figure 1: An illustration of GroupAnno.",
"Therefore, the MLE parameters can be found by maximizing the log-likelihood",
"However, we cannot directly apply MLE to solve Eq 2, because there is an unknown latent variable (i.e. the true label y ) in the probabilistic graphical model.",
"Thus, we propose an extended EM algorithm to effectively estimate the parameters in GroupAnno.",
"Since the true label y i is an unknown latent variable, the log-likelihood term in Eq 2 can be decomposed as ln P ( D | ) = N (cid:88) i =1 ln[ P w ( y i = 1 | x i ) P ( z 1 i , , z R i i | y i = 1; ) + P w ( y i = 0 | x i ) P ( z 1 i , , z R i i | y i = 0; )] where = { r } Rr =1 and = { r } Rr =1 represent the collections of the sensitivity and the specificity of all the annotators.",
"We further assume that the annotations for one instance from different annotators are conditionally independent given their demographic attributes (Raykar et al., 2010).",
"Then we have ln P ( D | ) = N (cid:88) i =1 ln (cid:104) P w ( y i = 1 | x i ) R i (cid:89) r =1 P ( z ri | y i = 1; ) + P w ( y i = 0 | x i ) R i (cid:89) r =1 P ( z ri | y i = 0; ) (cid:105) = N (cid:88) i =1 ln[ p i a i + (1 p i ) b i ] (3) where we denote p i := P w ( y i = 1 | x i ) a i := R i (cid:89) r =1 P ( z ri | y i = 1; ) = R i (cid:89) r =1 z ri r (1 r ) 1 z ri b i := R i (cid:89) r =1 P ( z ri | y i = 0; ) = R i (cid:89) r =1 (1 r ) z ri 1 z ri r Note that due to the existence of the latent variable y i , Eq 3 contains the logarithm of the sum of two terms, which makes it very difficult to calculate its gradient w.r.t .",
"Thus, to solve the obstacle, we instead optimize a lower bound of ln P ( D | ) via an EM algorithm.",
"E-step.",
"Given the observation D and the current parameters , we calculate the following lower bound of the real likelihood ln P ( D | ) ln P ( D | ) E y [ln P ( D , y | )] = N (cid:88) i =1 i ln p i a i + (1 i ) ln(1 p i ) b i (4) where i = P ( y i = 1 | z 1 i , . . . , z Ri , x i , ) and it can be computed by the Bayes' rule i = a i p i a i p i + b i (1 p i ) (5) M-step.",
"The training algorithm is summarized in Algorithm 1.",
"We first initialize the posterior probability of the labels i based on majority voting (line 1).",
"Next, we perform the extended EM algorithm to update the model parameters iteratively.",
"In the E-step, we update i by Bayes' rule in Eq 5, and then 1801 calculate the expectation by Eq 4 (from lines 3 to 5).",
"Afterward, we perform the M-step, where the gradients of the conditional expectation w.r.t the model parameters are calculated, and the model parameters are updated through gradient ascent.",
"The iterative process is terminated when some specific stop requirements are satisfied.",
"In our implementation, we execute the EM optimization steps for a fixed number of epochs.",
"In this section, we evaluate the proposed method via comprehensive experiments.",
"We test our model on both synthetic and real-world data.",
"Through the experiments, we try to answer three research questions: (1) is our method able to accurately estimate the annotator group bias?",
"(2) can our method effectively infer the true labels?",
"and (3) can our approach learn more accurate classifiers?",
"We compare our proposed framework GroupAnno with eight existing true label inference methods (Zheng et al., 2017), including majority voting (MV), ZenCrowd (Demartini et al., 2012), Minimax (Zhou et al., 2012), LFC-binary (Raykar et al., 2010), CATD (Li et al., 2014a), PM-CRH (Aydin et al., 2014), KOS (Karger et al., 2011), and VI-MF (Liu et al., 2012).",
"Synthetic Data.",
"We first create two synthetic datasets on a simple binary classification task with 2-dimension features.",
"As shown in Figure 2, the instances in the datasets are in the shape of circle and moon, respectively.",
"In each dataset, we sample 400 instances for both classes.",
"We simulate 40 annotators with two demographic attributes.",
"We first randomly set the group bias for the two demographic attributes.",
"Then, based on our assumed distribution that has been verified in Section 2, we sample the bias for each annotator.",
"Finally, we suppose that each instance is labeled by 4 different annotators and simulate the annotations based on the sampled annotator bias.",
"With the knowledge of actual annotator group bias and true labels in synthetic data, we can verify the capability of the proposed framework in group bias estimation and truth label inference.",
"Wikipedia Detox Data.",
"We conduct experiments on all the three subsets (i.e. Personal Attack, Aggression, and Toxicity) of the public Wikipedia Detox dataset.",
"The details of this dataset are introduced in Section 2.1.",
"For the three subsets in the Wikipedia Detox Corpus, we use the training/test sets split by the publisher of the data (Wulczyn et al., 2017).",
"Since there is no available ground-truth label in this dataset, we pick up a subset of instances in the test set on which more than 80% annotations reach an agreement and treat the MV label as the ground-truth label.",
"These instances are less controversial, thus we are confident that the MV labels are true labels.",
"We report the performance of the models trained under various label inference approaches on this set.",
"Information Detection Data.",
"This dataset consists of text transcribed from conversations recorded in several in-person and virtual meetings.",
"Each text is assigned an information label which groups the text into three categories: give information (G), ask information (A), and other (O).",
"Five different data annotators classified the text into one of G, A, or O categories.",
"We conducted a survey to collect data on demographic characteristics of the annotators such as gender, race, and native speaker of English.",
"We convert the three categories into two classes by treating G and A as positive (i.e., information exchange) and O as negative (i.e., other).",
"There are 2,483 instances in total in this dataset.",
"After the annotation, we randomly select 762 instances and ask the annotators to discuss and reach an agreement on their labels.",
"We treat these labels as true labels.",
"We construct the training set with the remaining 1,721 instances without true labels, plus 430 of the instances with true labels.",
"Thus, we have 20% training data with true labels, on which 1802 we will report the truth inference performance.",
"The rest 332 instances with true labels make up our test set.",
"For text classification tasks on the Wikipedia Detox data and the Information Detection data, we employ an one-layer recurrent neural network (RNN) with gated recurrent units (GRUs) as the classifier.",
"In the RNN classifier, the word embedding size is set as 128 and the hidden size is set as 256.",
"The classifier is optimized by an Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001.",
"When modeling annotator group bias, we consider 1-2 demographic categories with the most significant group effects.",
"For the Personal Attack dataset and the Aggression dataset, we consider age and language.",
"For the Toxicity dataset, we consider gender.",
"For the Information Detection dataset, we consider language.",
"Group Bias Estimation.",
"In each of the synthetic datasets, we simulate the annotations based on presented annotator group bias.",
"We simulate two demographic attributes for each annotator, where there are two groups in terms of each attribute.",
"Thus, there are eight bias parameters to estimate: sensitivities p,g and specificities p,g , where p = 0 , 1 and q = 0 , 1 .",
"We compare the real values of the annotator group bias and the estimations from GroupAnno.",
"The results are shown in Table",
"4. We observe that the bias parameters are estimated accurately within an acceptable error range.",
"The results demonstrate the ability of our extended EM algorithm to estimate the parameters in GroupAnno.",
"Truth Label Inference.",
"The experimental results of truth label inference on synthetic data are shown in Table",
"5. In the table, we list the performance of different approaches on truth label inference.",
"We make the following observations.",
"First, MV performs the worst among all the methods.",
"In fact, a majority vote often does not mean the truth.",
"By explicitly modeling the annotation behaviors of the annotators, an algorithm can infer the true labels more accurately than the majority vote.",
"Second, the baselines Minimax and LFC-binary outperform other baselines.",
"LFC-binary leverages PGM to model the individual annotator bias for truth label inference, which achieves desirable performance.",
"Third, our framework GroupAnno fur-Table 4: Results of group bias estimation on the synthetic 2-dimensional datasets.",
"Real and Estimation indicate the real and the estimated values of the annotator group bias parameters.",
"ther improves the accuracy of truth label inference on the basis of LFC-binary, since GroupAnno finds and exploits the group annotator bias as additional information.",
"GroupAnno models the group annotator bias as prior information of the individual bias of each annotator so that individual bias can be estimated more accurately.",
"As a result, GroupAnno achieves the best performance on truth label inference.",
"The experimental results on the Wikipedia Detox datasets are shown in the left section of Table",
"6. For LFC-binary and GroupAnno, where truth label inference and model training are conducted simultaneously, we directly report the performance of the resulting model on the test set.",
"For other pure truth label inference approaches, we first infer the truth labels and then train the model on the inferred labels.",
"Finally, we report the performances of these models on the test set.",
"The results show that GroupAnno achieves better performances than the state-of-the-art methods, which demonstrates the effectiveness and superiority of our framework in practice.",
"The experimental results on the information detection dataset are shown in the right section of Table",
"6. Since we have 20% training data with available true labels, we first examine the accuracy of truth label inference of various methods on this part of the data, and then report the performance of the trained classifiers on the test data.",
"We find that our proposed method still outperforms all the baselines on both truth inference and resulting classifier performance, which further verifies the superiority of GroupAnno in real-world data.",
"Bias and fairness issues are crucial as machine learning systems are being increasingly used in sensitive applications (Chouldechova and Roth, 2018).",
"Bias is caused due to pre-existing societal norms (Friedman and Nissenbaum, 1996), data source, data labeling, training algorithms, and postprocessing models.",
"Data source bias emerges when the source distribution differs from the target distribution where the model will be applied (Shah et al., 2019).",
"Training algorithms can also introduce bias.",
"For example, if we train a model on data that contain labels from two populations a majority and a minority population minimizing overall error will fit only the majority population ignoring the minority (Chouldechova and Roth, 2018).",
"Data labeling bias exists when the distribution of the dependent variable in the data source diverges from the ideal distribution (Shah et al., 2019).",
"Many of these data labels are generated by human annotators, who can easily skew the distribution of training data (Dixon et al., 2018).",
"Various factors such as task difficulty, task ambiguity, amount of contextual information made available, and the expertise of the annotator determine annotation results (Joseph et al., 2017).",
"Prior literature studies various approaches to ensure the reliability of data annotations.",
"Demar-tini et al. (2012); Aydin et al. (2014) use worker probability to model the ability of an annotator to correctly answer a task, and some other works (Whitehill et al., 2009; Li et al., 2014b) introduce a similar concept, worker quality, by changing the value range from [0 , 1] to ( , + ) .",
"Welin-der et al. (2010) model the bias and variance of the crowdsourcing workers on numeric annotation tasks.",
"Moreover, Fan et al. (2015) and Ma et al. (2015) find that annotators show different qualities when answering different tasks, and thereby propose to model the diverse skills of annotators on various tasks.",
"Li et al. (2019) realize that annotators perform unevenly on each annotation instance, so they propose a novel method to model the instance-level annotator reliability for NLP labeling tasks.",
"Geva et al. (2019) use language generated by annotators to identify annotator identity and showed that annotator identity information improves model performance.",
"All these studies have been individual-focused and ignore group effects.",
"Our approach differs in that we study systemic bias associated with annotators of a specific demographic group.",
"In this work, we investigate the annotator group bias in crowdsourcing.",
"We first conduct an empirical study on real-world crowdsourcing datasets and show that annotators from the same demographic groups tend to show similar bias in the annotation tasks.",
"We develop a novel framework GroupAnno that considers the group effect of annotator bias, to model the whole annotation process.",
"To solve the optimization problem of the proposed framework, we propose a novel extended EM algorithm.",
"Finally, we empirically verify our approach on two synthetic datasets and four real-world datasets.",
"The experimental results show that our model can accurately estimate the annotator group bias, achieve more accurate truth inference, and also train better classifiers that outperform those learned under state-of-the-art true label inference baselines.",
"As future work, we plan to investigate the annotator group bias in tasks beyond classification such as regression tasks and text generation tasks.",
"This research is supported by the National Science Foundation (NSF) under grant num-bers IIS1714741, CNS1815636, IIS1845081, IIS1907704, IIS1928278, IIS1955285, IOS2107215, and IOS2035472.",
"Any opinions, findings, conclusions, or recommendations expressed in this material are those of the researchers and do not necessarily reflect the views of NSF.",
"This research is also supported by the Army Research Office (ARO) under grant number W911NF-21-1-0198, the Home Depot, Cisco Systems Inc, SNAP, and the Startup Funding at the University of Calgary."
] | [
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"method",
"abstain",
"result",
"objective",
"method",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"objective",
"objective",
"objective",
"result",
"result",
"objective",
"other",
"other",
"other"
] |
[
"Multi-modal techniques offer significant untapped potential to unlock improved NLP technology for local languages.",
"However, many advances in language model pre-training are focused on text, a fact that only increases systematic inequalities in the performance of NLP tasks across the world's languages.",
"In this work, we propose a multi-modal approach to train language models using whatever text and/or audio data might be available in a language.",
"Initial experiments using Swahili and Kinyarwanda data suggest the viability of the approach for downstream Named Entity Recognition (NER) tasks, with models pre-trained on phone data showing an improvement of up to 6% F1-score above models that are trained from scratch.",
"Preprocessing and training code will be uploaded to https://github.com/sil-ai/phone-it-in.",
"Pre-trained language models are increasingly applied in ways that are agnostic to targeted downstream tasks (Brown et al., 2020).",
"This usage has led to a proliferation of large language models trained on enormous amounts of data.",
"For example, the recent Megatron-Turing NLG 530B model was trained on the Pile, which includes 800GB+ of text (Gao et al., 2021), and other large language models utilize large portions of the 200TB+ common crawl data.",
"1 These large data sets include impressive amounts of text, but all languages are not represented equally (or at all) in that text.",
"The reality is that only a negligible fraction of the 7000+ currently spoken languages (Eberhard et al., 2021) have sufficient text corpora to train state-of-the-art language models.",
"This data scarcity results in systematic inequalities in the performance of NLP tasks across the world's languages (Blasi et al., 2021).",
"Local language communities that are working to develop and preserve their languages are producing diverse sets of data beyond pure text.",
"The Bloom Library project, 2 for example, is being used by local language communities to create and translate \"shell\" or \"template\" books into many languages (426 languages at the time this paper is being writ-ten).",
"However, Bloom allows users to do more than just translate text.",
"Users are also recording audio tracks and sign language videos, which has resulted in 1600+ oral translations.",
"Other examples showing the multi-modal nature of data in local languages include:",
"(i) the creation of ChoCo: a multimodal corpus of the Choctaw language (Brixey and Artstein, 2021);",
"(ii) SIL International's 50+ year effort to document endangered Austronesian languages via text, audio, and video (Quakenbush, 2007);",
"(iii) the grassroots Masakhane effort catalyzing the creation and use of diverse sets of African language data ( et al., 2020); and",
"(iv) work with the Me'phaa language of western Mexico that is producing digital recordings (video and audio) along with vocabulary, grammar and texts (Marlett and Weathers, 2018).",
"These diverse data sources are effectively unusable by traditional text-based NLP techniques.",
"In the light of data scarcity on these languages, they offer significant untapped potential to unlock improved NLP technology, if text data can be leveraged along with audio, image and video data.",
"Furthermore, flexible multi-modal technology such as this will make it easier to include diverse people and communities such as those described above within the NLP technology development process audio-based technology reducing the need for literacy, for example.",
"In this paper, we propose a multi-modal approach to train both language models and models for downstream NLP tasks using whatever text and/or audio data might be available in a language (or even in a related language).",
"Our method uti-2 https://bloomlibrary.org/ 5306 lizes recent advances in phone recognition and text/grapheme-to-phone transliteration to convert input audio and text into a common phonetic representation (the IPA phone inventory).",
"We then pre-train character-based language models in this phone-space.",
"Finally, we fine-tune models for downstream tasks by mapping text-based training data into the phonetic representation.",
"Thus, in addition to flexibility in pre-training, our method provides a way to reuse labeled text data for common NLP tasks, like Named Entity Recognition or Sentiment Analysis, in the context of audio inputs.",
"We demonstrate our phonetic approach by training Named Entity Recognition (NER) models for Swahili [ swh ] 3 using various combinations of Swahili text data, Swahili audio data, Kinyarwanda [kin] text data, and Kinyarwanda audio data.",
"These two languages both originate from from the same language family, Bantu, and are spoken by millions of people in Eastern Africa, often within the same country, resulting in some overlap in loan words, etc. 4 However, they are both considered low-resource languages.",
"Kinyarwanda in particular, though spoken by approximately 13-22 million people 5 , has very little text data available in that language, with fewer than 3,000 articles on the Kinyarwanda-language Wikipedia, and Swahili comparatively ahead but still poorly resourced at approximately 68,000 articles, far less than many European languages.",
"6 , though some datasets have been created such as KINNEWS (Niyongabo et al., 2020).",
"On the other hand, Kinyarwanda is uniquely placed as a language to leverage speech-based technologies, due to well-organized efforts 7 to collect voice data for that language.",
"It is in fact one of the largest subsets available on the Common Voice Dataset (Ardila et al., 2019), with 1,183 hours of voice clips collected and validated.",
"Choosing these two languages allowed us to test the use of the technique on legitimately low-resourced languages that could benefit from improved NLP technology, and which as part of the same family of languages 3 Language codes formatted according to ISO 639-3 standard: https://iso639-3.sil.org/ 4 see for example (Kayigema and Mutasa, 2021), which describes English loan words entering Kinyarwanda \"very often via Kiswahili\" 5 Sources vary: Ethnologue cites \"Total users in all countries: 13,133,980\", but there are 22 million according to WorldData.info (https://www.worlddata.info/languages/kinyarwanda.php).",
"6 https://meta.wikimedia.org/wiki/List_of_Wikipedias 7 https://foundation.mozilla.org/en/blog/how-rwanda-making-voice-tech-more-open/ might be similar enough in vocabulary, grammar, sound systems and so on, to benefit from cross-lingual training.",
"We find that simple NER models, which just look for the presence or absence of entities, can be trained on small amounts of data (around 2000 samples) in the phonetic representation.",
"Models trained for complicated NER tasks in the phonetic representation, which look for entities and their locations within a sequence, are improved (by up to 6+% in F1 score) through pre-training a phonetic language model using a combination of text and audio data.",
"We see this improvement when fine-tuning either a Swahili or Kinyarwanda language model for downstream Swahili tasks, which implies that one could make use of text and audio data in related languages to boost phonetic language model performance.",
"The utility of the method in data scarce scenarios and importance of pre-training depends on the complexity of the downstream task.",
"There have been a series of attempts to utilize phonetic representations of language to improve or extend automatic speech recognition (ASR) models.",
"Some of these jointly model text and audio data using sequences of phonemes combined with sequences of text characters.",
"Sundararaman et al. (2021), for example, uses a joint transformer architecture that encodes sequences of phonemes and sequences of text simultaneously.",
"However, this joint model is utilized to learn representations that are more robust to transcription errors.",
"The architecture still requires text inputs (from ASR transcriptions) and generates outputs in both text and phoneme representations.",
"In contrast, our approach allows for text input, audio input, or text plus audio input to language models.",
"Similarly, in (Chaudhary et al., 2018) and (Bharadwaj et al., 2016) investigate the potential of phoneme-based or phoneme aware representations and models, showing gains in performance, language transfer, and flexibility across written scripts.",
"These works conduct training on text-based data only, using Epitran to convert to phonemes.",
"Baevski et al. (2021) transforms unlabeled text (i.e., not aligned with corresponding audio files) into phonemes in a scheme to train speech recognition models without any labeled data.",
"This scheme involves a generator model trained jointly with a discriminator model.",
"The generator model converts 5307 audio, segmented into phonetic units into predicted phonemes, and the discriminator model attempts to discriminate between these predicted phonemes and the phonemes transliterated from unlabeled text.",
"Although both text and audio are utilized in this work, they are not input to the same model and the primary output of the training scheme is a model that creates good phonetic speech representations from input audio.",
"Outside of speech recognition focused work, Shen et al. (2020) (and other researchers cited therein) attempt to \"fuse\" audio and text at the word level for emotion recognition.",
"They introduce another architecture that internally represents both audio and text.",
"However, the so-called WISE framework relies on speech recognition to generate the text corresponding to audio frames in real-time.",
"The current work explicitly avoids reliance on speech recognition.",
"The 2021 Multimodal Sentiment Analysis (MuSe) challenge continues this vein of research integrating audio, video, text, and physiology data in an emotion recognition task (Stappen et al., 2021).",
"Contributions to this challenge, such as Vlasenko et al. (2021), introduce a variety of ways to \"fuse\" audio and text inputs.",
"However, these contributions are squarely focused on emotion/sentiment analysis and do not propose methods for flexible, phonetic language models.",
"Lakhotia et al. (2021) introduced functionality for \"textless\" NLP.",
"They explored the possibility of creating a dialogue system from only audio inputs (i.e., without text).",
"As part of this system, language models are directly trained on audio units without any text.",
"This advances the state-of-the-art with regard to self-supervised speech methods, but it does not provide the flexibility in audio and/or text language modeling introduced here.",
"Our approach is inspired by the fact that many languages are primarily oral, with writing systems that represent spoken sounds.",
"We convert both text and audio into single common representation of sounds, or \"phones,\" represented using the International Phonetic Alphabet, or IPA.",
"Then, we perform both language model pre-training and the training of models for downstream tasks in this phonetic representation.",
"Well-tested architectures, such as BERT-style transformer models (Vaswani et al., 2017), are thus flexibly extended to either speech or audio data.",
"Regarding the conversion process of text and audio data, we leverage recent advances to transliterate this data into corresponding sounds represented by IPA phonetic symbols.",
"This transliteration is possible for speech/audio data using tools such as the Allosaurus universal phone recognizer, which can be applied without additional training to any language (Li et al., 2020), though it can benefit from fine-tuning(Siminyu et al., 2021).",
"To convert text data to phonemes we can use tools such as the Epitran grapheme-to-phoneme converter (Mortensen et al., 2018), which is specifically designed to provide precise phonetic transliterations in low-resource scenarios.",
"Fig. 1 shows how downstream models for certain NLP tasks, like Named Entity Recognition (NER), are performed in the phonetic representation.",
"Labeled data sets for NLP tasks need to be mapped or encoded into the phonetic representation to train downstream models.",
"However, once this mapping is accomplished, models trained in the phonetic representation can perform tasks with audio input that are typically restricted to processing text input.",
"One complication arising from direct speech-to-phone transcription is the loss of word boundaries in the transcription.",
"This is expected, as natural speech does not put any pauses between the words in an utterance.",
"This does, however, result in mixing text data sets containing clear word boundaries with speech data sets containing no clear word boundaries.",
"Borrowing from techniques used on languages that do not indicate word boundaries by the use of whitespace, we address the problem by removing all whitespace from our data sets after phone transliteration.",
"We train character-based language models over the resulting data.",
"Character-based models such as CharFormer (Tay et al., 2021) or ByT5 (Xue et al., 2021) have shown promise in recent years for language modeling, even if this approach is known to have some trade offs related to shorter context windows.",
"The transliteration of text and audio data into phonetic representations presents several other challenges related to potential loss of information or injection of noise:",
"1. Loss of suprasegmental information : In some languages, meaning may be encoded through tones, or pitch changes across sounds (aka across segments, or \"suprasegmental\").",
"Particularly for tonal languages such as Mandarin Chinese [ cmn ], this loss can represent a significant informational loss particularly for homophones with different tones, as seen in (Am-rhein and Sennrich, 2020).",
"While IPA symbols can represent these intricacies, it adds complexity 2. Phone/phoneme differences : As noted in (Li et al., 2020), speech sounds which are physically different (different phones ), may be perceived as the same (one phoneme ) by speakers of one language, but these same sounds could perhaps be distinguished by speakers of another language.",
"For example, the French words words bouche , and bche contain phones (/u/ vs. /y/) which may sound \"the same\" to English speakers, but are semantically different to French speakers.",
"In other words, in English, both phones map to the same phoneme perceptually.",
"As the Allosaurus phone recognizer recognizes the actual phones/sounds, not their perceived phonemes, it would transcribe these two phones to different representations even for English speech.",
"This can be mitigated to an extent by customizing the output of Allosaurus on a per-language basis, see Sec. 4.3.",
"3. Simple errors in phone recognition : As noted in (Siminyu et al., 2021), even the best-trained Allosaurus models, fine-tuned on language-specific data, have a non-trivial Phone Error Rate (PER).",
"An important question, therefore, is whether these added sources of noise/information losses are outweighed by the potential benefits in terms of flexibility.",
"Does working in a phonetic representation cause a prohibitive amount of information loss?",
"We constructed our experiments and data sets in order to answer this question.",
"In order to evaluate the quality of learned phonetic representations, we transliterate several text and audio data sets in the Swahili [ swh ] language.",
"We pre-train phonetic language models on various combinations of these data sets and evaluate downstream performance on NER tasks.",
"See Fig. 2 for a detailed overview of these various combinations.",
"We refer to these combinations as denoted by downstream tasks (SNER for S wahili NER), and pre-training language (( K for K inyarwanda, S for S wahili) as well as data modality ( T for text, A for audio).",
"By way of example, the SNER+ST2 model results from pre-training using 2 s wh t ext datasets (ST2) and fine-tuning on the s wh NER (SNER) task, whereas the SNER+SAT model results from pre-training using s wh a udio and t ext data (SAT).",
"Kinyarwanda [ kin ] data is used in our experiments as a language related to the target language ( swh ) with existing text and audio resources that, in some ways, surpasses those available in the target language.",
"Thus, we pre-train some models on kin data while fine-tuning for the downstream NER task using swh data.",
"The NER1 task tries to determine the presence or absence of certain kinds of entities within an input.",
"For our task we use PER, ORG, DATE, and LOC entities.",
"The NER2 task additionally requires models to predict the correct numbers of these entities within an input.",
"Finally, the NER3 task requires models to determine entities at the correct locations with an input sequence of phones.",
"For all of these tasks, we first convert text data to phones using Epitran and audio data to phones using Allosaurus.",
"Then, we pre-train on various combinations of data, before fine-tuning on NER.",
"For swh pre-training data we use:",
"(i) the \"Lan-guage Modeling Data for Swahili\" dataset (Shikali and Refuoe, 2019) hosted on Hugging Face (which we refer to as the \"HF Swahili\" data set); and",
"(ii) the ALFFA speech dataset (Gelas et al., 2012).",
"For ALFFA data we process both the audio files (using Allosaurus) and the original \"gold\" text transcriptions (using Epitran).",
"For Kinyarwanda pre-training data, we use the Common Voice (CV) Kinyarwanda 6.1 subset (Ardila et al., 2019).",
"Again, we utilize both the audio files and transcriptions.",
"Due to the large size of the CV 6.1 Kinyarwanda subset, we processed only about 80% of the audio files.",
"For fine-tuning the downstream NER task, we use the MasakhaNER data set (Adelani et al., 2021).",
"As with other text-based data sets, we transform the NER sample with Epitran to map the samples into the phonetic representation.",
"For the downstream NER tasks we map or encode the NER annotations into the phonetic representation.",
"We thus edited the labels (PER, ORG, DATE, and LOC) to convert them from word-level labels to phone-level labels as shown in Fig. 3. Unlike (Kuru et al., 2016), we leave in the Band Iprefixes.",
"Our fork of the MasakhaNER data set, which implements our phonetic representations of the labels, is published on Github.",
"8 .",
"As mentioned already, we use Allosaurus for phone recognition with audio inputs.",
"In order to ensure consistency with Epitran, we took advantage of Allosaurus's inventory customization feature, giving it the phone inventories specified by the same language in Epitran.",
"The inventory used throughout this work (for swh ) is the swa-Latn inventory from Epitran.",
"9 When this inventory is supplied as input, Allosaurus will only output symbols from the inventory.",
"We followed similar practice when transliterating Kinyarwanda data.",
"We compare the output of Epitran and Allosaurus on the ALFFA dataset.",
"Following the practice of (Li et al., 2020), we used the editdistance 10 library to calculate the Phone Error Rate (PER).",
"Having no ground truth phone annotations, we instead take Epitran's outputs as \"ground truth\" for comparison.",
"The mean PER between the outputs is 23.7%.",
"This result is consistent with Siminyu et al. (2021), which finds PERs as high as 72.8% when testing on on the Bukusu (bxk), Saamia (lsm) and East Tusom languages (an endangered subdialect of the Tungkhulic language family).",
"However, by training the phone recognizer on even minimal amounts of data in these languages, PERs were improved significantly.",
"A spreadsheet with detailed results for 10k samples from ALFFA can be found online.",
"11 4.4 Model Architecture and Training All models use the SHIBA implementation of CANINE (Tanner and Hagiwara, 2021).",
"SHIBA was designed for use on the Japanese [ jpn ] language, which does not include spaces between its characters (similar to our phonetic representations without 9 https://bit.ly/30f8YCI 10 https://github.com/roy-ht/editdistance 11 https://bit.ly/3F0is3t word boundaries).",
"We used the default hyperpa-rameter settings for SHIBA pre-training and fine-tuning, because we are primarily concerned with the relative impact of various combinations of pretraining data on the downstream NER tasks.",
"We use the Hugging Face transformers library (Wolf et al., 2020) to train all models.",
"Because of the small size of the NER data set used during fine-tuning, we enabled Hugging Face's early stopping callback for all downstream training runs.",
"We stopped these runs if they did not improve training loss after 20 evaluations.",
"Nonetheless, we found after a number of trials that the models quickly overfit using this setting.",
"We also experimented with modifying this on several trials to stop based on the evaluation loss instead, but this change did not significantly influence the evaluation results.",
"Following the example of Adelani et al. (2021), we do not run downstream model trainings once, but multiple times.",
"We also pre-trained each phonetic language model multiple times with different random seeds.",
"We report averages of these multiple trials in the following.",
"Scripts and code for our experiments will be uploaded to Github.",
"12 5 Results and Discussion Table 1 presents the F1 scores for our training scenarios in the downstream NER1 and NER2 tasks.",
"The models that utilize pre-training on the kin audio and text data give the best results.",
"However, pre-training does not appear to dramatically influence the level.",
"F1 scores in the range of 74-85% suggests the minimum viability of these phonetic models for simple NLP tasks.",
"Table 2 presents the F1 scores for our various training scenarios in the downstream NER3 task, which should be the most challenging for our phonetic models.",
"The influence of pre-training is more noticeable for this task.",
"Further, the models pre-trained on the kin audio and text data have the best performance.",
"This is likely due to the fact that the kin data is both large and higher quality (in terms of sound quality) as compared to the ALFFA Swahili data.",
"This benefit of this data size and quality appears to outweigh any degradation due to the pre-training occurring in a different (although related) language.",
"The importance (or relative impact) of pretraining phonetic language models increases with the complexity of the NER task.",
"Fig. 4 shows the maximum percentage improvement due to pretraining for each of our NER tasks.",
"This suggests that simple NLP tasks with a small number of output classes are much easier to port to phonetic representations, even without pre-training, while more complicated NLP tasks may require a more significant amount of text and/or audio data for pretraining.",
"We expect this trend to carry through to tasks like sentiment analysis, which could be formulated as a simple classification task with NEG, NEU, and POS sentiment labels or a more complicated aspect based sentiment analysis task.",
"The proposed method for multi-modal training using phonetic representations of data has minimum viability for simple NER tasks.",
"For more complicated NER tasks, pre-training phonetic language models boosts downstream model performance by up to 6% in F1 scores.",
"This pre-training can be Figure 4: The max percentage improvement with fine-tuning for each kind of NER task that was explored.",
"performed in the target language or in a related language using text and/or audio data.",
"Thus, the method provides flexibility in the data needed to train language models, while also allowing for audio and/or text inputs to models trained on downstream NLP tasks.",
"We anticipate exploring various extensions to and validations of this method in the future.",
"Specifically, we would like to explore methods that might mitigate performance degradation due to a lack of word boundaries in our method.",
"Subword to-kenization techniques, such as Byte-Pair Encodings (BPE) (Sennrich et al., 2016; Gage, 1994), or character-based word segmentation techniques might help in detecting and exploiting repeating patterns within the phonetic representation.",
"Furthermore, the word embedding techniques used by (Chaudhary et al., 2018) or (Bharadwaj et al., 2016) have been shown to work well, and would be worth investigating how the removal of space-delimited word boundaries would affect this.",
"We would also like to validate our methods on a variety of other data sets and tasks.",
"We selected the MasakhaNER dataset for evaluation because we specifically wished to evaluate results on ac-5312 tual low-resource languages supported by both Allosaurus and Epitran.",
"While there are still, we argue, detectable improvements in downstream results with our method, further work would benefit from additional evaluations on other data sets or tasks.",
"In particular, the Swahili News Classification corpus (David, 2020) corpus may provide a useful evaluation.",
"We did not investigate going from audio to phones, then phones to words/characters, judging that information losses and errors would likely compound in multiple stages of processing.",
"Instead, we focused on what could be achieved with the Allosaurus \"universal phone transcriber\" without any language-specific finetuning.",
"A truly universal transcriber would increase flexibility when training for truly low-resource scenarios.",
"Nevertheless, it has been shown by Siminyu et al. (2021) that it is possible to improve phone recognition with even small amounts (approximately 100 sentences) of annotation.",
"It may be possible to improve phonetic language modeling results by performing this fine-tuning in the target language.",
"Experiments involving other languages with, e.g. languages that are not related would help to isolate the role of relatedness, lexical overlap, or related sound systems/phonology.",
"While we do not claim that conversion to phones provides better performance generally, we believe that our experiments show that the fundamental idea of converting either text or audio data to the common phone representation provides a viable path to more flexible approach to certain downstream NLP tasks, worthy of further development.",
"The authors wish to thank Dr. Vijayan Asari, Dr. Steven Rogers, Dr. Julia Kreutzer, Dr. Graham Neubig, Dr. David Mortenson, Andre Niyongabo Rubungo, and Joshua Turner for advice, helpful discussions, assistance in debugging, and time spent in proofreading.",
"In addition, David Adelani and the Masakhane community provided invaluable help, encouragement and assistance with the MasakhaNER dataset.",
"We used GNU Parallel for much of the dataset processing (Tange, 2011).",
"In combination with Lhoest et al. (2021) from Hugging Face, GNU Parallel significantly accelerated pre-processing and phone transcription.",
"tracking, model and dataset management, and (when needed) prompt and helpful technical support.",
"As our project involved the creation of over 20 distinct dataset variations and training many models on some of them, these management tools significantly eased the entire research process.",
"This research project uses open datasets and models, which are used in accordance with corresponding licenses to the best of our knowledge.",
"For the downstream task in question (NER), we used the MasakhaNER dataset, which is constructed from newspaper data.",
"Where this newspaper data includes mentions of individuals, the individuals are public figures.",
"The domain of this NER data is limited to the newspaper/news domain, which should be kept in mind while considering the applicability of the methods presented.",
"In terms of compute, the work presented here required approximately 200 pre-training or fine-tuning jobs tracked via ClearML.",
"Each run lasted no more than 1-2 hoursfor finetuning, but generally much longer for pretraining (on the order of a day), and only consumed one GPU resource at a time (either an A100 or P100).",
"This computation sums up to around 5-6 GPU-weeks on the A100, about one gpu-week on the Titan RTX, and several compute-days each for the other GPUs.",
"Additional exploratory work and debugging consumed another few GPU-days on Google Colab."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"result",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Personalized language models are designed and trained to capture language patterns specific to individual users.",
"This makes them more accurate at predicting what a user will write.",
"However, when a new user joins a platform and not enough text is available, it is harder to build effective personalized language models.",
"We propose a solution for this problem, using a model trained on users that are similar to a new user.",
"In this paper, we explore strategies for finding the similarity between new users and existing ones and methods for using the data from existing users who are a good match.",
"We further explore the trade-off between available data for new users and how well their language can be modeled.",
"Recent work has suggested that there are several benefits to personalized models in natural language processing (NLP) over one-size-fits-all solutions: they are more accurate for individual users; they help us understand communities better; and they focus the attention of our evaluations on the end-user (Flek, 2020).",
"Generation tasks in particular benefit from a personalized approach, for example, Dudy et al. (2021) argue that user intention is more often difficult to recover from the context alone.",
"We study personalization in language modeling, a core task in NLP.",
"Direct applications of language models (LM) include predictive text, authorship attribution, and dialog systems used to model the style of an individual or profession (e.g., therapist, counselor).",
"LMs are increasingly used as the backbone of models for a range of tasks in NLP, increasing the potential impact of personalization even further (Brown et al., 2020).",
"of data written by many people.",
"This approach does not take into account the differences between individuals and their language patterns.",
"Given the same context, different people may act or write differently, but these general models cannot produce that type of variation.",
"Approaches like fine-tuning can be used to tailor a pretrained model to an individual, but perform well only when enough data is available, which is often not the case.",
"Previous work on personalized and demographic word embeddings has seen successful application in downstream tasks.",
"Garimella et al. (2017) look at location and gender and how they affect associations with words like health and many other stimulus words like stack does it make you think of books or pancakes?",
"Welch et al. (2020) discuss other associations, for instance, embodying an idea may more often refer to a religious or economic concept depending on your beliefs.",
"Similarly, wicked may mean evil or may function as an intensifier depending on where you live (Bam-man et al., 2014).",
"These exemplify how personalized representations can help make distinctions in meaning, however, static representations have limitations.",
"For example, Hofmann et al. (2021) find that in some contexts testing refers to seeing if a device works and sanitation refers to a pest control issue, while in another context both refer to conditions of the COVID-19 pandemic.",
"Personalized LMs, or language models built to better predict what an individual will say, could better address these cases, as LMs learn dynamic encodings of words.",
"In this paper, we consider approaches to fine-tuning and interpolation that are novel in that they leverage data from similar users to boost personalized LM performance.",
"We consider the case of users with a small number of available tokens and propose ways to (1) find similar users in our corpus and (2) leverage data from similar users to build a personalized LM for a new user.",
"We explore the 1742 trade-offs between the amount of available data from existing users, the number of existing users and new users, and how our similarity metrics and methods scale.",
"We then show an analysis to explore what types of words our method predicts more accurately and are thus more important to consider in personalization methods.",
"Personalized Language Modeling.",
"King and Cook (2020) examined methods for creating personalized LMs and their work is most similar to ours.",
"They consider interpolating, fine-tuning, and priming LMs as methods of personalization, though they use these methods with a large generic model.",
"In contrast, our work shows that performance can be improved by leveraging data from similar users.",
"They also analyzed model adaptation for models trained on users with similar demographics, inspired by Lynn et al. (2017), who showed that these demographic factors could help model a variety of tasks, and found that personalized models perform better than those adapted from similar demographics.",
"Shao et al. (2020) have also explored models for personalization but focused on handling OOV tokens.",
"Wu et al. (2020) proposed a framework to learn user embeddings from Reddit posts.",
"Their user embeddings were built on the sentence embeddings generated by a BERT model.",
"By using the learned user embeddings to predict gender, detect depression and classify MBTI personality, they concluded that their embeddings incorporate intrinsic attributes of users.",
"In our work, user embeddings are learned in a different approach, and we focus on how to use similarity calculated from user embeddings to build better LMs.",
"Authorship Attribution.",
"One of the tasks we consider as a means of computing similarity is authorship attribution, i.e., identifying the author of a document.",
"Early work on this task used lexical features like word frequencies and word n-grams (Kop-pel et al., 2009; Stamatatos, 2009).",
"As in Ge et al. (2016), we employ neural networks to model similarity between users and predict authorship.",
"Learning from Limited Data.",
"Antonello et al. (2021) explored training a model to predict what data will be most informative for fine-tuning and select individual data points to improve language modeling.",
"The similarity metrics that we derive are used to select data for fine-tuning in one of our methods of leveraging similar user data, however we consider indivisible sets of data grouped by author.",
"The cold start problem is a well-known problem in recommendation systems.",
"A great amount of previous work addressed how to recommend items to new users, about whom the system has little or no history, often with a focus on matrix factorization methods (Zhou et al., 2011).",
"Work from Huang et al. (2016) approached language modeling as a cold-start problem, in that they had no writing from a user, though they had a social network, from which they interpolated LMs from users linked in their social graph.",
"Language Models.",
"We use a recently developed LM that has received widespread attention (Mer-ity et al., 2018b).",
"The LSTM-based model combines a number of regularization and optimization techniques explored in recent literature, including averaged SGD, embedding dropout, and recurrent dropout.",
"Subsequent work has developed variations of the model with improved perplexity, but these take at least twice as much time to train (Gong et al., 2018), making them less practical for the user-specific experiments we consider.",
"Another direction of research has shown impressive results using extremely large models (Radford et al., 2019; Devlin et al., 2019).",
"Using these as a basis for experiments could be an interesting direction, but fine-tuning models in low data settings is known to be difficult and highly variable (Dodge et al., 2020).",
"Similar transformer models have been used for controlled generation.",
"Zellers et al. (2019) developed a model for news generation that conditioned on meta-data including domain, date, authors, and headline.",
"No ablation is performed, and though it would be interesting to compare to a transformer method that conditions on authors alone, we opted for a model that is faster and cheaper to train (Grover-Mega from Zellers et al. (2019) was trained for two weeks and cost around 25k USD).",
"Additionally, when fine-tuning models for new users, little data is available.",
"Contextualized embedding models often require a large amount of data to train effectively, though this type of comparison would be an interesting future direction to explore.",
"Variations of the LSTM have consistently achieved state-of-the-art performance without massive compute resources and thus we chose this architecture for our experiments (Merity et al., 2018a; Melis et al., 2019; Merity, 2019; Li et al., 2020).",
"We examine a corpus of publicly available Reddit comments and select users active on Reddit between the years of 2007-2015 who have at least 60k tokens of text. 1 We refer to the existing users with at least 250k tokens of text as anchor users. These are users that are leveraged through interpolation or fine-tuning in order to improve performance on new users. Reddit posts are mostly in English.",
"We experiment with two settings: In the small anchor setting, there are 100 anchor users, with a 200k, 25k, 25k split for training, validation, and test, and 50 new users, with 2k tokens for training, and 25k for each of validation and test. In the large anchor setting, there are 10k anchor users and 100 new users, each having 2k tokens for training and validation and 20k for test.",
"Preprocessing Reddit data can be noisy, containing URLs, structured content (e.g., tables, lists), Subreddit-specific emoticons, generated, or deleted content. We first extract all posts for each user in our dataset. During this process we remove noisy posts, where a post is considered noisy if it matches one of ten rules.",
"These rules and examples of each are shown in Table 1.",
"After this filtering step, we remove markup for emojis and hyperlinks from the remaining posts (keeping the posts themselves).",
"We take these steps to ensure that we capture language used by the authors, rather 1 Posts are retrieved from https://www.reddit.",
"com/r/datasets/comments/3bxlg7/i_have_every_publicly_available_reddit_comment/ and we exclude known bots and do not include posts in the /r/counting subreddit in our dataset.",
"than reposts, collections of links, ASCII tables and art, equations, or code.",
"Tokens that occur fewer than 5 times are replaced with UNK , which results in a vocab size of 55k for the small anchor set and 167k for the larger one.",
"Our method for constructing personalized LMs consists of a similarity metric and a method for leveraging similar user data to train a personalized LM.",
"The similarity metric measures which anchor users are most similar to a new user.",
"That is, given a set of users ( anchors ), a new user ( n ), and a similarity function ( sim ), we compute z = sim ( n, anchors ); z anchors to get a set of similar users z .",
"We explore three similarity metrics and two methods of applying them to the construction of personalized models.",
"Figure 1 shows how user data is used for each step.",
"We explore three methods for measuring the similarity between users.",
"Two of them, authorship confusion and user embeddings, are derived from classifiers trained for other tasks, while the third, perplexity-based similarity, is obtained from the performance of LMs on the new user.",
"The user embedding method results in a vector space where we can use cosine similarity to measure the distance between individuals.",
"The perplexity directly gives a distance between each pair and the authorship confusion vectors can be treated as a vector of continuous values where each value represents the similarity to an anchor user.",
"Authorship Attribution Confusion (AA).",
"Similarity can be measured from the confusion matrix of an authorship attribution model.",
"This model takes a post as input and encodes it with an LSTM (Hochre-iter and Schmidhuber, 1997).",
"The final state is passed to a feed-forward layer and then a softmax to get a distribution over authors.",
"We denote this model A , and A ( U ) as the class distribution output by the model for a given utterance set.",
"For a new user, we take their set of utterances, U n and pass them to our model A ( U n ) which will give us a confusion vector of length K , one value for each author.",
"We train this model on the data from anchor users.",
"2 Embeddings are initialized with 200d GloVe vectors pretrained on 6 billion tokens from randomly sampled Reddit posts (Pennington et al., 2014).",
"For K = 100 anchors the test accuracy is 42.88% and K = 10 , 000 the test accuracy is 2.42%.",
"These accuracies are reasonably high given the difficulty of the task.",
"3 The classifier does not 2 See Appendix A for hyperparameters 3 Note that when K = 10 , 000 the majority class is 0.01%.",
"have to be high performing given our application to computing a user similarity metric.",
"We apply this model to each post in the training data from new users.",
"The scores produced by the model for each new post indicate which of the anchor users has the most similar writing.",
"The more frequently posts from a new user are predicted as coming from a specific anchor user, the more similar this anchor user is to the new user.",
"User Embeddings (UE).",
"We first train an LM with a user embedding layer on the data from anchor users.",
"The model is adapted from Merity et al. (2018b) with an added user embedding layer.",
"This token embedding layer is initialized with our pretrained GloVe vectors and frozen during training.",
"The output of the LSTM layer is concatenated to the user embedding at each time step based on the author of the token at that time step.",
"4 Note that this is then passed through another feed-forward layer before being used for prediction.",
"Our optimizer starts with SGD and will switch to ASGD if there 4 See Appendix B for hyperparameters 1745 is no improvement in validation loss in the past 5 epochs (Polyak and Juditsky, 1992).",
"We removed continuous cache pointers (Grave et al., 2016) to speed up training.",
"For K = 100 , the validation perplexity converges to 59.06 and test perplexity is 58.86.",
"When training with K = 10 , 000 the validation perplexity converges to 88.71 with test perplexity 88.54.",
"The embeddings of anchor users can be obtained from the user embedding layer in the trained model.",
"To learn the embeddings of new users, we freeze all parameters of the trained model except the user embedding layer.",
"We train the model on the data from each new user separately with the same training strategy.",
"It takes 2 minutes to learn the embedding of each new user.",
"The average test perplexity is 66.67 when K = 100 and 90.48 when K = 10 , 000 .",
"For each pair of new user and anchor user, we use the cosine similarity between two embeddings as the similarity.",
"Perplexity-Based (PPLB).",
"Given N trained LMs, one for each user, we can then use the perplexity of one LM on another user's data as a measure of distance.",
"We could compare the word-level distributions, though this would be very computationally expensive.",
"In our experiments, we use the probability of the correct words only, or the perplexity of each model on each new user's data.",
"We take the large LM trained on all anchor users, as described in the user embedding section and fine-tune it for each anchor user.",
"We then measure the perplexity of each model on the data of each new user.",
"For this matrix of new anchor perplexities, we turn each row, representing a new user, into a similarity vector by computing 1 c min ( row ) max ( row ) for each cell, c .",
"This step is expensive, taking close to 24 hours for K = 100 and intractable given our hardware constraints in the K = 10 , 000 setting.",
"Our three similarity methods provide a way to identify anchor users with the most relevant data for a new user.",
"In this section, we describe two methods to learn from that data to construct a personalized model.",
"Users who speak in a similar style or about similar content may be harder to distinguish from each other and should then be more similar.",
"For a given similarity metric, we compute similar users and use data from these users to fine-tune an LM before fine-tuning for the new user.",
"We compare to two baselines, (1) a model trained on all anchor users with no fine-tuning and (2) a model trained on all anchor users that is fine-tuned on the new user's data, as is done in standard fine-tuning.",
"Our method of weighted sample fine-tuning has two steps.",
"The first step is to fine-tune the model trained on all anchor users on a new set of similar users, as determined by our chosen similarity metric.",
"Then we fine-tune as in the standard case, by tuning on the new user's data.",
"Our interpolation model is built from individual LMs constructed for each anchor user.",
"It takes the predictions of each anchor user model and weights their predictions by that anchor's similarity to the new user.",
"No model updates are done in this step, which makes it immediately applicable, without requiring further training, even if the aggregation of output from all anchor models is more resource intensive.",
"We also want to incorporate the predictions of the model fine-tuned on the new user data with the predictions of models trained on similar anchor users.",
"We define a set of similar anchor users, , each of which has a similarity to the new user, n .",
"We vary s for each similarity function.",
"The weight to give the new user fine-tuned model is , and we interpolate as follows for a given resulting probability p r , of a word, w : p r ( w | ) = p n ( w | )+(1 ) (cid:88) i s ( i , n ) p i ( w | ) The similarities are adjusted to the range (0 , 1) and normalized to sum to one.",
"We divide our results into separate subsections for each of the anchor sets.",
"On the small anchor set we were able to perform more exploration of the weighted fine-tuning method, as it does not scale as well to the large anchor set.",
"We present results using standard perplexity measurements as a function of the probability of a correct prediction of a token.",
"We also present results with accuracy at N, where a prediction is counted as correct if the correct token occurs within the top N most probable words given by the model.",
"In this section, we compare our weighted sample fine-tuning and interpolation approaches to the more standard fine-tuning, where a large pretrained model is fine-tuned only on the new user's data.",
"With no fine-tuning our LM achieves a perplexity of 67.6 and when fine-tuning on the new user only, this perplexity drops to 64.3.",
"For weighted fine-tuning, we attempt to fine-tune the large pretrained model on 100 anchors using our two step method, first fine-tuning on a million tokens from most similar users, and then fine-tuning on new user data.",
"Through tuning the number of similar users, we found 5 worked best.",
"For the interpolation model, we found more similar users improved accuracy, though perplexity was slightly higher for ten similar users.",
"Our interpolation model combines predictions from similar anchor user LMs.",
"We have an LM fine-tuned to each of our anchor users and for a given new user we predict words by weighting the predictions of the models representing the most similar users.",
"for any of our three similarity metrics.",
"Perplexity and accuracy results are reported averaged over the test set users.",
"We also tried fine-tuning with random user's data and found that this performance was better than no fine-tuning but worse than fine-tuning on new user data only, showing that there is no added benefit from simply continuing to fine-tune on all data.",
"For the interpolation model, we tune (see Section 5.2) on a held-out set and use a value of 0.7.",
"The results show that the authorship attribution similarity performs best on both metrics.",
"We find that as the number of similar users increases it has little effect past around ten similar users, as the similarity weights decrease and have a smaller impact.",
"Retraining with Similar User Data: It appears that having similar user data does not help the weighted fine-tuning model.",
"To further investigate this we looked at settings where the amount of training data is fixed, but the source is either random, or a sample of similar user's data.",
"For each new user, we build six datasets: a random dataset and five datasets consisting of data from top-k similar anchor users for this new user where k is in {10, 20, 30, 40, 50}.",
"Each of these datasets has 2m tokens.",
"The random dataset is comprised of 20k tokens from each anchor user.",
"For the dataset built from the top-k similar users, we want the number of tokens selected from each anchor user to be proportional to the similarity between the new user and each anchor user.",
"To do this, we normalize the three similarities by subtracting the minimum and dividing by the maximum such that they are between zero and one.",
"For a given set of k users and similarity metric, we sort all anchor users in descending order by their similarity to the new user and choose the top k anchor users.",
"For the rank 1 anchor user a 1 , we choose the following number of tokens from the 1747 #Sim.",
"training data, where s ( , ) is the similarity between a pair of users: n a 1 = 2000 k s ( newuser, a 1 ) (cid:80) ki =1 s ( newuser, a i ) If n a 1 > 200 k , we choose n a 1 = 200 k .",
"For the rank x anchor user a x , we choose n a x = (2000 k x 1 (cid:88) j =1 n a j ) s ( newuser, a x ) (cid:80) ki = x s ( newuser, a i ) tokens from their training data.",
"If n a x > 200 k , we choose n a x = 200 k .",
"We repeat this procedure until the rank k anchor user.",
"The ratio of similarities in this equation enforces that the amount of data we select from each of the top-k similar users is proportional to their similarity.",
"We then train a separate model on each dataset.",
"The architecture of the model is the same as what is described in Section 4.1 except that it does not have a user embedding layer.",
"We then fine-tune the trained models on the training data of the new user.",
"For a chosen similarity metric and number k, we average the test perplexity of the fine-tuned models for all new users and subtract from it the average test perplexity of the fine-tuned models trained on random datasets, whose average perplexity is 111.0.",
"The results are shown in Figure 2 with shaded areas indicating standard deviation.",
"In the figure, the lower a point is, the better the datasets built using the corresponding similarity metric and number k is for training an LM for new users, which we infer is because the weighted sample datasets are closer to the data from new users.",
"is the worst.",
"As k increases, the performance first increases then decreases.",
"The best performance is achieved when using the similarities calculated with user embeddings and using top 20 or 30 similar anchor users.",
"After that, including more users has little effect, as their similarity weights continue to decrease.",
"The main takeaway from this experiment is that although similar user data helps more than random data, the benefit does not transfer to the larger fine-tuning scenario.",
"This area may be worth further exploring for fine-tuning strategies or for training data selection in applications where new models must be trained.",
"In a set of only one hundred anchor users, it may be the case that existing users are not similar enough to the new user to benefit from our approach.",
"To test this idea we ran experiments using the larger set of 10k anchor users and 100 new users.",
"Taking our most promising user embedding similarity metric from the weighted sample fine-tuning, we tested this method's performance varying the number of similar users.",
"Our results in Table 3 show a reduction in perplexity of 0.94 at 100 similar users and over one point at 200 users.",
"There is a logarithmic improvement with the number of similar users considered, as we would expect more dissimilar users to be less informative.",
"The results in this table suggest that the anchor set must be diverse enough to contain similar users to new users, in order to benefit from this method.",
"We also try the interpolation model with a larger set of anchor users.",
"Our base model is trained on 10k anchor users and 2k tokens from each anchor.",
"Note that we are controlling for the total points from anchor users, using 100 times fewer points per user and 100 times more users.",
"Scaling up these experiments to more points and users is computationally expensive but may be worth exploring in future work.",
"We fine-tune this model to each similar anchor user for weighting predictions.",
"On a held-out set we tune and find that in this setting performance starts to drop after around 10 similar users.",
"It is computationally expensive to run each of the 10k models on each new user.",
"The perplexity similarity metric requires that all of these are run in order to determine similarity and thus is not scalable to the large anchor user setting.",
"The user embedding metric scales better because similarity can be determined by tuning an existing LM on 1748 #Sim.",
"new user data.",
"For ten similar users we require 1,000 times fewer computations than we would to weight all 10k users.",
"We found that authorship attribution performed much worse in this setting, as the confusion matrix becomes very sparse.",
"The results for our best similarity metric, user embeddings, are shown in Table",
"4. On the left we see performance for our model on the larger set containing 2k tokens per anchor user.",
"For this analysis of our best, scalable model, we include accuracy @N, a metric denoting the percentage of times the correct word was in the top-N most probable choices.",
"This is comparable to Table 3, where we used the same amount of data for the weighted sample fine-tuning approach.",
"On the right we see performance when the amount of data per anchor user is tripled.",
"The baseline and fine-tuned models all benefit from this additional data, however we find that the difference in perplexity is much larger, as having additional data will allow the models to learn more accurate similarity metrics.",
"We also find that when tuning it tends toward 0.6 when there are 2k tokens per anchor user but 0.3 when there are 6k.",
"As the amount of data from the anchor users increases, the optimal interpolation weights shift to weight the anchor user models more heavily than the model fine-tuned on the new user.",
"How the tuning of could be done on a per-user basis, rather than globally, is an interesting open question.",
"We looked at the differences between our three similarity functions by computing the correlation coefficients for Spearman's and Pearson's r in Table",
"5. Interestingly, the perplexity and authorship attribution metrics correlate much more strongly with the user embedding metric than with each other.",
"It is possible that the user embedding metric performs best in our experiments because it contains more of the useful information from both of the other metrics.",
"Additional heat maps for each metric are in Figure 3.",
"In general, they show that the three metrics seem to capture different information about the relationships between users.",
"The user embedding metric leads to more evenly distributed similarities, while the other two metrics have outlier anchor users that show stronger correlation with a subset of the new users.",
"fuels, qaeda, zealand, inte, al., antonio, facto, neutrality, kong, differ, olds, custody, cruise, obligation, arts, beck, guise, scrolls, vegas, mph, dame, conclusions, laden, pedestal, throne, ck, charm, occasions, disorders, correctness, disposal, capita, hominem, floyd, thrones, sarcastic, ghz, explorer, comprehension, standpoint, ambulance, noting, diego, accusations, cares, forth, enforcement, amp, nukem, convicted",
"We take the highest performing model using user embedding similarity trained on our large anchor user set and compare it to our baseline model to look at which words are more accurately predicted.",
"By taking the number of times each word is correctly predicted by the best model when the baseline was wrong and dividing by the total number of occurrences of that word in our language modeling data, we can find words that have the highest normalized frequency of being improved by our model.",
"The top 50 words for which we see improvement are shown in Table 6.",
"We see the second word of many two-word proper nouns in this set.",
"Many names can start with San or Las and so we see vegas, diego, and antonio, in this list.",
"Similarly, new precedes zealand and other location names.",
"The top word is fuels, which occurs often in the data in conversation about fossil fuels, though there are also many others that mention other kinds of fuels, or use fuels as a verb, as in it fuels outrage.",
"We also see that units such as mph or ghz are more accurately predicted.",
"The units that one chooses may be more common depending on where one lives, or in the case of ghz it may depend more on the subject matter that a user is familiar with or tends to talk about.",
"Other proper nouns such as game of thrones, or hong kong vs. donkey kong, contain common words, which individually may be hard to predict, but with knowledge of an individual's preferences could be predicted more accurately.",
"Work on personalized LMs could be used for surveillance by detecting language from individuals or groups (Stamatatos, 2009).",
"We recommend against such applications, as they threaten intellectual freedom and risk discrimination (Richards, 2013).",
"There may be a risk in storing private data necessary to construct these models, as data may not be properly secured or used.",
"Furthermore, a personalized model could reinforce incorrect language usage, which may be an issue for individuals learning to speak a new language, making it more difficult to learn.",
"Learning personal language patterns in a given context and suggesting these patterns in other contexts may lead to potentially incorrect or offensive results and we recommend that if this type of personalization is deemed appropriate, users are made aware of how their data is being used and potential consequences.",
"In this paper, we addressed the issue of language modeling in a low data setting where a new user may not have enough data to train a personalized LM and presented a novel approach that leverages data from similar users.",
"We considered three similarity metrics and two methods of leveraging data from similar anchor users to improve the performance of language modeling over a standard fine-tuning baseline, and showed how our results vary with the amount of data available for anchor users and the number of available anchor users.",
"We found that the most easily scalable and highest performing method was to use user embedding similarity and to interpolate similar user fine-tuned models.",
"Additionally, we provided an analysis of the kind of words that our personalized models are able to more accurately predict and further discussed limitations of our methods.",
"This material is based in part on work supported by the NSF (grant #1815291) and the John Templeton Foundation (grant #61156).",
"Any opinions, findings, conclusions, or recommendations in this material are those of the authors and do not necessarily reflect the views of the NSF or the John Templeton Foundation.",
"Clover icons taken from Freepik at flaticon.com."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"result",
"result",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"method",
"other",
"other",
"other"
] |
[
"Thanks to the strong representation power of neural encoders, neural chart-based parsers have achieved highly competitive performance by using local features.",
"Recently, it has been shown that non-local features in CRF structures lead to improvements.",
"In this paper, we investigate injecting non-local features into the training process of a local span-based parser, by predicting constituent n -gram non-local patterns and ensuring consistency between non-local patterns and local constituents.",
"Results show that our simple method gives better results than the self-attentive parser on both PTB and CTB.",
"Besides, our method achieves state-of-the-art BERT-based performance on PTB (95.92 F1) and strong performance on CTB (92.31 F1).",
"Our parser also achieves better or competitive performance in multilingual and zero-shot cross-domain settings compared with the baseline.",
"Constituency parsing is a fundamental task in natural language processing, which provides useful information for downstream tasks such as machine translation (Wang et al., 2018), natural language inference (Chen et al., 2017), text summarization (Xu and Durrett, 2019).",
"Over the recent years, with advance in deep learning and pre-training, neural chart-based constituency parsers (Stern et al., 2017a; Kitaev and Klein, 2018) have achieved highly competitive results on benchmarks like Penn Treebank (PTB) and Penn Chinese Treebank (CTB) by solely using local span prediction.",
"The above methods take the contextualized representation (e.g., BERT) of a text span as input, and use a local classifier network to calculate the scores of the span being a syntactic constituent, together with its constituent label.",
"For testing, the output layer uses a non-parametric dynamic programming The first two authors contributed equally to this work.",
"algorithm (e.g., CKY) to find the highest-scoring tree.",
"Without explicitly modeling structured dependencies between different constituents, the methods give competitive results compared to non-local discrete parsers (Stern et al., 2017a; Kitaev and Klein, 2018).",
"One possible explanation for their strong performance is that the powerful neural encoders are capable of capturing implicit output correlation of the tree structure (Stern et al., 2017a; Gaddy et al., 2018; Teng and Zhang, 2018).",
"Recent work has shown that modeling non-local output dependencies can benefit neural structured prediction tasks, such as NER (Ma and Hovy, 2016), CCG supertagging (Cui and Zhang, 2019) and dependency parsing (Zhang et al., 2020a).",
"Thus, an interesting research question is whether injecting non-local tree structure features is also beneficial to neural chart-based constituency parsing.",
"To this end, we introduce two auxiliary training objectives.",
"The first is Pattern Prediction .",
"As shown in Figure 1, we define pattern as the n -gram constituents sharing the same parent.",
"1 We ask the model to predict the pattern based on its span representation, which directly injects the non-local 1 Patterns are mainly composed of n -gram constituents but also include part-of-speech tags as auxiliary.",
"constituent tree structure to the encoder.",
"To allow stronger interaction between non-local patterns and local constituents, we further propose a Consistency loss, which regularizes the co-occurrence between constituents and patterns by collecting corpus-level statistics.",
"In particular, we count whether the constituents can be a sub-tree of the pattern based on the training set.",
"For instance, both NNS and NP are legal to occur as sub-trees of the 3-gram pattern { VBD NP PP } in Figure 1, while S or ADJP cannot be contained within this pattern based on grammar rules.",
"Similarly, for the 2-gram pattern { NP PP } highlighted in Figure 1, both IN and NP are consistent constituents, but JJ is not.",
"The Consistency loss can be considered as injecting prior linguistic knowledge to our model, which forces the encoder to understand the grammar rules.",
"Non-local dependencies among the constituents that share the same pattern are thus explicitly modeled.",
"We denote our model as Injecting N on-local F eatures for neural C hart-based parsers (NFC).",
"We conduct experiments on both PTB and CTB.",
"Equipped with BERT, NFC achieves 95.92 F1 on PTB test set, which is the best reported performance for BERT-based single-model parsers.",
"For Chinese constituency parsing, NFC achieves highly competitive results (92.31 F1) on CTB, outperforming the baseline self-attentive parser (91.98 F1) and a 0-th order neural CRF parser (92.27 F1) (Zhang et al., 2020b).",
"To further test the generalization ability, we annotate a multi-domain test set in English, including dialogue, forum, law, literature and review domains.",
"Experiments demonstrate that NFC is robust in zero-shot cross-domain settings.",
"Finally, NFC also performs competitively with other languages using the SPMRL 2013/2014 shared tasks, establishing the best reported results on three rich resource languages.",
"We release our code and models at https://github.com/ RingoS/nfc-parser .",
"Constituency Parsing.",
"There are mainly two lines of approaches for constituency parsing.",
"Transition-based methods process the input words sequentially and construct the output constituency tree incrementally by predicting a series of local transition actions (Zhang and Clark, 2009; Cross and Huang, 2016; Liu and Zhang, 2017).",
"For these methods, the sequence of transition actions make traversal over a constituent tree.",
"Although transition-based methods directly model partial tree structures, their local decision nature may lead to error propagation (Goldberg and Nivre, 2013) and worse performance compared with methods that model long-term dependencies (McDonald and Nivre, 2011; Zhang and Nivre, 2012).",
"Similar to transition-based methods, NFC also directly models partial tree structures.",
"The difference is that we inject tree structure information using two additional loss functions.",
"Thus, our integration of nonlocal constituent features is implicit in the encoder, rather than explicit in the decoding process.",
"While the relative effectiveness is empirical, it could potentially alleviate error propagation.",
"Chart-based methods score each span independently and perform global search over all possible trees to find the highest-score tree given a sentence.",
"Durrett and Klein (2015) represented nonlinear features to a traditional CRF parser computed with a feed-forward neural network.",
"Stern et al. (2017b) first used LSTM to represent span features.",
"Kitaev and Klein (2018) adopted a self-attentive encoder instead of the LSTM encoder to boost parser performance.",
"Mrini et al. (2020) proposed label attention layers to replace self-attention layers.",
"Zhou and Zhao (2019) integrated constituency and dependency structures into head-driven phrase structure grammar.",
"Tian et al. (2020) used span attention to produce span representation to replace the subtraction of the hidden states at the span boundaries.",
"Despite their success, above work mainly focuses on how to better encode features over the input sentence.",
"In contrast, we take the encoder of Kitaev and Klein (2018) intact, being the first to explore new ways to introduce non-local training signal into the local neural chart-based parsers.",
"Modeling Label Dependency.",
"There is a line of work focusing on modeling non-local output dependencies.",
"Zhang and Zhang (2010) used a Bayesian network to encode the label dependency in multi-label learning.",
"For neural sequence labeling, Zhou and Xu (2015) and Ma and Hovy (2016) built a CRF layer on top of neural encoders to capture label transition patterns.",
"Pislar and Rei (2020) introduced a sentence-level constraint to encourage the model to generate coherent NER predictions.",
"Cui and Zhang (2019) investigated label attention network to model the label dependency by producing label distribution in sequence labeling tasks.",
"Gui et al. (2020) proposed a two-stage label decoding framework based on Bayesian network to 2066 model long-term label dependencies.",
"For syntactic parsing, Zhang et al. (2020b) demonstrated that structured Tree CRF can boost parsing performance over graph-based dependency parser.",
"Our work is in line with these in the sense that we consider non-local structure information for neural structure prediction.",
"To our knowledge, we are the first to inject sub-tree structure into neural chart-based encoders for constituency parsing.",
"Our baseline is adopted from the parsing model of Kitaev and Klein (2018) and Kitaev et al. (2019).",
"Given a sentence X = { x 1 , ..., x n } , its corresponding constituency parse tree T is composed by a set of labeled spans T = { ( i t , j t , l c t ) }| | T | t =1 (1) where i t and j t represent the t -th constituent span's fencepost positions and l c t represents the constituent label.",
"The model assigns a score s ( T ) to tree T , which can be decomposed as s ( T ) = (cid:88) ( i,j,l ) T s ( i, j, l c ) (2) Following Kitaev et al. (2019), we use BERT with a self-attentive encoder as the scoring function s ( i, j, ) , and a chart decoder to perform a global-optimal search over all possible trees to find the highest-scoring tree given the sentence.",
"In particular, given an input sentence X = { x 1 , ..., x n } , a list of hidden representations H n 1 = { h 1 , h 2 , . . . , h n } is produced by the encoder, where h i is a hidden representation of the input token x i .",
"Following previous work, the representation of a span ( i, j ) is constructed by: v i,j = h j h i (3) Finally, v i,j is fed into an MLP to produce real-valued scores s ( i, j, ) for all constituency labels: s ( i, j, ) = W c2 RELU ( W c1 v i,j + b c1 ) + b c2 (4) where W c1 , W c2 , b c1 and b c2 are trainable parameters, W c2 R | H || L c | can be considered as the constituency label embedding matrix (Cui and Zhang, 2019), where each column in W c 2 corresponds to the embedding of a particular constituent label.",
"| H | represents the hidden dimension and | L c | is the size of the constituency label set.",
"Training.",
"The model is trained to satisfy the margin-based constraints s ( T ) s ( T ) + ( T, T ) (5) where T denotes the gold parse tree, and is Hamming loss.",
"The hinge loss can be written as L cons = max (cid:0) 0 , max T (cid:54) = T [ s ( T ) + ( T, T )] s ( T ) (cid:1) (6) During inference time, the most-optimal tree T = argmax T s ( T ) (7) is obtained using a CKY-like algorithm.",
"We propose two auxiliary training objectives to inject non-local features into the encoder, which rely only on the annotations in the constituency treebank, but not external resources.",
"We define n -gram constituents, which shares the same parent node, as a pattern.",
"We use a triplet ( i p , j p , l p ) to denote a pattern span beginning from the i p -th word and ending at j p -th word.",
"l p is the corresponding pattern label.",
"Given a constituency parse tree in Figure 1, (3 , 11 , { VBD NP PP } ) is a 3 -gram pattern.",
"Similar to Eq 4, an MLP is used for transforming span representations to pattern prediction probabilities: p i,j = Softmax (cid:0) W p2 RELU ( W p1 v i,j + b p1 ) + b p2 (cid:1) (8) where W p1 , W p2 , b p1 and b p2 are trainable parameters, W p2 R | H || L p | can be considered as the 2067 pattern label embedding matrix, where each column in W p 2 corresponds to the embedding of a particular pattern label.",
"| L p | represents the size of the pattern label set.",
"For each instance, the cross-entropy loss between the predicted patterns and the gold patterns are calculated as L pat = n (cid:88) i =1 n (cid:88) j =1 p i,j log p i,j (9) We use the span-level cross-entropy loss for patterns (Eq 9) instead of the margin loss in Eq 6, because our pattern-prediction objective aims to augment span representations via greedily classifying each pattern span, rather than to reconstruct the constituency parse tree through dynamic programming.",
"Constituency scores and pattern probabilities are produced based on a shared span representation; however, the two are subsequently separately predicted.",
"Therefore, although the span representations contain both constituent and pattern information, the dependencies between constituent and pattern predictions are not explicitly modeled.",
"Intuitively, constituents are distributed non-uniformly in patterns, and such correlation can be obtained in the corpus-level statistic.",
"We propose a consistency loss, which explicitly models the non-local dependencies among constituents that belong to the same pattern.",
"As mentioned in the introduction, we regard all constituent spans within a pattern span as being consistent with the pattern span.",
"Take 2-gram patterns for example, which represents two neighboring subtrees covering a text span.",
"The constituents that belong to the two subtrees, including the top constituent and internal sub constituents, are considered as being consistent.",
"We consider only the constituent labels but not their corresponding span locations for this task.",
"This loss can be understood first at the instance level.",
"In particular, if a constituent span ( i t , j t , l c t ) is a subtree of a pattern span ( i t (cid:48) , j t (cid:48) , l p t (cid:48) ) , i.e. i t > = i t (cid:48) and j t < = j t (cid:48) , where l c t = L c [ a ] (the a -th constituent label in L c ) and l p t (cid:48) = L p [ b ] (the b -th pattern label in L p ), we define L c [ a ] and L p [ b ] to be consistent (denoted as y a,b = 1 ).",
"Otherwise we consider it to be non-consistent (denoted as y a,b = 0 ).",
"This yields a consistency matrix Y R | L c || L p | for each instance.",
"The gold consistency matrix Y provides information regarding non-local dependencies among constituents and patterns.",
"An intuitive method to predict the consistency matrix Y is to make use of the constituency label embedding matrix W p2 (see Eq 4 for definition), the pattern label embedding matrix W c2 (see Eq 8 for definition) and the span representations V (see Eq 3 for definition): Y = Sigmoid (cid:0) ( W c2 TU 1 V )( VTU 2 W p2 ) (cid:1) (10) where U 1 , U 2 R | H || H | are trainable parameters.",
"Intuitively, the left term, W c2 TU 1 V , integrates the representations of the pattern span and all possible constituent label embeddings.",
"The second term, VTU 2 W p2 , integrates features of the span and all pattern embeddings.",
"Each binary element in the resulting Y R | L c || L p | denotes whether a particular constituent label is consistent with a particular pattern in the given span context.",
"Eq 10 can be predicted on the instance-level for ensuring consistency between patterns and constituent.",
"However, this naive method is difficult for training, and computationally infeasible, because the span representation matrix V R | H | n 2 is composed of n 2 span representations v i,j R | H | and the asymptotic complexity is: O (cid:16) ( | L p | + | L c | )( | H | 2 + n 2 | H | ) + | L p || L c | n 2 (cid:17) (11) for a single training instance.",
"We instead use a corpus-level constraint on the non-local dependencies among constituents and patterns.",
"In this way, Eq 10 is reduced to be inde-pendent of individual span representations: Y = Sigmoid (cid:0) W c2 UW p2 T (cid:1) (12) where U R | H || H | is trainable.",
"This trick decreases the asymptotic complexity to O ( | L c || H | 2 + | L p || L c || H | ) .",
"The cross-entropy loss between the predicted consistency matrix and gold consistency labels is used to optimize the model: L reg = | L c | (cid:88) a =1 | L p | (cid:88) b =1 y a,b log y a,b (13) The corpus-level constraint can be considered as a prior linguistic knowledge statistic from the treebank, which forces the encoder to understand the grammar rules.",
"Given a constituency tree, we minimize the sum the three objectives to optimize the parser:",
"The number of training parameters increased by NFC is W p1 R | H || H | , W p2 R | H || L p | , b p1 R | H | and b p2 R | L p | in Eq 8 and U R | H || H | in Eq 12.",
"Taking training model on PTB as an example, NFC adds less than 0.7M parameters to 342M parameters baseline model (Kitaev and Klein, 2018) based on BERT-large-uncased during training.",
"NFC is identical to our baseline self-attentive parser (Kitaev and Klein, 2018) during inference.",
"We empirically compare NFC with the baseline parser in different settings, including in-domain, cross-domain and multilingual benchmarks.",
"In-domain.",
"We conduct experiments on both English and Chinese, using the Penn Treebank (Mar-cus et al., 1993) as our English dataset, with standard splits of section 02-21 for training, section 22 for development and section 23 for testing.",
"For Chinese, we split the Penn Chinese Treebank (CTB) 5.1 (Xue et al., 2005), taking articles 001-270 and 440-1151 as training set, articles 301-325 as development set and articles 271-300 as test set.",
"Cross-domain.",
"To test the robustness of our methods across difference domains, we further annotate five test set in dialogue, forum, law, literature and review domains.",
"For the dialogue domain, we randomly sample dialogue utterances from Wizard of Wikipedia (Dinan et al., 2019), which is a chit-chat dialogue benchmark produced by humans.",
"For the forum domain, we use users' communication records from Reddit, crawled and released by Vlske et al. (2017).",
"For the law domain, we sample text from European Court of Human Rights Database (Stiansen and Voeten, 2019), which includes detailing judicial decision patterns.",
"For the literature domain, we download literary fictions from Project Gutenberg 2 .",
"For the review domain, we use plain text across a variety of product genres, released by SNAP Amazon Review Dataset (He and McAuley, 2016).",
"After obtaining the plain text, we ask annotators whose majors are linguistics to annotate constituency parse tree by following the PTB guideline.",
"We name our dataset as M ulti-domain C onstituency T ree b ank (MCTB).",
"More details of the dataset are documented in Yang et al. (2022).",
"Multi-lingual.",
"For the multilingual testing, we select three rich resource language from the SPMRL 2013-2014 shared task (Seddah et al., 2013): French, German and Korean, which include at least 10,000 training instances, and three low-resource language: Hungarian, Basque and Polish.",
"Our code is based on the open-sourced code of Kitaev and Klein (2018) 3 .",
"The training process gets terminated if no improvement on development F1 is obtained in the last 60 epochs.",
"We evaluate the models which have the best F1 on the development set.",
"For fair comparison, all reported results and baselines are augmented with BERT.",
"We adopt BERT-large-uncased for English, BERT-base for Chinese and BERT-multi-lingual-uncased for other languages.",
"Most of our hyper-parameters are adopted from Kitaev and Klein (2018) and Fried et al. (2019).",
"For scales of the two additional losses, we set the scale of pattern loss to 1.0 and the scale of consistency loss to 5.0 for all experiments.",
"local pattern features that appear less than 5 times in the PTB training set and those that account for less than 0.5% of all pattern occurrences in the CTB training set.",
"The out-of-vocabulary patterns are set as < UNK > .",
"This results in moderate pattern vocabulary sizes of 841 for PTB and 514 for CTB.",
"For evaluation on PTB, CTB and cross-domain dataset, we use the EVALB script for evaluation.",
"For the SPMRL datasets, we follow the same setup in EVALB as Kitaev and Klein (2018).",
"We report the performance of our method on the test sets of PTB and CTB in Table 2 and 3, respectively.",
"Compared with the baseline parser (Kitaev and Klein, 2018), our method obtains an absolute improvement of 0.20% F1 on PTB ( p <0.01) and 0.33% F1 on CTB ( p <0.01), which verifies the effectiveness of injecting non-local features into neural local span-based constituency parsers.",
"Note that the proposed method adds less than 0.7M parameters to the 342M parameter baseline model using BERT-large.",
"The parser trained with both the pattern loss (Section 4.1) and consistency loss (Section 4.2) outperforms the one trained only with pattern loss by 0.14% F1 ( p <0.01).",
"This suggests that the constraints between constituents and non-local pattern features are crucial for injecting non-local features into local span-based parsers.",
"One possible explanation for the improvement is that the constraints may bridge the gap between local and non-local supervision signals, since these two are originally separately predicted while merely sharing the same encoder in the training phase.",
"We further compare our method with the recent state-of-the-art parsers on PTB and CTB.",
"Liu and Zhang (2017) propose an in-order transition-based constituency parser.",
"Kitaev and Klein (2018) use self-attentive layers instead of LSTM layers to boost performance.",
"Zhou and Zhao (2019) jointly optimize constituency parsing and dependency parsing objectives using head-driven phrase structure grammar.",
"Mrini et al. (2020) extend Zhou and Zhao (2019) by introducing label attention layers.",
"Zhang et al. (2020b) integrate a CRF layer to a chart-based parser for structural training (with-out non-local features).",
"Tian et al. (2020) use span attention for better span representation.",
"Compared with these methods, the proposed method achieves an F1 of 95.92%, which exceeds previous best numbers for BERT-based single-model parsers on the PTB test set.",
"We further compare experiments for five runs, and find that NFC significantly outperforms Kitaev and Klein (2018) ( p <0.01).",
"The test score of 92.31% F1 on CTB significantly outperforms the result (91.98% F1) of the baseline ( p <0.01).",
"Compared with the CRF parser of Zhang et al. (2020b), our method gives better scores without global normalization in training.",
"This shows the effectiveness of integrating non-local information during training using our simple regularization.",
"The result is highly competitive with the current best result (Mrini et al., 2020), which is obtained by using external dependency parsing data.",
"We compare the generalization of our methods with baselines in Table 4.",
"In particular, all the parsers are trained on PTB training and validated on PTB development, and are tested on cross-domain test in the zero-shot setting.",
"As shown in the table, our model achieves 5 best-reported results among 6 cross-domain test sets with an averaged F1 score of 89.85%, outperforming our baseline parser by 2.97% points.",
"This shows that structure information is useful for improving cross-domain performance, which is consistent with findings from previous work (Fried et al., 2019).",
"To better understand the benefit of pattern features, we calculate Pearson correlation of n -gram pattern distributions between the PTB training set and various test sets in Figure 3.",
"First, we find that the correlation between the PTB training set and the PTB test set is close to 1.0, which verifies the effectiveness of the corpus-level pattern knowledge during inference.",
"Second, the 3 -gram pattern correlation of all domains exceeds 0.75, demonstrating that n -gram pattern knowledge is robust across domains, which supports the strong performance of NFC in the zero-shot cross-domain setting.",
"Third, pattern correlation decreases significantly as n increases, which suggests that transferable non-local information is limited to a certain window size of n -gram constituents.",
"We compare NFC with Kitaev and Klein (2018) and Nguyen et al. (2020) on SPMRL.",
"The results are shown in Table 5.",
"Nguyen et al. (2020) use pointer network to predict a sequence of pointing decisions for constituency parsing.",
"As can be seen, Figure 3: Pearson correlation of n -gram pattern distribution between PTB training set and different test set.",
"Nguyen et al. (2020) do not show obvious advantages over Kitaev and Klein (2018).",
"NFC outperforms these two methods on three rich resource languages.",
"For example, NFC achieves 89.07% F1 on Korean, outperforming Kitaev and Klein (2018) by 0.27% F1, suggesting that NFC is generally effective across languages.",
"However, NFC does not give better results compared with Kitaev and Klein (2018) on low-resource languages.",
"One possible explanation is that it is difficult to obtain prior linguistic knowledge from corpus-level statistics by using a relatively small number of instances.",
"Figure 4 shows the pattern-level F1 before and after introducing the two auxiliary training objectives.",
"In particular, we calculate the pattern-level F1 by calculating the F1 score for patterns based 2071",
"on the constituency trees predicted by CKY decoding.",
"Although our baseline parser with BERT achieves 95.76% F1 scores on PTB, the pattern-level F1 is 80.28% measured by 3-gram.",
"When testing on the dialogue domain, the result is reduced to only 57.47% F1, which indicates that even a strong neural encoder still has difficulties capturing constituent dependency from the input sequence alone.",
"After introducing the pattern and consistency losses, NFC significantly outperforms the baseline parser measured by 3-gram pattern F1.",
"Though there is no direct supervision signal for 2-gram pattern, NFC also gives better results on pattern F1 of 2-gram, which are subsumed by 3-gram patterns.",
"This suggests that NFC can effectively represent sub-tree structures.",
"We compare the performance of the baseline and our method on constituent spans with different word lengths.",
"Figure 5 shows the trends of F1 scores on the PTB test set as the minimum constituent span length increases.",
"Our method shows a minor improvement at the beginning, but the gap becomes more evident when the minimum span length increases, demonstrating its advantage in capturing more sophisticated constituency label dependency.",
"Exact match score represents the percentage of sentences whose predicted trees are entirely the same as the golden trees.",
"Producing exactly matched trees could improve user experiences in practical scenarios and benefit downstream applications on other tasks (Petrov and Klein, 2007; Kummerfeld et al., 2012).",
"We compare exact match scores of 2072 NFC with that of the baseline parser.",
"As shown in Figure 6, NFC achieves large improvements in exact match score for all domains.",
"For instance, NFC gets 33.40% exact match score in the review domain, outperforming the baseline by 10.2% points.",
"We assume that this results from the fact that NFC successfully ensures the output tree structure by modeling non-local correlation.",
"As mentioned in Section 4.4, NFC only introduces a few training parameters to the baseline model (Ki-taev and Klein, 2018).",
"For PTB, NFC takes about 19 hours to train with a single RTX 2080Ti, while the baseline takes about 13 hours.",
"For CTB, the approximate training time is 12 hours for NFC and 7 hours for the baseline.",
"Our inference time is the same as that of the baseline parser, since no further computational operations are added to the inference phase.",
"Both take around 11 seconds to parse the PTB section 23 (2416 sentences, an average of 23.5 tokens per sentence).",
"We investigated graph-based constituency parsing with non-local features both in the sense that features are not restricted to one constituent, and in the sense that they are not restricted to each training instance.",
"Experimental results verify the effectiveness of injecting non-local features to neural chart-based constituency parsing.",
"Equipped with pre-trained BERT, our method achieves 95.92% F1 on PTB and 92.31% F1 on CTB.",
"We further demonstrated that the proposed method gives better or competitive results in multilingual and zero-shot cross-domain settings.",
"We appreciate the insightful comments from the anonymous reviewers.",
"We thank Zhiyang Teng for the insightful discussions.",
"We gratefully acknowledge funding from the National Natural Science Foundation of China (NSFC No.61976180).",
"As mentioned in Section 5.1, we collected the raw data from free and publicly available sources that have no copyright or privacy issues.",
"We recruited our annotators from the linguistics departments of local universities through public advertisement with a specified pay rate.",
"All of our annotators are senior undergraduate students or graduate students in linguistic majors who took this annotation as a part-time job.",
"We manually shuffled the data so that all batches of to-be-annotated data have similar lengths on average.",
"An annotator could annotate around 25 instances per hour.",
"We pay them 50 CNY an hour.",
"The local minimum salary in the year 2021 is 22 CNY per hour for part-time jobs.",
"Our annotated data only involves factual information (i.e., syntactic annotation), but not opinions, attitudes or beliefs.",
"Therefore, the annotation job does not belong to human subject research; and IRB approval is not required."
] | [
"abstain",
"abstain",
"objective",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"method",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"result",
"objective",
"other",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Although deep neural networks have achieved prominent performance on many NLP tasks, they are vulnerable to adversarial examples.",
"We propose Dirichlet Neighborhood Ensemble (DNE), a randomized method for training a robust model to defense synonym substitution-based attacks.",
"During training, DNE forms virtual sentences by sampling embedding vectors for each word in an input sentence from a convex hull spanned by the word and its synonyms, and it augments them with the training data.",
"In such a way, the model is robust to adversarial attacks while maintaining the performance on the original clean data.",
"DNE is agnostic to the network architectures and scales to large models (e.g., BERT) for NLP applications.",
"Through extensive experimentation, we demonstrate that our method consistently outperforms recently proposed defense methods by a significant margin across different network architectures and multiple data sets.",
"Deep neural networks are powerful but vulnerable to adversarial examples that are intentionally crafted to fool the networks.",
"Recent studies have shown the vulnerability of deep neural networks in many NLP tasks, including reading comprehension (Jia and Liang, 2017), text classification (Samanta and Mehta, 2017; Wong, 2017; Liang et al., 2018; Alzantot et al., 2018), machine translation (Zhao et al., 2018; Ebrahimi et al., 2018; Cheng et al., 2018), dialogue systems (Cheng et al., 2019), and dependency parsing (Zheng et al., 2020).",
"These methods attack an NLP model by replacing, scrambling, and erasing characters or words under certain semantic and syntactic constraints.",
"In particular, most of them craft adversarial examples by substituting words with their synonyms in an input text to maximally increase the prediction error while maintaining the adversarial examples' fluency and naturalness.",
"In this paper, we focus on these word substitution-based threat models and discuss the strategy to defend against such attacks.",
"The goal of adversarial defenses is to learn a model capable of achieving high test accuracy on both clean and adversarial examples.",
"Adversarial training is one of the most successful defense methods for NLP models (Miyato et al., 2017; Sato et al., 2019; Zhu et al., 2019).",
"During the training time, they replace a word with one of its synonyms that maximizes the prediction loss.",
"By augmenting these adversarial examples with the original training data, the model is robust to such perturbations.",
"However, it is infeasible to explore all possible combinations where each word in a sentence can be replaced with any of its synonyms.",
"Also, when updating word embeddings during training, the distance between a word and its synonyms in the embedding space change dynamically.",
"Therefore, the point-wise guarantee becomes insufficient, and the resulting models have shown to be vulnerable to strong attacks (Alzantot et al., 2018).",
"On the other hand, several certified defense methods have recently been proposed to ensure that the model predictions are unchanged when input word embeddings are perturbed within the convex hull formed by the embeddings of a word and its synonyms (Jia et al., 2019; Huang et al., 2019).",
"However, due to the difficulty of propagating convex hull through deep neural networks, they compute a loose outer bound using Interval Bound Propagation (IBP).",
"As a result, the convex hull may contain irrelevant words and lead to a significant performance drop on the clean data.",
"In this paper, we propose Dirichlet Neighborhood Ensemble (DNE) to create virtual sentences by mixing the embedding of the original word in the input sentence with its synonyms.",
"By training on these virtual sentences, the model can enhance the robustness against word substitution-based perturbations.",
"Specifically, our method samples an embedding vector in the convex hull formed by a word and its synonyms to ensure the robustness within such a region.",
"In contrast to IBP, our approach better represents the synonyms' subspace by creating virtual sentences.",
"To deal with complex error surface (e.g., surfaces containing multiple hills and valleys), a gradient-guided optimizer is applied to search for the most vulnerable points within the convex hull.",
"By minimizing the error with these vulnerable points, we can guarantee with high probability that the resulting model is robust at any point within the convex hull (i.e., a set of synonyms).",
"The framework can be extended to higher-order neighbors (synonyms) to boost the robustness further.",
"In the inference time, the same Dirichlet sampling technique is used, and the prediction scores on the virtual sentences are ensembled to get a robust output.",
"Through extensive experiments with various model architectures on multiple data sets, we show that DNE consistently achieves better performance on clean and adversarial samples than existing defense methods.",
"By conducting a detailed analysis, we found that DNE enables the embeddings of a set of similar words to be updated together in a coordinated way.",
"In contrast, prior approaches either fix the word vectors during training (e.g., in the certified defenses) or update individual word vectors independently (e.g., in the adversarial training).",
"We believe it is the crucial property why DNE leads to a more robust NLP model.",
"Furthermore, unlike most certified defenses, the proposed method is easy to implement and can be integrated into any existing neural network including those with large architecture such as BERT (Devlin et al., 2019).",
"In the text domain, adversarial training is one of the most successful defenses (Miyato et al., 2017; Sato et al., 2019; Zhu et al., 2019).",
"A family of fast-gradient sign methods (FGSM) was introduced by Goodfellow et al. (2015) to generate adversarial examples in the image domain.",
"They showed that the robustness and generalization of machine learning models could be improved by including high-quality adversarial examples in the training data.",
"Miyato et al. (2017) proposed an FGSM-like adversarial training method to the text domain by applying perturbations to the word embeddings rather than to the original input itself.",
"Sato et al. (2019) extended the work of Miyato et al. (2017) to improve the interpretability by constraining the directions of perturbations toward the existing words in the word embedding space.",
"Zhang and Yang (2018) applied several types of noises to perturb the input word embeddings, such as Gaussian, Bernoulli, and adversarial noises, to mitigate the overfitting problem of NLP models.",
"Zhu et al. (2019) proposed a novel adversarial training algorithm, called FreeLB (Free Large-Batch), which adds adversarial perturbations to word embeddings and minimizes the resultant adversarial loss inside different regions around input samples.",
"They add norm-bounded adversarial perturbations to the input sentences' embeddings using a gradient-based method and enlarge the batch size with diversified adversarial samples under such norm constraints.",
"However, they focus on the effects on generalization rather than the robustness against adversarial attacks.",
"Recently a set of certified defenses has been introduced, which guarantees robustness to some spe-cific types of attacks.",
"For example, Jia et al. (2019) and Huang et al. (2019) use a bounding technique, interval bound propagation (IBP) to formally verify a model's robustness against word substitution-based perturbations.",
"Shi et al. (2020) and Xu et al. (2020) proposed the robustness verification and training method for transformers based on linear relaxation-based perturbation analysis.",
"However, these defenses often lead to loose upper bounds for arbitrary networks and result in a higher cost of clean accuracy.",
"Furthermore, due to the difficulty of verification, certified defense methods are usually not scalable and remain hard to scale to complex prediction pipelines.",
"To achieve certified robustness on large architectures, Ye et al. (2020) proposed a certified robust method called SAFER which is structure-free.",
"However, the base classifier of SAFER is trained by the adversarial data augmentation.",
"As shown in our experiments, randomly perturbing a word to its synonyms performs poorly in practice.",
"In the image domain, randomization has been shown to overcome many of these obstacles in the IBP-based defense.",
"Empirically, Xie et al. (2017) showed that random resizing and padding in the input domain could improve the robustness.",
"Liu et al. (2018) proposed to add Gaussian noise in both the input layer and intermediate layers of CNN in both training and inference time to improve the robustness.",
"Lecuyer et al. (2019) provided a certified guarantee of this method, and later on, the bound is significantly improved in Cohen et al. (2019).",
"The resulting algorithm, called randomized smoothing, has become widely used in certifying ` 2 robustness for image classifiers.",
"These random smoothing methods are very much under-explored in NLP models.",
"The main reason is that the adversarial examples in texts are usually generated by word substitution-based perturbations instead of small ` p norm.",
"In this paper, we show that random smoothing can be integrating with adversarial training to boost the empirical robust accuracy.",
"We here consider a word substitution-based threat model, where every word in an input sentence can be replaced with one of its synonyms.",
"Given a sentence and synonym sets, we would like to ensure that the prediction of a model trained with our method cannot be altered by any word substitution-based perturbation to the sentence.",
"However, the number of possible perturbations scales exponentially with sentence length, so data augmentation cannot cover all perturbations of an input sentence.",
"We use a convex hull formed by a word and its synonyms to capture word substitutions, which allows us to search for the worse-case over the convex hull.",
"By minimizing the error with the worst-case, we can guarantee with high probability that the model is robust at any point within the convex hull (i.e., a set of synonyms).",
"The proposed method can be viewed as a kind of randomized defense on NLP models, where our main contribution is to show that it is essential to ensure the model works well in a region within the convex hull formed by the embeddings of a word and its synonyms instead of only ensuring model is good under discrete perturbation.",
"Although DNE does not provide certified lower bounds like IBP, it achieves much better accuracy on both clean and adversarial data on different models, datasets, and attacks compared with IBP.",
"DNE also can be easily integrated into any neural networks, including large architecture such as BERT.",
"Let f be a base classifier which maps an input sentence x 2 X to a class label y 2 Y .",
"We consider the setting where for each word x i in the sentence x , we are given a set of its synonyms S ( x i ) including x i itself, where we know replacing x i with any of S ( x i ) is unlikely to change the semantic meaning of the sentence 1 .",
"We relax the set of discrete points (a word and its synonyms) to a convex hull spanned by the word embeddings of all these points, denoted by C ( x i ) .",
"We assume any perturbation within this convex hull will keep the semantic meaning unchanged, and define a smoothed classifier g ( x ) based on random sampling within the convex hull as follows.",
"where x is generated by replacing the embedding of each word x i in the sentence x with a point randomly sampled from x i 's convex hull C ( x i ) .",
"In the training time, the base classifier is trained with vir-tual data augmentation sampled in the embedding space, where each word x i is replaced with a point in the convex hull containing C ( x i ) by the proposed sampling algorithm described Section 3.1.",
"A new adversarial training algorithm is also designed to enable NLP models to defense against the strong attacks that search for the worst-case over all combinations of word substitutions.",
"A similar sampling strategy is conducted in the inference time.",
"Note that it is impossible to precisely calculate the probabilities with which f classifies x as each class, so we use a Monte Carlo algorithm for evaluating g ( x ) .",
"As shown in Fig. 1",
"(a), for a sentence x , we draw k samples of x by running k noise-corrupted copies of x through the base classifier f ( x ) , where x is generated by replacing the embedding of every word x j in the sentence x with a point randomly sampled from C ( x j ) (the pentagon with yellow dashed borders).",
"If the class y appeared with maximal weight in the categorical distribution x , the smoothed classifier g ( x ) returns y .",
"The decision regions of the base classifier are drawn in different colors if we evaluate the smoothed classifier at an input x j , where the regions with different colors represent different classes.",
"Assuming that the word x i is replaced with x j by an adversary, we need to sample the points from the convex hull C ( x j ) in the inference time.",
"However, some of x j 's synonyms (indicated by yellow circles) are outside the region formed by x i and its synonyms (indicated by blue circles).",
"We thus should expand this region to the polygon with green dashed borders to make sure that the model makes the same prediction for any point sampled from the expanded region.",
"We ensure that the smoothed 1 Follow Jia et al. (2019), we base our sets of word substitutions S ( x i ) on the method of Alzantot et al. (2018).",
"classifier label x j as f ( x i ) by training the base classifier to label the instances sampled from the expanded region as f ( x i ) so that the blue region is always larger than green, yellow and pink ones.",
"The random perturbations of x are combinatorial, and thus training the base classifier f that consistently labels any perturbation of x as y requires checking an exponential number of predictions.",
"To better reflect those discrete word substitution-based perturbations, we sample the points from a convex hull using the Dirichlet distribution.",
"This allows us to control how far we can expect the points are from any vertex of the convex hull.",
"If a sampled point is very close to a vertex (i.e., a word), it simulates a word substitution-based perturbation in which the vertex is chosen to replace the original one.",
"Any point sampled from C ( x i ) can be represented as a convex combination of the embeddings of S ( x i ) : ( x i ) = X x j 2 S ( x i ) \u0000 j x j , (2) where \u0000 j \u0000 0 , j \u0000 j = 1 , and x j (in bold type) denotes the embedding of x j .",
"A vector \u0000 contains the weights drawn from the Dirichlet distribution as follows: \u0000 1 , . . . , \u0000 m Dir ( 1 , . . . , m ) , (3) where m is the size of S ( x i ) , and the Dirichlet distribution is parameterized by a vector of used to control the degree in which the words in S ( x i ) contribute to generate the vector ( x i ) .",
"For the smoothed classifier g to classify an adversarial example of x correctly and robustly, f needs to consistently classify x as the gold label of x .",
"Therefore, we train the base classifier with virtual data augmentation x for each training example x .",
"In Fig. 1",
"(b), we illustrate the process by considering a sentence with one word x i and the set of its synonyms (shown as blue circles).",
"The input perturbations span a convex hull of C ( x i ) around the word x i (the pentagon with blue borders, projected to 2D here).",
"Assuming that the word x i is replaced with x j by an adversary, noise-corrupted samples will be drawn from C ( x j ) (the pentagon with yellow dashed borders) instead of C ( x i ) .",
"If the size of the intersection of C ( x i ) and C ( x j ) is small, we cannot expect f will consistently classify x j as the same label as x i .",
"Therefore, we expand C ( x i ) to the convex hull spanned by the word embeddings of the union of S ( x i ) and all of S ( x j ) , x j 2 S ( x i ) , namely x i 's 1-hop neighbors and 2-hop neighbors in their embedding space, denoted by B ( x i ) .",
"We use e x to denote a virtual example created by replacing the embedding of every word x i in an input sentence x with a point randomly sampled from the expanded B ( x i ) by the Dirichlet distribution.",
"Such expansions will slightly hurt the performance on the clean data.",
"Recall that different values of can be used to control the degree in which the 1-hop and 2-hop neighbors contribute to generating e x .",
"In our implementation, we let the expected weights of the 2-hop neighbors are less than one-half of those of the 1-hop neighbors when computing e x as Eq.",
"(2) to reduce the impact on the clean accuracy.",
"The base classifier is trained by minimizing the cross-entropy error with virtual data augmentation by gradient descent.",
"We assume the base classifier takes form f ( x ) = arg max c 2 Y s c ( x ) , where each s c ( x ) is the scoring function for the class c .",
"That is, the outputs of the neural networks before the softmax layer.",
"Our objective is to maximize the sum of the log-probabilities that f will classify each e x as the label of x .",
"Let D be a training set of n instances, and each of them is a pair of ( x, y ) : X 8 ( x,y ) 2 D log P e x ( f ( e x ) = y ) = X 8 ( x,y ) 2 D log E e x 1 arg max c 2 Y s c ( e x ) = y \u0000 , (4) where e x is a virtual example randomly created for an input example x .",
"The softmax function can be viewed as a continuous, differentiable approximation of argmax: 1 arg max c 2 Y s c ( e x ) = y \u0000 exp( s y ( e x )) P c 2 Y exp( s c ( e x )) .",
"(5) By the concavity of log and Jensen's inequality, the objective is approximately lower-bounded by: X 8 ( x,y ) 2 DE e x log exp( s y ( e x )) P c 2 Y exp( s c ( e x )) \u0000 .",
"(6) This is the negative cross-entropy loss with virtual data augmentation.",
"Maximizing Eq.",
"(6) approximately maximizes Eq.",
"(4).",
"Since the virtual data point defined in Eq.",
"(2) is a linear combination of embeddings of S ( x i ) , the back-propagation will propagate the gradient to all these embeddings with nonzero coefficients, thus allowing updating all these embeddings together in a coordinated way when performing parameter updates.",
"As illustrated in Fig. 1, the whole convex hull will be shifted together at each iteration.",
"In contrast, traditional adversarial training only updates the embedding of one synonym (a vertex of the convex hull), which will distort the relative position of those embeddings and thus become slower and less stable.",
"It is probably why the word embeddings are fixed during training in the certified defenses (Huang et al., 2019; Jia et al., 2019).",
"Even though the word embeddings can be pre-trained, holding embeddings fixed makes them impossible to be fine-tuned for the tasks of interest, which may hurt the performance.",
"To promote higher robustness and invariance to any region within the convex hull, we further propose combining Dirichlet sampling with adversarial training to better explore different regions inside the convex hull B ( x i ) .",
"Any point sampled from B ( x i ) is represented as the convex combination of the embeddings of its vertices, which ensures that a series of points keep staying inside of the same B ( x i ) while searching for the worst-case over the entire convex hull by any optimization method.",
"Assuming that a virtual example e x is generated for an input sentence x , we search for the next adversarial example to maximize the model's prediction error by updating every vector of weights \u0000 = exp( ) by the following formula, each of them is used to represent a point sampled from B ( x i ) as Eq.",
"(2): \u0000 \u0000\u0000\u0000\u0000 @ log p ( e x, y ) @ \u0000\u0000\u0000\u0000 2 , p ( e x, y ) = exp( s y ( e x )) P c 2 Y exp( s c ( e x )) , (7) where is the step size.",
"In order to ensure that the updated \u0000 satisfy \u0000 j \u0000 0 and j \u0000 j = 1 as before, we sequentially apply logarithmic and softmax functions to \u0000 after it is randomly drawn from Dir ( ) .",
"Note that softmax (log( \u0000 )) = \u0000 , and will be updated instead of \u0000 in our implementation.",
"By updating only, the representation defined in Eq.",
"(2) also ensures that a series of points keep staying inside of the same convex hull while searching for the worst-case over B ( x i ) .",
"As shown in Fig. 1",
"(b), we apply this update multiple times with a small step size (arrow-linked red circles represent data points generated after each update by adding gradient-guided perturbations to their preceding ones).",
"When training the base classifier, we add all of the virtual examples generated at every search step (i.e., all of the points indicated by the red circles in Fig. 1",
"(b)) into the training set to better explore different regions around x .",
"As mentioned above, we use a Monte Carlo algorithm for evaluating g ( x ) .",
"Given an input sentence x , we draw k Monte Carlo samples of x by running k noise-corrupted copies of x through the base classifier f ( x ) , where each x is created by replacing the embedding of every word x i in the sentence x with a point randomly sampled with the Dirichlet distribution from C ( x i ) (not from the expanded convex hull B ( x i ) in the inference time).",
"We combine predictions by taking a weighted average of the softmax probability vectors of all the randomly created x , and take the argmax of this average vector as the final prediction.",
"We use CBW-D (Dubey et al., 2019) to compute those weights.",
"The idea behind it is to give more weight to the predictions that have more confidence in their results.",
"CBW-D calculates the weights w as a function of the differences between the maximum value of the softmax distribution and the other values as follows: w = X c 2 Y ,c 6 = y ( p ( x, y ) \u0000 p ( x, c )) r , (8) where y is the class having the maximum probability in a prediction, r is a hyperparameter tuned using cross-validation in preliminary experiments.",
"We conducted experiments on multiple data sets for text classification and natural language inference tasks.",
"Various model architectures (bag-of-words, CNN, LSTM, and attention-based) were used to evaluate our DNE and other defense methods under two recently proposed attacks.",
"Ren et al. (2019) described a greedy algorithm, called Probability Weighted Word Saliency (PWWS), for adversarial text attacks based on word substitutions with synonyms.",
"The word replacement order is determined by taking both word saliency and prediction probability into account.",
"Alzantot et al. (2018) developed a generic algorithm-based attack, denoted by GA, to generate semantically and syntactically similar adversarial examples.",
"They use a language model (LM) (Chelba et al., 2018) to rule out candidate substitute words that do not fit within the context.",
"However, unlike PWWS, ruling out some candidates by the LM will significantly reduce the number of candidate substitute words ( 65% off on average).",
"For a fair comparison, we report the robust accuracy under GA attack both with and without using the LM.",
"We measure adversarial accuracy on perturbations found by the two attacks (PWWS and GA) on 1 , 000 randomly selected test examples for each data set.",
"We primarily compare with recently proposed defense methods, including adversarial training (ADV) (Michel et al., 2019) and the interval bound propagation (IBP) based methods (Huang et al., 2019; Jia et al., 2019).",
"The former can improve the model's robustness without suffering many drops on the clean input data by adding adversarial examples in the training stage.",
"The latter was shown to be more robust to word substitution-based perturbations than ones trained with data augmentation.",
"To demonstrate that mixing the embedding of the original word with its synonyms performs better than naively replacing the word with its synonyms, we designed a new baseline, called RAN.",
"The models trained by RAN will take the corrupted copies of each input sentence as inputs, in which every word of the sentence is randomly replaced with one of its synonyms.",
"The same random replacement is used in the inference time, and the prediction scores are ensembled to get an output.",
"RAN can be viewed as a variant of SAFER (Ye et al., 2020), where during the training SAFER's perturbation set is replaced with the synonym set used by the adversaries and the number of ensembles is reduced to 16 (instead of 5 , 000 ) at the inference time, which make it feasible to be evaluated empirically under the attacks.",
"We experimented on two text classification data sets: Internet Movie Database (IMDB) (Maas et al., 2011) and AG News corpus (AGNEWS) (Zhang et al., 2015).",
"We implemented three models for these text classification tasks like (Jia et al., 2019).",
"The bag-of-words model (BOW) averages the word embeddings for each word in the input, then passes this through a one-layer feedforward network with 100 -dimensional hidden state to get a final logit.",
"The other two models are similar, except they run either a CNN or a two-layer LSTM on the word embeddings.",
"All models are trained on cross-entropy loss, and their hyperparameters are tuned on the validation set (see Appendix A.1 for details).",
"Table 1 reports both clean accuracy (CLN) and accuracy under two attack algorithms (PWWS and GA) on IMDB with three different model architectures (BOW, CNN, and LSTM).",
"We use GA-LM to denote the GA-based attack that rules out candidate substitute words that may not fit well with the context by the LM (Chelba et al., 2018).",
"We use ORIG to the testing and adversarial accuracy of the models trained without using any defense method.",
"As we can see from Table 1, DNE ( k = 16 ) outperforms ADV and IBP on the clean input data, and consistently performs better than the competitors across the three different architectures under all the attack algorithms.",
"For the text classification, LSTMs seem more vulnerable to adversarial attacks than BOWs and CNNs.",
"Under the strongest attack GA, while the accuracy of LSTMs trained by Table 1: Text classification on IMDB dataset.",
"ORIG, ADV, IBP, and RAN dropped to 0 .",
"2% , 32% , 64 .",
"3% , and 8 .",
"1% respectively, the LSTM trained by DNE still achieved 82 .",
"2% accuracy.",
"The results on AGNEWS are reported in Table 2, and we found similar trends as those on IMDB.",
"Any model performed on AGNEWS shows to be more robust than the same one on IMDB.",
"It is probably because the average length of the sentences in IMDB ( 255 words on average) is much longer than that in AGNEWS ( 43 words on average).",
"Longer sentences allow the adversaries to apply more word substitution-based perturbations to the examples.",
"Generally, DNE performs better than IBP and comparable to ADV on the clean data, while it outperforms the others in all other cases.",
"The results for both datasets show that our DNE consistently achieves better clean and robust accuracy.",
"We conducted the experiments of natural language inference on Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) corpus.",
"We also implemented three models for this task.",
"The bag-of-words model (BOW) encodes the premise and hypothesis separately by summing their word vectors, then feeds the concatenation of these encodings to a two-layer feedforward network.",
"The other two models are similar, except they run either a Decomposable Attention (DecomAtt) (Parikh et al., 2016) or BERT (Devlin et al., 2019) on the word embeddings to generate the sentence representations, which uses attention between the premise and hypothesis to compute richer representations of each word in both sentences.",
"All models are trained with cross-entropy loss, and their hyperparameters are tuned on the validation set (see Appendix A.2).",
"better than the others on the robust accuracy while suffering little performance drop on the clean data on SNLI.",
"Although our proposed baseline RAN ( k = 16 ) achieves a slightly higher accuracy (just 1 . 2% difference) with BERT under PWWS attack, its accuracy rapidly drops to 27% under the more sophisticated attack GA, while DNE still yields 62 .",
"7% in accuracy.",
"The results on SNLI show that DNE can be applied to attention-based models like DecomAtt and scales well to large architectures such as BERT.",
"We leave the results of IBP with BERT as unknown since it is still a question whether IBP-based methods can be applied to BERT.",
"4.3 Ablation Study We conducted an ablation study over IMDB validation set on DNE with CNNs to analyze the robustness and generalization strength of different variants.",
"The w/o EXPANSION in the second row of Table 4 indicates that given any word x i in a sentence, we generate virtual examples by sampling from C ( x i ) instead of the expanded B ( x i ) during the training.",
"The variant of DNE trained without using the adversarial training algorithm described in Section 3.3 is indicated by w/o ADV-TRAIN.",
"If the single-point update strategy is applied to train DNE, we still use the same gradient-guided optimization method to find adversarial examples over B ( x i ) , but the found adversarial example x j is represented as x i + \u0000 , where \u0000 is the distance between x i and x j .",
"By such representation, only x i will be updated during the training instead of the embeddings of all its synonyms, and this variant is indicated by w/o COORD-UPD.",
"In the last row, we also report the results predicted without using the ensemble method (i.e., k = 1 ).",
"As we can see from Table 4, the differences in accuracy among the variants of DNE are negligible on the clean data.",
"The key components to improve the robustness of the models in descending order by their importance are the following: sampling from the expanded convex hull, combining with adversarial training, updating the word embeddings together, and using the ensemble to get the prediction.",
"We also observed that the stronger the attack algorithm is, the more effective these components will be.",
"When both expansion and adversarial are removed, the resulting accuracies on the validation set of IMDB dataset with the CNN-based model drop to 48 .",
"6% (PWWS) and 17 .",
"0% (GA).",
"In all the above experiments, we simply set the value of for 1-hop neighbors to 1 .",
"0 , and that for 2-hop neighbors to 0 .",
"5 .",
"We also conducted two experiments to investigate whether the Dirichlet distribution is essential.",
"In the first one, we uniformly sample the weights (by setting the value of for both 1-hop and 2-hop neighbors to 1 . 0 ) and do an adversarial training step.",
"The clean accuracy is 82 .",
"4% , and the accuracies under the PWWS and GA attacks are 79 .",
"8% and 78 .",
"21% respectively.",
"In the second experiment, we randomly sample a vertex from 2 -hop neighbors and then do the same adversarial training.",
"The resulting accuracies are 85% (clean), 75 .",
"8% (PWWS), and 54 .",
"6% (GA).",
"We found there is a trade-off between the clean accuracy and the accuracy under the attack.",
"Generally, the greater the value of is, the more robust the models will be, but the worse they perform on the clean data.",
"We also used different values of in the Dirichlet distribution to control the degree in which 1 -hop and 2 -hop neighbors contribute to generating adversarial examples.",
"If we treat 2 -hop neighbors equally as 1 -hop ones, it will significantly reduce the model's accuracy on the clean data, although it may lead to more robust models.",
"Although this study mainly focuses on the setting specified by (Jia et al., 2019), we also conducted experiments in which the defenders do not know how the attackers generate synonyms.",
"We used the synonyms suggested by (Alzantot et al., 2018) for training and evaluated the resulting models with CNNs and LSTMs on IMDB and AGNEWS datasets under a new attack system, called TextFooler (Jin et al., 2020).",
"We strictly followed the method proposed in (Jin et al., 2020) to generate synonyms during the attacking phase.",
"The experimental results show that DNE achieved 30 .",
"6% and 13 .",
"4% higher in average accuracy than ADV on AGNEWS and IMDB respectively.",
"In this study, we develop a novel defense algorithm for NLP models to substantially improve the robust accuracy without sacrificing their performance too much on clean data.",
"This method is broadly applicable, generic, scalable, and can be incorporated with little effort in any neural network, and scales to large architectures.",
"A novel adversarial training algorithm is also proposed, enabling NLP models to defend against the strong attacks that search for the worst-case over all combinations of word substitutions.",
"We demonstrated through extensive experimentation that our adversarially trained smooth classifiers consistently outperform all existing empirical and certified defenses by a significant margin on three datasets across different network architectures, establishing state-of-the-art for defenses against adversarial text attacks.",
"We choose to focus on synonym swapping because it is one of the most influential and widely-used attack methods.",
"There is still no effective method to defend against existing attack algorithms from this kind, such as Hotflip (Ebrahimi et al., 2018), PWWS (2019), GA (2018), TextFooler (Jin et al., 2020) etc.",
"A general method to defend more different attacks is worth exploring, but we choose to leave this as future work.",
"This work was supported by Shanghai Municipal Science and Technology Major Project (No. 2021SHZDZX0103), National Science Foundation of China (No. 62076068) and Zhangjiang Lab."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"other"
] |
[
"In human-level NLP tasks, such as predicting mental health, personality, or demographics, the number of observations is often smaller than the standard 768+ hidden state sizes of each layer within modern transformer-based language models, limiting the ability to effectively leverage transformers.",
"Here, we provide a systematic study on the role of dimension reduction methods (principal components analysis, factorization techniques, or multi-layer auto-encoders) as well as the dimensionality of embedding vectors and sample sizes as a function of predictive performance.",
"We first find that fine-tuning large models with a limited amount of data pose a significant difficulty which can be overcome with a pre-trained dimension reduction regime.",
"RoBERTa consistently achieves top performance in human-level tasks, with PCA giving benefit over other reduction methods in better handling users that write longer texts.",
"Finally, we observe that a majority of the tasks achieve results comparable to the best performance with just 112 of the embedding dimensions.",
"Transformer based language models (LMs) have quickly become the foundation for accurately approaching many tasks in natural language processing (Vaswani et al., 2017; Devlin et al., 2019).",
"Owing to their success is their ability to capture both syntactic and semantic information (Tenney et al., 2019), modeled over large, deep attention-based networks (transformers) with hidden state sizes on the order of 1000 over 10s of layers (Liu et al., 2019; Gururangan et al., 2020).",
"In total such models typically have from hundreds of millions (De-vlin et al., 2019) to a few billion parameters (Raffel et al., 2020).",
"However, the size of such models presents a challenge for tasks involving small num-bers of observations, such as for the growing number of tasks focused on human-level NLP.",
"Human-level NLP tasks, rooted in computational social science, focus on making predictions about people from their language use patterns.",
"Some of the more common tasks include age and gender prediction (Sap et al., 2014; Morgan-Lopez et al., 2017) , personality (Park et al., 2015; Lynn et al., 2020), and mental health prediction (Coppersmith et al., 2014; Guntuku et al., 2017; Lynn et al., 2018).",
"Such tasks present an interesting challenge for the NLP community to model the people behind the language rather than the language itself, and the social scientific community has begun to see success of such approaches as an alternative or supplement to standard psychological assessment techniques like questionnaires (Kern et al., 2016; Eichstaedt et al., 2018).",
"Generally, such work is helping to embed NLP in a greater social and human context (Hovy and Spruit, 2016; Lynn et al., 2019).",
"Despite the simultaneous growth of both (1) the use of transformers and (2) human-level NLP, the effective merging of transformers for human-level tasks has received little attention.",
"In a recent human-level shared task on mental health, most participants did not utilize transformers (Zirikly et al., 2019).",
"A central challenge for their utilization in such scenarios is that the number of training examples (i.e. sample size) is often only hundreds while the parameters for such deep models are in the hundreds of millions.",
"For example, recent human-level NLP shared tasks focused on mental health have had N = 947 (Milne et al., 2016), N = 9 , 146 (Lynn et al., 2018) and N = 993 (Zirikly et al., 2019) training examples.",
"Such sizes all but rules out the increasingly popular approach of fine-tuning transformers whereby all its millions of parameters are allowed to be updated toward the specific task one is trying to achieve (De-vlin et al., 2019; Mayfield and Black, 2020).",
"Recent research not only highlights the difficulty in fine-tuning with few samples (Jiang et al., 2020) but it also becomes unreliable even with thousands of training examples (Mosbach et al., 2020).",
"On the other hand, some of the common transformer-based approaches of deriving contextual embeddings from the top layers of a pre-trained model (Devlin et al., 2019; Clark et al., 2019) still leaves one with approximately an equal number of embedding dimensions as training size.",
"In fact, in one of the few successful cases of using transformers for a human-level task, further dimensionality reduction was used to avoid over-fit (Matero et al., 2019), but an empirical understanding of the application of transformers for human-level tasks which models are best and the relationship between embedding dimensions, sample size, and accuracy has yet to be established.",
"In this work, we empirically explore strategies to effectively utilize transformer-based LMs for relatively small sample-size human-level tasks.",
"We provide the first systematic comparison of the most widely used transformer models for demographic, personality, and mental health prediction tasks.",
"Then, we consider the role of dimension reduction to address the challenge of applying such models on small sample sizes, yielding a suggested minimum number of dimensions necessary given a sample size for each of demographic, personality, and mental health tasks 1 .",
"While it is suspected that transformer LMs contain more dimensions than necessary for documentor word-level NLP (Li and Eisner, 2019; Bao and Qiao, 2019), this represents the first study on transformer dimensionality for human-level tasks.",
"Recently, NLP has taken to human-level predictive tasks using increasingly sophisticated techniques.",
"The most common approaches use n-grams and LDA (Blei et al., 2003) to model a person's language and behaviors (Resnik et al., 2013; Kern et al., 2016).",
"Other approaches utilize word embeddings (Mikolov et al., 2013; Pennington et al., 2014) and more recently, contextual word representations (Ambalavanan et al., 2019).",
"Our work is inspired by one of the top performing systems at a recent mental health prediction shared task (Zirikly et al., 2019) that utilized transformer-based contextualized word embeddings fed through a non-negative matrix fac-1 dimension reduction techniques can also be pre-trained leveraging larger sets of unlabeled data torization to reduce dimensionality (Matero et al., 2019).",
"While the approach seems reasonable for addressing the dimensionality challenge in using transformers, many critical questions remain unanswered:",
"(a) Which type of transformer model is best?",
"(b) Would fine-tuning have worked instead?",
"and",
"(c) Does such an approach generalize to other human-level tasks?",
"Most of the time, one does not have a luxury of a shared task for their problem at hand to determine a best approach.",
"Here, we look across many human-level tasks, some of which with the luxury of having relatively large sample sizes (in the thousands) from which to establish upper-bounds, and ultimately to draw generalizable information on how to approach a human-level task given its domain (demographic, personality, mental health) and sample size.",
"Our work also falls in line with a rising trend in AI and NLP to quantify the number of dimensions necessary.",
"While this has not been considered for human-level tasks, it has been explored in other domains.",
"The post processing algorithm (Mu and Viswanath, 2018) of the static word embeddings motivated by the power law distribution of maximum explained variance and the domination of mean vector turned out to be very effective in making these embeddings more discriminative.",
"The analysis of contextual embedding models (Etha-yarajh, 2019) suggest that the static embeddings contribute to less than 5% to the explained variance, the contribution of the mean vector starts dominating when contextual embedding models are used for human-level tasks.",
"This is an effect of averaging the message embeddings to form user representations in human-level tasks.",
"This further motivates the need to process these contextual embeddings into more discriminative features.",
"Lastly, our work weighs into the discussion on just which type of model is best in order to produce effective contextual embedding models.",
"A majority of the models fall under two broad categories based on how they are pre-trained auto-encoders (AE) and auto-regressive (AR) models.",
"We compare the performance of AE and AR style LMs by comparing the performance of two widely used models from each category with comparable number of parameters.",
"From the experiments involving BERT, RoBERTa (Liu et al., 2019), XLNet (Yang et al., 2019) and GPT-2 (Radford et al., 2019), we find that AE based models perform better than AR style models ( with comparable model sizes ), and RoBERTa is the best choice amongst these four widely used models.",
"We evaluate approaches over 7 human-level tasks spanning Demographics, Mental Health, and personality prediction.",
"The 3 datasets used for these tasks are described below.",
"FB-Demogs.",
"(age, gen, ope, ext)",
"One of our goals was to leverage one of the largest human-level datasets in order to evaluate over subsam-ples of sizes.",
"For this, we used the Facebook demographic and personality dataset of Kosinski et al. (2013).",
"The data was collected from approximately 71k consenting participants who shared Facebook posts along with demographic and personality scores from Jan-2009 through Oct-2011.",
"The users in this sample had written at least a 1000 words and had selected English as their primary language.",
"Age (age) was self-reported and limited to those 65 years or younger (data beyond this age becomes very sparse) as in (Sap et al., 2014).",
"Gender (gen) was only provided as a limited single binary, male-female classification.",
"Personality was derived from the Big 5 personality traits questionnaires, including both extraversion (ext one's tendency to be energized by social interaction) and openess (ope, one's tendency to be open to new ideas) (Schwartz et al., 2013).",
"Disattenuated Pearson correlation 2 ( r dis ) was used to measure the performance of these two personality prediction tasks.",
"CLPsych-2018.",
"(bsag, gen2)",
"The CLPsych 2018 shared task (Lynn et al., 2018) consisted of sub-tasks aimed at early prediction of mental health scores (depression, anxiety and BSAG 3 score) based on their language.",
"The data for this shared task (Power and Elliott, 2005) comprised of English essays written by 11 year old students along with their gender (gen2) and income classes.",
"There were 9217 students' essays for training and 1000 for testing.",
"The average word count in an essay was less than 200.",
"Each essay was annotated with the student's psychological health measure, 2 Disattenuated Pearson correlation helps account for the error of the measurement instrument (Kosinski et al., 2013; Murphy and Davidshofer, 1988).",
"Following (Lynn et al., 2020), we use reliabilities: r xx = 0 .",
"70 and r yy = 0 .",
"77 .",
"3 Bristol Social Adjustment Guide (Ghodsian, 1977) scores contains twelve sub-scales that measures different aspects of childhood behavior.",
"BSAG (when 11 years old) and distress scores at ages 23, 33, 42 and 50.",
"This task used a disattenuated pearson correlation as the metric ( r dis ).",
"CLPsych-2019.",
"(sui)",
"This 2019 shared task (Zirikly et al., 2019) comprised of 3 sub-tasks for predicting the suicide risk level in reddit users.",
"This included a history of user posts on r/SuicideWatch (SW), a subreddit dedicated to those wanting to seek outside help for processing their current state of emotions.",
"Their posts on other subreddits (NonSuicideWatch) were also collected.",
"The users were annotated with one of the 4 risk levels: none, low, moderate and severe risk based on their history of posts.",
"In total this task spans 496 users in training and 125 in testing.",
"We focused on Task A, predicting suicide risk of a user by evaluating their (English) posts across SW, measured via macro-F1.",
"Here we discuss how we utilized representations from transformers, our approaches to dimensionality reduction, and our technique for robust evaluation using bootstrapped sampling.",
"The second to last layer representation of all the messages was averaged to produce a 768 dimensional feature for each user 4 .",
"These user representations are reduced to lower dimensions as described in the following paragraphs.",
"The message representation from a layer was attained by averaging the token embeddings of that layer.",
"To con-4 The second to last layer was chosen owing to its consistent performance in capturing semantic and syntactic structures (Jawahar et al., 2019).",
"sider a variety of transformer LM architectures, we explored two popular auto-encoder (BERT and RoBERTa) and two auto-regressive (XLNet and GPT-2) transformer-based models.",
"For fine-tuning evaluations, we used the transformer based model that performs best across the majority of our task suite.",
"Transformers are typically trained on single messages or pairs of messages, at a time.",
"Since we are tuning towards a human-level task, we label each user's message with their human-level attribute and treat it as a standard document-level task (Morales et al., 2019).",
"Since we are interested in relative differences in performance, we limit each user to at most 20 messages approximately the median number of messages, randomly sampled, to save compute time for the fine tuning experiments.",
"Algorithm 1 Dimension Reduction and Evaluation Notation: h D : hidden size, f ( ) : function to train dimension reduction, : Linear Model, g ( , ) : Logistic loss function for classification and L2 loss for regression, : learning rate, T : Number of iterations (100).",
"Data: D pt RN pt h D : Pre-training embeddings, D max RN max h D : Task training embeddings, D te RN te h D : Test embeddings, Y max : Outcome for train set, Y te : Outcome for test set.",
"1: W f ( D pt ) 2: D max D max W 3: D te D te W 4: for i = 1 , . . . , 10 do 5: (0) i (cid:126) 0 6: Sample ( D ta , Y ta ) from ( D max , Y max ) 7: for j = 1 , . . . , T do 8: ( j ) i ( j 1) i g ( D ta , Y ta ) 9: end for 10: Y te i D te ( T ) i 11: end for 12: Evaluate ( Y te , Y te ) 4.2 Dimension Reduction We explore singular value decomposition-based methods such as Principal components analysis (PCA) (Halko et al., 2011), Non-negative matrix factorization (NMF) (Fvotte and Idier, 2011) and Factor analysis (FA) as well as a deep learning approach: multi-layer non linear auto encoders (NLAE) (Hinton and Salakhutdinov, 2006).",
"We also considered the post processing algorithm (PPA) of word embeddings 5 (Mu and Viswanath, 2018) that has shown effectiveness with PCA on word level (Raunak et al., 2019).",
"Importantly, besides transformer LMs being pre-trained, so too can dimension reduction.",
"Therefore, we distinguish: (1) learning the transformation from higher dimension to lower dimensions (preferably on a large data sample from the same domain) and (2) applying the learned transformation (on the task's train/test set).",
"For the first step, we used a separate set of 56k unlabeled user data in the case of FB-demog 6 .",
"For CLPsych-2018 and -2019 (where separate data from the exact domains was not readily available), we used the task training data to train the dimension reduction.",
"Since variance explained in factor analysis typically follows a power law, these methods transformed the 768 original embedding dimensions down to k , in powers of 2: 16, 32, 64, 128, 256 or 512.",
"We systematically evaluate the role of training sample ( N ta ) versus embedding dimensions ( k ) for human-level prediction tasks.",
"The approach is described in algorithm 1.",
"Varying N ta , the task-specific train data (after dimension reduction) is sampled randomly (with replacement) to get ten training samples with N ta users each.",
"Small N ta values simulate a low-data regime and were used to understand its relationship with the least number of dimensions required to perform the best ( N ta vs k ).",
"Bootstrapped sampling was done to arrive at a conservative estimate of performance.",
"Each of the bootstrapped samples was used to train either an L2 penalized (ridge) regression model or logistic regression for the regression and classification tasks respectively.",
"The performance on the test set using models from each bootstrapped training sample was recorded in order to derive a mean and standard error for each N ta and k for each task.",
"To summarize results over the many tasks and possible k and N ta values in a useful fashion, we propose a first k to peak (fkp) ' metric.",
"For each N ta , this is the first observed k value for which the mean score is within the 95% confidence interval of the peak performance.",
"This quantifies the minimum number of dimensions required for peak performance.",
"We start by comparing transformer LMs, replicating the setup of one of the state-of-the-art systems for the CLPsych-2019 task in which embeddings were reduced from BERT-base to approximately 100 dimensions using NMF (Matero et al., 2019).",
"Specifically, we used 128 dimensions (to stick with powers of 2 that we use throughout this work) as we explore the other LMs over multiple tasks (we will explore other dimensions next) and otherwise use the bootstrapped evaluation described in the method.",
"Table 2 shows the comparison of the four transformer LMs when varying the sample size ( N ta ) between two low data regimes: 100 and 500 7 .",
"RoBERTa and BERT were the best performing models in almost all the tasks, suggesting auto-encoders based LMs are better than auto-regressive models for these human-level tasks.",
"Further, RoBERTa performed better than BERT in the majority of cases.",
"Since the number of model parameters are comparable, this may be attributable to RoBERTa's increased pre-training corpus, which is inclusive of more human discourse and larger vocabularies in comparison to BERT.",
"We next evaluate fine-tuning in these low data situations 8 .",
"Utilizing RoBERTa, the best performing transformer from the previous experiments, we perform fine-tuning across the age and gender tasks.",
"Following (Sun et al., 2019; Mosbach et al., 2020), we freeze layers 0-9 and fine-tune layers 10 and 11.",
"Even these top 2 layers alone of RoBERTa still result in a model that is updating tens of millions of parameters while being tuned to a dataset of hundreds of users and at most 10,000 messages.",
"In table 3, results for age and gender are shown for both sample sizes of 100 and 500.",
"For Age, the average prediction across all of a user's messages was used as the user's prediction and for gender the mode was used.",
"Overall, we find that fine-tuning 8 As we are focused on readily available models, we consider substantial changes to the architecture or training as outside the scope of this systematic evaluation of existing techniques.",
"offers lower performance with increased overhead for both train time and modeling complexity (hy-perparameter tuning, layer selection, etc).",
"We did robustness checks for hyper-parameters to offer more confidence that this result was not simply due to the fastidious nature of fine-tuning.",
"The process is described in Appendix B, including an extensive exploration of hyper-parameters, which never resulted in improvements over the pre-trained setup.",
"We are left to conclude that fine-tuning over such small user samples, at least with current typical techniques, is not able to produce results on par with using transformers to produce pre-trained embeddings.",
"We evaluated the reduction techniques in low data regime by comparing their performance on the downstream tasks across 100 and 500 training samples ( N ta ).",
"As described in the methods, techniques including PCA, NMF and FA along with NLAE, were applied to reduce the 768 dimensional RoBERTa embeddings to 128 features.",
"The results in table 4 show that PCA and NLAE perform most consistently, with PCA having the best scores in the majority tasks.",
"NLAE's performance appears dependent on the amount of data available during the pre-training.",
"This is evident from the results in Table 4 where the N pt was set to a uniform value and tested for all the tasks with N ta set to 100 and 500.",
"Thus, PCA appears a more reliable, showing more generalization for low samples.",
"Now that we have found (1) RoBERTa generally performed best, (2) pre-trainining worked better than fine-tuning, and (3) PCA was most consistently best for dimension reduction (often doing better than the full dimensions), we can systematically evaluate model performance as a function of training sample size ( N ta ) and number of dimensions ( k ) over tasks spanning demographics, personality, and mental health.",
"We exponentially increase k from 16 to 512, recognizing that variance explained decreases exponentially with dimension (Mu and Viswanath, 2018).",
"The performance is also compared with that of using the RoBERTa embeddings without any reduction.",
"Figure 1 compares the scores at reduced dimensions for age, ext, ope and bsag.",
"These charts depict the experiments on typical low data regime ( N ta 1000 ).",
"Lower dimensional representations performed comparable to the peak performance with just 13 the features while covering the most Dimension reduced (mean std err) All dimensions (mean) Figure 1: Comparison of performance for all regression tasks: age, ext, ope and bsag over varying N ta and k .",
"number of tasks and just 112 features for the majority of tasks.",
"Charts exploring other ranges of N ta values and remaining tasks can be found in the appendix D.1.",
"Lastly, we devise an experiment motivated by answering the question of how many dimensions are necessary to achieve top results, given a limited sample size.",
"Specifically, we define first k to peak ' ( fkp ) as the least valued k that produces an accuracy equivalent to the peak performance.",
"A 95% confidence interval was computed for the best score (peak) for each task and each N ta based on bootstrapped resamples, and fkp was the least number of dimensions where this threshold was passed.",
"in future human-level NLP tasks, where such an experiment (which relies on resampling over larger amounts of training data) is typically not feasible.",
"Table 5 shows the fkp over all of the training sample sizes ( N ta ).",
"The exponential median (med) in the table is calculated as follows: med = 2 Median(log(x)) The fkp results suggest that more training samples available yield ability to leverage more dimensions, but the degree to which depends on the task.",
"In fact, utilizing all the embedding dimensions was only effective for demographic prediction tasks.",
"The other two tasks benefited from reduction, often with only 112 to 16 of the original second to last transformer layer dimensions.",
"Here, we seek to better understand why using pre-trained models worked better than fine-tuning, and differences between using PCA and NMF components in the low sample setting ( N ta = 500 ).",
"Pre-trained vs Fine-tuned.",
"We looked at categories of language from LIWC (Tausczik and Pennebaker, 2010), correlated with the difference in the absolute error of the pre-trained and fine-tuned model in age prediction.",
"Table 6 suggests that pre-trained model is better at handling users with language conforming to the formal rules, and fine-tuning helps in learning better representation of the affect words and captures informal language well.",
"Furthermore, these LIWC variables are also known to be associated with age (Schwartz et al., 2013).",
"Additional analysis comparing these two models is available in appendix E.1.",
"PCA vs NMF.",
"Figure 2 suggests that PCA is better at handling longer text sequences than NMF (> 55 one grams on avg) when trained with less 2 3 4 5 log(Avg 1grams Per Msg) 0 10 20 Absolute Error nmfpca Age Prediction 2 3 4 5 log(Avg 1grams Per Msg) 0 1 2 Absolute Error nmfpca Ext Prediction Figure 2: Comparison of the absolute error of NMF and PCA with the average number of 1 grams per message.",
"data.",
"This choice wouldn't make much difference when used for Tweet-like short texts, but the errors diverge rapidly for longer samples.",
"We also see that PCA is better at capturing information from these texts that have higher predictive power in downstream tasks.",
"This is discussed in appendix E.2 along with other interesting findings involving the comparison of PCA and the pre-trained model in E.3.",
"Ethical Consideration.",
"We used existing datasets that were either collected with participant consent (FB and CLPsych 2018) or public data with identifiers removed and collected in a non-intrusive manner (CLPsych 2019).",
"All procedures were reviewed and approved by both our institutional review board as well as the IRB of the creators of the data set.",
"Our work can be seen as part of the growing body of interdisciplinary research intended to understanding human attributes associated with language, aiming towards applications that can improve human life, such as producing better mental health assessments that could ultimately save lives.",
"However, at this stage, our models are not intended to be used in practice for mental health care nor labeling of individuals publicly with mental health, personality, or demographic scores.",
"Even when the point comes where such models are ready for testing in clinical settings, this should only be done with oversight from professionals in mental health care to establish the failure modes and their rates (e.g. false-positives leading to incorrect treatment or false-negatives leading to missed care; increased inaccuracies due to evolving language; disparities in failure modes by demographics).",
"Malicious use possibilities for which this work is not intended include targeting advertising to individuals using language-based psychology scores, which could present harmful content to those suffering from mental health conditions.",
"We intend that the results of our empirical study are used to inform fellow researchers in computational linguistics and psychology on how to better utilize contextual embeddings towards the goal of improving psychological and mental health assessments.",
"Mental health conditions, such as depression, are widespread and many suffering from such conditions are under-served with only 13 49% receiving minimally adequate treatment (Kessler et al., 2003; Wang et al., 2005).",
"Marginalized populations, such as those with low income or minorities, are especially under-served (Saraceno et al., 2007).",
"Such populations are well represented in social media (Center, 2021) and with this technology developed largely over social media and predominantly using self-reported labels from users (i.e., rather than annotator-perceived labels that sometimes introduce bias (Sap et al., 2019; Flekova et al., 2016)), we do not expect that marginalized populations are more likely to hit failure modes.",
"Still, tests for error disparities (Shah et al., 2020) should be carried out in conjunction with clinical researchers before this technology is deployed.",
"We believe this technology offers the potential to broaden the coverage of mental health care to such populations where resources are currently limited.",
"Future assessments built on the learnings of this work, and in conjunction with clinical mental health researchers, could help the under-served by both better classifying one's condition as well as identifying an ideal treatment.",
"Any applications to human subjects should consider the ethical implications, undergo human subjects review, and the predictions made by the model should not be shared with the individuals without consulting the experts.",
"Limitations.",
"Each dataset brings its own unique selection biases across groups of people, which is one reason we tested across many datasets covering a variety of human demographics.",
"Most notably, the FB dataset is skewed young and is geographically focused on residents within the United States.",
"The CLPsych 2018 dataset is a representative sample of citizens of the United Kingdom, all born on the same week, and the CLPsych-2019 dataset was further limited primarily to those posting in a suicide-related forum (Zirikly et al., 2019).",
"Further, tokenization techniques can also impact language model performance (Bostrom and Durrett, 2020).",
"To avoid oversimplification of complex human attributes, in line with psychological research (Haslam et al., 2012), all outcomes were kept in their most dimensional form e.g. personality scores were kept as real values rather than divided into bins and the CLPsych-2019 risk levels were kept at 4 levels to yield gradation in assessments as justified by Zirikly et al., 2019.",
"We provide the first empirical evaluation of the effectiveness of contextual embeddings as a function of dimensionality and sample size for human-level prediction tasks.",
"Multiple human-level tasks along with many of the most popular language model techniques, were systematically evaluated in conjunction with dimension reduction techniques to derive optimal setups for low sample regimes characteristic of many human-level tasks.",
"We first show the fine-tuning transformer LMs in low-data scenarios yields worse performance than pre-trained models.",
"We then show that reducing dimensions of contextual embeddings can improve performance and while past work used non-negative matrix factorization (Matero et al., 2019), we note that PCA gives the most reliable improvement.",
"Auto-encoder based transformer language models gave better performance, on average, than their auto-regressive contemporaries of comparable sizes.",
"We find optimized versions of BERT, specifically RoBERTa, to yield the best results.",
"Finally, we find that many human-level tasks can be achieved with a fraction, often 16 th or 112 th , the total transformer hidden-state size without sacri-ficing significant accuracy.",
"Generally, using fewer dimensions also reduces variance in model performance, in line with traditional bias-variance tradeoffs and, thus, increases the chance of generalizing to new populations.",
"Further it can aid in explainability especially when considering that these dimension reduction models can be pre-trained and standardized, and thus compared across problem sets and studies."
] | [
"abstain",
"method",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain"
] |
[
"Machine Learning has been the quintessential solution for many AI problems, but learning models are heavily dependent on specific training data.",
"Some learning models can be incorporated with prior knowledge using a Bayesian setup, but these learning models do not have the ability to access any organized world knowledge on demand.",
"In this work, we propose to enhance learning models with world knowledge in the form of Knowledge Graph (KG) fact triples for Natural Language Processing (NLP) tasks.",
"Our aim is to develop a deep learning model that can extract relevant prior support facts from knowledge graphs depending on the task using attention mechanism.",
"We introduce a convolution-based model for learning representations of knowledge graph entity and relation clusters in order to reduce the attention space.",
"We show that the proposed method is highly scalable to the amount of prior information that has to be processed and can be applied to any generic NLP task.",
"Using this method we show significant improvement in performance for text classification with 20Newsgroups (News20) & DBPedia datasets, and natural language inference with Stanford Natural Language Inference (SNLI) dataset.",
"We also demonstrate that a deep learning model can be trained with substantially less amount of labeled training data, when it has access to organized world knowledge in the form of a knowledge base.",
"Today, machine learning is centered around algorithms that can be trained on available task-specific labeled and unlabeled training samples.",
"Although learning paradigms like Transfer Learning (Pan and Yang, 2010) attempt to incorporate equal contribution Main work done during internship at Accenture Technology Labs knowledge from one task into another, these techniques are limited in scalability and are specific to the task at hand.",
"On the other hand, humans have the intrinsic ability to elicit required past knowledge from the world on demand and infuse it with newly learned concepts to solve problems.",
"The question that we address in this paper is the following: Is it possible to develop learning models that can be trained in a way that it is able to infuse a general body of world knowledge for prediction apart from learning based on training data?",
"Knowledge Graphs (Nickel et al., 2016a) are a popular source of such structured world knowledge.",
"Knowledge Graphs represent information in the form of fact triplets, consisting of a subject entity, relation and object entity (exam-ple: < Italy, capital, Rome > ).",
"The entities represent the nodes of the graph and their relations act as edges.",
"A fact triple ( subject entity, relation, object relation ) is represented as ( h, r, t ) .",
"Practical knowledge bases congregate information from secondary databases or extract facts from unstructured text using various statistical learning mechanisms, examples of such systems are NELL (Mitchell et al., 2015) and DeepDive (Niu 313 et al., 2012).",
"There are human created knowledge bases as well, like Freebase (FB15k) (Bollacker et al., 2008) and WordNet (Miller et al., 1990).",
"The knowledge present in these knowledge bases includes common knowledge and partially covers common-sense knowledge and domain knowledge (Song and Roth, 2017).",
"Knowledge Graphs and Knowledge Bases are conceptually equivalent for our purpose and we will use the name interchangeably in this paper.",
"We illustrate the significance of world knowledge using a few examples.",
"For the example of a Natural Language Inference (NLI) problem (MacCartney, 2009), consider the two following statements, A: The couple is walking on the sea shore and B: The man and woman are wide awake .",
"Here, for a learning model to infer B from A, it should have access to the common knowledge that The man and woman and The couple means the same since this information may not be specific for a particular inference.",
"Further, it is not possible for a model to learn all such correlations from just the labeled training data available for the task.",
"Consider another example of classifying the news snippet, Donald Trump offered his condolences towards the hurricane victims and their families in Texas .",
"We cannot classify it as a political news unless we know the facts < Donald Trump, president, United States > and < Texas, state, United States > .",
"We posit that machine learning models, apart from training them on data with the ground-truth can also be trained to fetch relevant information from structured knowledge bases in order to enhance their performance.",
"In this work, we propose a deep learning model that can extract relevant support facts on demand from a knowledge base (Mitchell et al., 2015) and incorporate it in the feature space along with the features learned from the training data (shown in Figure 1).",
"This is a challenging task, as knowledge bases typically have millions of fact triples.",
"Our proposed model involves a deep learning mechanism to jointly model this look-up scheme along with the task specific training of the model.",
"The look-up mechanism and model is generic enough so that it can be augmented to any task specific learning model to boost the learning performance.",
"In this paper, we have established superior performance of the proposed KG-augmented models over vanilla model on text classification and natural language inference.",
"Although there is a plethora of work on knowledge graph representation (Nickel et al., 2016a) (Mitchell et al., 2015) (Niu et al., 2012) from natural language text, no attempt to augment learning models with knowledge graph information have been done.",
"To the best of our knowledge this is the first attempt to incorporate world knowledge from a knowledge base for learning models.",
"Knowledge Graph entities/relations need to be encoded into a numerical representation for processing.",
"Before describing the model, we provide a brief overview of graph encoding techniques.",
"Various KG embedding techniques can be classified at a high level into: Structure-based embeddings and Semantically-enriched embeddings .",
"Structure-based embeddings : TransE (Bordes et al., 2013) is the introductory work on knowledge graph representation, which translated subject entity to object entity using one-dimensional relation vector ( h + r = t ) .",
"Variants of the TransE (Bordes et al., 2013) model uses translation of the entity vectors over relation specific subspaces.",
"TransH (Wang et al., 2014b) introduced the relation-specific hyperplane to translate the entities.",
"Similar work utilizing only the structure of the graph include ManifoldE (Xiao et al., 2015b), TransG (Xiao et al., 2015a), TransD (Ji et al., 2015), TransM (Fan et al., 2014), HolE (Nickel et al., 2016b) and ProjE (Shi and Weninger, 2017).",
"Semantically-enriched embeddings : These embedding techniques learn to represent enti-ties/relations of the KG along with its semantic information.",
"Neural Tensor Network(NTN) (Socher et al., 2013) was the pioneering work in this field which initialized entity vectors with the average word embeddings followed by tensor-based operations.",
"Recent works involving this idea are Joint Alignment (Zhong et al., 2015) and SSP (Xiao et al., 2017).",
"DKRL (Xie et al., 2016) is a KG representation technique which also takes into account the descriptive nature of text keeping the simple structure of TransE model.",
"Pre-trained word2vec (Mikolov et al., 2013) is used to form the entity representation by passing through a Convolutional Neural Network (CNN) (Kim, 2014) architecture constraining the relationships to hold.",
"In our experiments we have used the DKRL (Xie et al., 2016) encoding scheme as it emphasizes on the semantic description of the text.",
"Moreover, DKRL fundamentally uses TransE (Bordes et al., 2013) method for encoding structural information.",
"Therefore, we can retrieve relevant entities & relation and obtain the complete the fact using t = h + r .",
"This reduces the complexity of fact retrieval as the number of entities/relations is much less compared to the number of facts, thus making the retrieval process faster.",
"Conventional supervised learning models with parameters , given training data x and label y , tries to maximize the following function",
"The optimized parameters are given as,",
"In this work, we propose to augment the supervised learning process by incorporation of world knowledge features x w .",
"The world knowledge features are retrieved using the data x , using a separate model where, x w = F ( x, (2) ) .",
"Thus, our modified objective function can be expressed as max P ( y | x, x w , (1) ) where, = { (1) , (2) } .",
"The optimized parameters can be obtained using the equation = argmax log P ( y | x, F ( x, (2) ) , (1) ) The subsequent sections focus on the formulation of the function F which is responsible for fact triple retrieval using the data sample x .",
"Here it is important to note that, we are not assuming any structural form for P based on F .",
"So the method is generic and applicable to augment any supervised learning setting with any form for P , only constraint being P should be such that the error gradient can be computed with respect to F .",
"In the experiments we have used softmax using the LSTM (Greff et al., 2015) encodings of the input as the form for P .",
"As for F , we use soft attention (Luong et al., 2015; Bahdanau et al., 2014) using the LSTM encodings of the input and appropriate representations of the fact(s).",
"Based on the representation used for the facts, we propose two models ( a ) Vanilla Model ( b ) Convolution-based entity/relation cluster representation, for fact retrieval in the subsequent sections.",
"The entities and relationships of KG are encoded using DKRL, explained earlier.",
"Let e i R m stand for the encoding of the entity i and r j R m stands for j th relationship in the KG.",
"The input text in the form of concatenated word vectors, x = ( x 1 , x 2 , . . . , x T ) is first encoded using an LSTM (Greff et al., 2015) module as follows, h t = f ( x t , h t 1 ) and o = 1 TTX t =1 h t , h t R n is the hidden state of the LSTM at time t , f is a non-linear function and T is the sequence length.",
"Then a context vector is formed from o as follows, C = ReLU( o TW ) , where, W R n m represent the weight parameters.",
"The same procedure is duplicated with separate LSTMs to form two seperate context vectors, one for entity retrieval ( CE ) and one for relationship retrieval ( CR ).",
"As the number of fact triples in a KG is in the order of millions in the vanilla model, we resort to generating attention over the entity and relation space separately.",
"The fact is then formed using the retrieved entity and relation.",
"The attention for the entity, e i using entity context vector is given by e i = exp( CTE e i ) | E | P j =0 exp( CT E e j ) where | E | is the number of entities in the KG.",
"Similarly the attention for a relation vector r i is computed as r i = exp( CTR r i ) | R | P j =0 exp( CTR r j ) where | R | is the number of relations in the KG.",
"The final entity and relation vector retrieval is computed by the weighted sum with the attention 315 Figure 2: Vanilla Entity/Relationship Retrieval Block Diagram values of individual retrieved entity/relation vectors.",
"X X Figure 2 shows the schematic diagram for en-tity/relation retrieval.",
"After the final entity and relation vectors are computed, we look forward to completion of the fact triple.",
"The KG embedding technique used for the experiment is DKRL which inherently uses the TransE model assump-tion ( h + r t ).",
"Therefore, using the subject entity and relation we form the object entity as t = e + r .",
"Thus the fact triplet retrieved is F = [ e, r, e + r ] , where F R 3 m .",
"This retrieved fact information is concatenated along with the context vector ( C ) of input x obtained using LSTM module.",
"The final classification label y is computed as follows, F 0 = ReLU( FTV ) y = softmax([ F 0 : C ] TU ) where, V R 3 m u and U R 2 u u are model parameters to be learned.",
"y is used to compute the cross entropy loss.",
"We minimize this loss averaged across the training samples, to learn the various model parameters using stochastic gradient descent (Bottou, 2012).",
"The final prediction y , now includes information from both dataset specific samples and world knowledge to aid in enhanced performance.",
"While jointly training the attention mechanism tunes itself to retrieve relevant facts that are required to do the final classification.",
"The vanilla model attends over the entire en-tity/relation space which is not a good approach as we observe that the gradient for each attention value gets saturated easily.",
"While training the classification and retrieval module together, the model tends to ignore the KG part and gradient propagates only through the classification module.",
"This is expected to an extent as the most pertinent information for the task at hand comes from the training samples, only background aiding information comes from KG.",
"After few epochs of training, the KG retrieved fact always converged to a fixed vector.",
"To overcome this problem, we attempted pretraining KG retrieval part separately.",
"A pre-trained KG model is used to retrieve the facts and then concatenate with the classification module, while we allow error to be propagate through the pre-trained model, at the time of joint training.",
"We infer that KG doesn't return noise and has essential information for the task as the separate KG part alone shows significant performance (59% for News20 & 66% for SNLI).",
"Figure 3 depicts the entire training scheme.",
"This procedure solved the issue of gradient saturation in the KG retrieval part 316 Figure 3: Separately Training Knowledge Graph Retrieval and Jointly Training the Full Model at the time of joint training.",
"In this section, we propose a mechanism to reduce the large number of entities/relationships over which attention has to be generated in the knowledge graph.",
"We propose to reduce the attention space by learning the representation of similar entity/relation vectors and attending over them.",
"Each of the clusters were then encoded using convolutional filters.",
"The output of the k -means clustering is a sequence of entity/relation vectors { e T 1 , e T 2 , , e Tq } , where e i R m .",
"For each cluster these vectors were stacked to form E as the 2D input to the CNN encoder, where E R m q .",
"During experimentation for finding a suitable fil-ter shape, it was observed that using 2-D filters the model failed to converge at all.",
"Therefore, we inferred that the latent representation of two different indices in the vector e i , should not be tampered using convolution.",
"We then resorted to use 1-D convolution filters which slide along only the columns of E , as shown Figure 4.",
"The stride length along y -axis is the window length k .",
"The output of the convolution layer is expressed as, E 0 ( i, j ) = WT [ e i,j , e i +1 ,j , . . . , e i + k 1 ,j ] T where, E 0 ( i, j ) is the ( i, j ) th element of the output matrix E 0 and W R k is the convolution weight filter.",
"A pooling layer followed the convolution layer in order to reduce the parameter space, we used 1-D window only along the y -axis similar to the convolutional kernel mentioned above.",
"We used a two layered convolution network with the stride length k & max-pool windows n is adjusted to obtain output E i R m , where i is the cluster index.",
"Similar procedure of clustering followed by the encoding of the cluster entities is done for relations as well.",
"Thus both the entity and relation space were reduced to contain fewer elements, one each for each cluster.",
"After the compact entity space E and relation space R is formed, we followed the same steps as earlier for forming the attention, but now the training was more effective as the gradient was propagating effectively and was not choked by the large space.",
"As the convolution architecture is also simultaneously trained, attention mechanism was not burdened as before, to learn over the large space of entities and relations.",
"Another point that needs to be mentioned here is regarding ranking/ordering items in the clusters, we have done experiments to verify the ordering does not affect the final result.",
"We have verified this by randomly shuffling the en-tities/relationships in every clusters and the ac-317 curacy output remained within an error bound of 0 .",
"5% .",
"In various permutations, the representations learned by the convolution operator for clusters varies, but it does not affect the overall results.",
"Regarding the interpretation of what convolution operator learns, the operator is applied along each dimension of the entity/relationship vector, to learn a representation of the clusters.",
"This representation includes information from relevant entities in the cluster, as the relevant entities varies across tasks, the representation learned using convolution also adapts accordingly.",
"It is analogous to learning relevant features from an image, in our case the convolution layer learns the features focusing on relevant entities/relations in a cluster pertaining to the task.",
"Our experiments were designed to analyze whether a deep learning model is being improved when it has access to KG facts from a relevant source.",
"The selection of knowledge graph has to be pertinent to the task at hand, as currently there is no single knowledge base that contains multiple kinds of information and can cater to all tasks.",
"We illustrate with results that the performance of a deep learning model improves when it has access to relevant facts.",
"We also illustrate that as the model learns faster with access to knowledge bases, we can train deep learning models with substantially less training data, without compromising on the accuracy.",
"In the subsequent section we briefly describe the datasets and associated Knowledge Bases used.",
"In our experiments, we have mainly used the popular text classification dataset 20Newsgroups (Lichman, 2013) and the Natural Language Inference dataset, Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015).",
"We have also done experiments on DBPedia ontology classification dataset 1 , with a very strong baseline.",
"These datasets are chosen as they share domain knowledge with two most popular knowledge bases, Freebase (FB15k) (Bollacker et al., 2008) and WordNet (WN18) (Bordes et al., 2013).",
"The training and test size of the datasets are mentioned in Table",
"1. 1 http://wiki.dbpedia.org/ services-resources/dbpedia-data-set-2014 Dataset Train Size Test Size # Classes News20 16000 2000 20 SNLI 549367 9824 3 DBPedia 553,000 70,000 14 Table 1: Dataset Specifications Freebase (FB15k) (Bollacker et al., 2008) contains facts about people, places and things (con-tains 14904 entities, 1345 relations & 4.9M fact triples), which is useful for text classification in 20Newsgroups (Lichman, 2013) dataset.",
"On the other hand, WordNet (WN18) (Bordes et al., 2013) (has 40943 entities, 18 relations & 1.5M fact triples) contains facts about common day-to-day things (example: furniture includes bed), which can help in inference tasks like SNLI.",
"Both the knowledge bases are directed graphs, due to fewer number of relations WN18 the entities are more likely to be connected using the same type of relations.",
"For experiments relating to both the datasets 20Newsgroups & SNLI we used the standard LSTM as the classification module.",
"As iterated earlier, our KG based fact retrieval is indepen-dent of the base model used.",
"We show improvement in performance using the proposed models by KG fact retrieval.",
"We use classification accuracy of the test set as our evaluation metric.",
"All experiments were carried on a Dell Precision Tower 7910 server with Quadro M5000 GPU with 8 GB of memory.",
"The models were trained using the Adam's Optimizer (Kingma and Ba, 2014) in a stochastic gradient descent (Bottou, 2012) fashion.",
"The models were implemented using Ten-sorFlow (Abadi et al., 2015).",
"The relevant hyper-parameters are listed in Table",
"2. The word embeddings for the experiments were obtained using the pre-trained GloVe (Pennington et al., 2014) 2 vectors.",
"For words missing in the pre-trained vectors, the local GloVe vectors which was trained on the corresponding dataset was used.",
"Table 3 shows the results of test accuracy of the various methods proposed on the datasets News20 & SNLI.",
"We observe that incorporation of KG facts using the basic vanilla model improves the performance slightly, as the retrieval model was 2 http://nlp.stanford.edu/data/glove.840B.300d.zip 318 Hyper-parameter News20 SNLI Batch size 256 1024 Learning rate 0.05 0.05 Word Vector Dimension 300 300 Sequence Length 300 85 LSTM hidden-state Dimension 200 200 KG Embedding Dimension 50 50 # Clusters 20 20 # Epochs 20 20 Table 2: Hyper-parameters which were used in experiments for News20 & SNLI datasets not getting trained effectively.",
"The convolution-based model shows significant improvement over the normal LSTM classification.",
"While tuning the parameters of the convolution for clustered enti-ties/relations it was observed that smaller stride length and longer max-pool window improved performance.",
"For News20 dataset we show an improvement of almost 3% and for SNLI an improvement of almost 5%.",
"The work is motivated more from the perspective of whether incorporation of world knowledge will improve any deep learning model rather than beating the state-of-the-art performance.",
"Although LSTM is used to encode the input for the model as well as the retrieval vector, as mentioned earlier, these two modules need not be same.",
"For encoding the input any complex state-of-the-art model can be used.",
"LSTM has also been used to generate the retrieval vector.",
"For DBPedia ontology classification dataset, we have used a strong baseline of 98.6%, and after augmenting it with KG (Freebase) using convolution based model we saw an improvement of 0.2%.",
"As the baseline is stronger, the improvement quantum has decreased.",
"This is quite intuitive as complex models are self-sufficient in learning from the data by itself and therefore the room available for further improvement is relatively less.",
"The improvement as observed in the experiments is significant in weaker learning models, however it is also capable of improving stronger baselines as is evident from the results of DBPedia dataset.",
"We hypothesized that as Knowledge Graph is feeding more information to the model, we can achieve better performance with less training data.",
"To verify this we have performed experiments on varying dataset fractions for 20Newsgroups dataset as shown in Figure",
"5. From the plot, we observe that KG augmented LSTM with 70% data outperforms the baseline model with full dataset support, thereby reducing the dependency on labeled data by 30%.",
"We also designed an experiment to compare the accuracy of the baseline model trained on full training data and compared it with the accuracy of the KG augmented model trained with just 70% of the training data for 20Newsgroups and SNLI datasets.",
"The accuracy and training loss plots across training epochs is given in Figure",
"6. Even with just 70% of the data, KG augmented model is able to achieve better accuracy compared to the vanilla LSTM model trained on the full data.",
"This clearly indicates that relevant information pertaining to the task is retrieved from the knowledge graph and the training loss reduction is not due to lesser data only.",
"Also note that training loss is substantially less for KG LSTM compared to normal LSTM when the dataset size is reduced.",
"This result is very promising, to reduce the large labeled training data requirement of large deep learning 319 0 5 10 15 20 20 40 60 Dataset Fraction A cc u r a c y ( % ) LSTM (70% dataset) LSTM (100% dataset) KG LSTM (70% data)",
"The basic idea of infusing general world knowledge for learning tasks, especially for natural language processing, has not been attempted before.",
"For multi-label image classification, the use of KGs has been pursued recently by (Marino et al., 2016).",
"In this work, they first obtain labels of the input data (using a different model), use these labels to populate features from the KG and in turn use these features back for the final classification.",
"For NLP tasks the information needed may not necessarily depend on the final class, and we are directly using all the information available in the input for populating the relevant information from the knowledge graphs.",
"Our attempt is very different from Transfer Learning (Pan and Yang, 2010).",
"In Transfer Learning the focus is on training the model for one task and tuning the trained model to use it for another task.",
"This is heavily dependent on the alignment between source task and destination task and transferred information is in the model.",
"In our case, general world knowledge is being infused into the learning model for any given task.",
"By the same logic, our work is different from domain adaptation (Glorot et al., 2011) as well.",
"There has been attempts to use world knowledge (Song and Roth, 2017) for creating more labeled training data and providing distant supervision etc.",
"Incorporating Inductive Biases (Ridge-way, 2016) based on the known information about a domain onto the structure of the learned models, is an active area of research.",
"However our motivation and approach is fundamentally different from these works.",
"In this work we have illustrated the need for incorporating world knowledge in training task specific models.",
"We presented a novel convolution-based architecture to reduce the attention space over entities and relations that outperformed other models.",
"With significant improvements over the vanilla baselines for two well known datasets, we have illustrated the efficacy of our proposed methods in enhancing the performance of deep learning models.",
"We showcased that the proposed method can be used to reduce labeled training data requirements of deep learning models.",
"Although in this work we focused only on NLP tasks and using LSTM as the baseline model, the proposed approach is applicable for other domain tasks as well, with more complicated deep learning models as baseline.",
"To the best of our knowledge this is the first attempt at infusing general world knowledge for task specific training of deep learning models.",
"Being the first work of its kind, there is a lot of scope for improvement.",
"A more sophisticated model which is able to retrieve facts more ef-ficiently from millions of entries can be formulated.",
"Currently we have focused only on a flat attention structure, a hierarchical attention mechanism would be more suitable.",
"The model uses soft attention to enable training by simple stochastic gradient descent.",
"Hard attention over facts using reinforcement learning can be pursued further.",
"This will further help in selection of multi-facts, that are not of similar type but relevant to the task.",
"The convolution based model, helped to reduce the space over entities and relationships over which attention had to be generated.",
"However more sophisticated techniques using similarity based search (Wang et al., 2014a; Mu and Liu, 2017) can be pursued towards this purpose.",
"The results from the initial experiments illustrates the effectiveness of our proposed approach, advocating further investigations in these directions."
] | [
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective"
] |
[
"Recently, finetuning a pretrained language model to capture the similarity between sentence embeddings has shown the state-of-the-art performance on the semantic textual similarity (STS) task.",
"However, the absence of an interpretation method for the sentence similarity makes it difficult to explain the model output.",
"In this work, we explicitly describe the sentence distance as the weighted sum of contextualized token distances on the basis of a transportation problem, and then present the optimal transport-based distance measure, named RCMD; it identifies and leverages semantically-aligned token pairs.",
"In the end, we propose CLRCMD, a contrastive learning framework that optimizes RCMD of sentence pairs, which enhances the quality of sentence similarity and their interpretation.",
"Extensive experiments demonstrate that our learning framework outperforms other baselines on both STS and interpretable-STS benchmarks, indicating that it computes effective sentence similarity and also provides interpretation consistent with human judgement.",
"The code and checkpoint are publicly available at https: //github.com/sh0416/clrcmd .",
"Predicting the semantic similarity between two sentences has been extensively studied in the literature (Gomaa et al., 2013; Agirre et al., 2015; Ma-jumder et al., 2016; Cer et al., 2017).",
"Several recent studies successfully utilized a pretrained language model such as BERT (Devlin et al., 2019) by finetuning it to capture sentence similarity (Reimers and Gurevych, 2019).",
"To be specific, they define a similarity score between sentence embeddings, which are obtained by aggregating contextualized token embeddings (e.g., avg pooling) or using a special token (e.g., [CLS] ), then optimize the score This work was done during internship at Scatterlab.",
"for natural language inference (NLI) or semantic textual similarity (STS) tasks (Gao et al., 2021).",
"Along with the quality of sentence similarity, interpreting the predicted sentence similarity is also important for end-users to better understand the results (Agirre et al., 2016; Gilpin et al., 2018; Rogers et al., 2020).",
"In general, finding out the cross-sentence alignment and the importance of each aligned part is useful for analyzing sentence similarity (Sultan et al., 2015).",
"For example, there were several attempts to use explicit features (e.g., TF-IDF) for easily analyzing the interaction among the shared terms (Salton and Buckley, 1988) or to adopt sophisticated metrics (e.g., word mover's distance) for explicitly describing it by the importance and similarity of word pairs across two sentences (Kusner et al., 2015).",
"However, for recent approaches that leverage sentence embeddings from a pretrained model, it has not been studied how the cross-sentence interaction of each part contributes to the final sentence similarity.",
"In this work, we propose an analytical method based on optimal transport to analyze existing ap-5969 proaches that leverage a pretrained model.",
"We consider a sentence similarity measure a solution to a transportation problem, which aims to transport a collection of contextualized tokens in a sentence to the ones in another sentence.",
"As byproducts of the problem, we obtain a cost matrix and a transportation matrix, which encode the similarities of all token pairs across sentences and their contributions to the sentence similarity, respectively.",
"Using this analytical method, we point out that the existing approaches suffer from the rank-1 constraint in the transportation matrix; this eventually keeps the model from effectively capturing the similarities of semantically-aligned token pairs into sentence similarity.",
"For example, considering transportation in a contextualized embedding space (Figure 1), the distance between averaged token embeddings (or-ange arrows) cannot clearly represent the distance of semantically-aligned token pairs (blue arrows).",
"To resolve the above limitation and enhance the interpretability of a model, we present a novel distance measure and a contrastive learning framework that optimizes the distance between sentences.",
"First, we apply optimal transport in a contextualized embedding space and leverage the optimal solution for a relaxed transportation problem as our distance measure.",
"This sentence distance is composed of the distances of semantically-aligned token pairs; this makes the result easily interpretable.",
"Furthermore, we present a contrastive learning framework that adopts the proposed distance to finetune the model with token-level supervision.",
"It optimizes the model to learn the relevance of semantically-aligned token pairs from that of sentence pairs, which further enhances interpretability.",
"We extensively evaluate our approach and validate the effectiveness of its sentence similarity and interpretation.",
"The comparison on 7 STS benchmarks supports the superiority of sentence similarity predicted by the model trained by our framework.",
"In particular, the evaluation on 2 interpretable-STS datasets demonstrates that the proposed distance measure finds out semantically relevant token pairs that are more consistent with human judgement compared to other baseline methods.",
"Our qualitative analysis shows that both the token alignment and their similarity scores from our model serve as useful resources for end-users to better understand the sentence similarity.",
"Most recent studies tried to leverage a pretrained language model with various model architectures and training objectives for STS tasks, achieving the state-of-the-art performance.",
"In terms of model architecture, Devlin et al. (2019) focus on exhaustive cross-correlation between sentences by taking a concatenated text of two sentences as an input, while Reimers and Gurevych (2019) improve scalability based on a Siamese network and Humeau et al. (2020) adopt a hybrid approach.",
"Along with the progress of model architectures, many advanced objectives for STS tasks were proposed as well.",
"Specifically, Reimers and Gurevych (2019) mainly use the classification objective for an NLI dataset, and Wu et al. (2020) adopt contrastive learning to utilize self-supervision from a large corpus.",
"Yan et al. (2021); Gao et al. (2021) incorporate a parallel corpus such as NLI datasets into their contrastive learning framework.",
"Despite their effectiveness, the interpretability of the above models for STS tasks was not fully explored (Belinkov and Glass, 2019).",
"One related task is interpretable STS, which aims to predict chunk alignment between two sentences (Agirre et al., 2016).",
"For this task, a variety of supervised approaches were proposed based on neural networks (Konopk et al., 2016; Lopez-Gazpio et al., 2016), linear programming (Tekumalla and Jat, 2016), and pretrained models (Maji et al., 2020).",
"However, these methods cannot predict the similarity between sentences because they focus on finding chunk alignment only.",
"To the best of our knowledge, no previous approaches based on a pretrained model have taken into account both sentence similarity and interpretation.",
"Optimal transport (Monge, 1781) has been successfully applied to many applications in natural language processing (Li et al., 2020; Xu et al., 2021), by the help of its ability to find a plausible correspondence between two objects (Lee et al., 2021a,b).",
"For example, Kusner et al. (2015) adopt optimal transport to measure the distance between two documents with pretrained word vectors.",
"Zhao et al. (2019) adopt optimal transport for evaluating text generation and Zhang et al. (2020) take a greedy approach leveraging pretrained language model.",
"In addition, Swanson et al. (2020) discover 5970 the rationale in text-matching via optimal transport, thereby improving model interpretability.",
"One well-known limitation of optimal transport is that finding the optimal solution is computationally intensive, and thus approximation schemes for this problem have been extensively researched (Grauman and Darrell, 2004; Shirdhonkar and Jacobs, 2008).",
"To get the solution efficiently, Cuturi (2013) provides a regularizer inspired by a probabilistic theory and then uses Sinkhorn's algorithm.",
"Kusner et al. (2015) relax the problem to get the quadratic-time solution by removing one of the constraints, and Wu et al. (2018) introduce a kernel method to approximate the optimal transport.",
"We first analyze the similarity measure used by existing models from the perspective of a transportation problem.",
"Considering the above analysis, we present a novel distance measure and a contrastive sentence learning framework to enhance the interpretability of a sentence similarity model.",
"We briefly explain the transportation problem and how to interpret the total transportation cost as a distance measure.",
"A transportation problem consists of three components: states before and after transportation, and a cost matrix.",
"In general, the two states are represented in high-dimensional simplex, i.e., d 1 d 1 and d 2 d 2 , where each dimension implies a specific location with a nonnegative quantity.",
"The cost matrix M R d 1 d 2 encodes the unit transportation cost from location i to j into M i,j .",
"In this situation, we search the transportation plan to transport from d 1 to d 2 with the minimum cost.",
"Using the above notations, the optimization problem is written as follows: minimize T R d 1 d 2 0 (cid:88) i,j T i,j M i,j (1) subject to T (cid:62) (cid:126) 1 = d 2 , T (cid:126) 1 = d 1 , where each entry of the transportation matrix T i,j indicates how much quantity is transferred from location i to j .",
"The optimal solution to this problem is called optimal transport, which is also known as earth mover's distance (EMD): d EMD M := (cid:88) i,j T i,j M i,j .",
"In Equation (2), the distance is computed by the sum of element-wise multiplications of the optimal transportation matrix T and the cost matrix M .",
"In this sense, EMD considers the optimality of distance when combining unit costs in M .",
"That is, the priority of each unit cost when being fused to the distance is encoded in the transportation matrix, which serves as a useful resource for analyzing the distance.",
"We express cosine similarity with average pooling as a transportation problem and analyze its properties in terms of the transportation matrix.",
"Note that this similarity measure is widely adopted in most of the previous studies (Reimers and Gurevych, 2019; Wu et al., 2020; Gao et al., 2021).",
"Formally, for a sentence of length L , the sentence embedding is generated by applying average pooling to L contextualized token embeddings, i.e., s = 1 L (cid:80) L i =1 x i , where x i is the i -th token embedding obtained from a pretrained model.",
"Using the sentence embeddings, the sentence similarity is defined by s AVG = cos( s 1 , s 2 ) = s 1 (cid:62) s 2 (cid:107) s 1 (cid:107)(cid:107) s 2 (cid:107) .",
"This average pooling-based sentence similarity can be converted into the distance, d AVG = 1 s AVG , described by the token embeddings as follows: d AVG = 1 L 1 (cid:88) i =1 L 2 (cid:88) j =1 1 L 1 L 2 (cid:107) x 1 i (cid:107)(cid:107) x 2 j (cid:107) (cid:107) s 1 (cid:107) (cid:107) s 2 (cid:107) x 1 i (cid:62) x 2 j (cid:107) x 1 i (cid:107)(cid:107) x 2 j (cid:107) .",
"From the perspective of Equation (1), this distance is interpreted as a naive solution of a special transportation problem, where the cost matrix and the transportation matrix are MAVG i,j = (cid:107) s 1 (cid:107)(cid:107) s 2 (cid:107) (cid:107) x 1 i (cid:107)(cid:107) x 2 j (cid:107) cos( x 1 i , x 2 j ) , TAVG i,j = 1 L 1 L 2 (cid:107) x 1 i (cid:107)(cid:107) x 2 j (cid:107) (cid:107) s 1 (cid:107) (cid:107) s 2 (cid:107) .",
"(3) Each entry of the cost matrix includes negative cosine similarities between token embeddings, and the contribution of each token pair to the sentence distance (i.e., the transportation matrix) is determined by the norms of the token embeddings.",
"In theory, the rank of the transportation matrix is constrained to be one, which prevents effective integration of the token distances into the sentence distance.",
"In practice, it is impossible to involve only 5971 semantically-aligned token pairs across sentences because all possible token pairs are considered by the products of their norms.",
"From this analysis, we point out that the average pooling-based similarity is not effective enough to capture the token correspondence between sentences.",
"To resolve the ineffectiveness of the existing measure, we introduce a novel distance measure based on optimal transport.",
"We first define a transportation problem that considers semantic relevance in a contextualized embedding space.",
"Given the token embeddings of two sentences from a pretrained language model, we construct a cost matrix MCMD RL 1 L 2 that encodes token similarities using cosine distance, and define the state vectors for the two sentences as one vectors normalized by their sentence lengths d 1 := 1 L 1 (cid:126) 1 and d 2 := 1 L 2 (cid:126) 1 .",
"As discussed in Section 3.1, we consider the optimal solution to this problem as a distance measure named contextualized token mover's distance (CMD): MCMD i,j := 1 cos (cid:0) x 1 i , x 2 j (cid:1) , d CMD M := (cid:88) i,j T i,j MCMD i,j .",
"However, finding T incurs huge computational complexity of O ( L 3 log L ) where L = max( L 1 , L 2 ) (Villani, 2008).",
"For this reason, we relax the optimization problem by removing the first constraint, T (cid:62) (cid:126) 1 = d (cid:48) , similar to Kusner et al. (2015).",
"The optimal solution for this relaxed transportation problem is found in O ( L 2 ) , keeping the rank of the transportation matrix larger than one.",
"In the end, the optimal transportation matrix and the corresponding distance named relaxed CMD (RCMD) are derived as follows: TRCMD 1 i,j = (cid:40) 1 L 1 if j = argmin j (cid:48) MCMD i,j (cid:48) 0 otherwise, d RCMD 1 M := 1 L 1 (cid:88) i min j MCMD i,j .",
"(4) Similarly, the elimination of the second constraint, T (cid:126) 1 = d , results in TRCMD 2 and d RCMD 2 M , where the solutions for the two relaxed problems use min operation on the cost matrix in a row-wise and a column-wise manner, respectively.",
"Note that TRCMD 1 represents the token-level binary alignment from the first sentence to the second sentence and accordingly the final distance is computed by averaging all the distances of the aligned token pairs.",
"Also, it is obvious that TRCMD 1 has a much higher rank than TAVG , which implies that it can express more complex token-level semantic relationship between two sentences.",
"We remark that our solution provides better interpretability of semantic textual similarity compared to the case of average pooling.",
"For the sentence distance in Equation (3), TAVG assigns non-zero values to all token pairs that include irrelevant pairs; this makes it difficult to interpret the result.",
"On the contrary, TRCMD 1 in Equation (4) is designed to explicitly involve the most relevant token pairs across sentences for the sentence distance, which allows us to interpret the result easily.",
"We present a contrastive learning framework for RCMD (CLRCMD) that incorporates RCMD into the state-of-the-art contrastive learning framework.",
"To this end, we convert RCMD to the corresponding similarity by s RCMD 1 M = 1 d RCMD 1 M : s RCMD 1 M ( s 1 , s 2 ) = 1 L 1 L 1 (cid:88) i =1 max j cos( x 1 i , x 2 j ) .",
"s RCMD 2 M is computed in the same manner as well, and we average them to consider bidirectional semantic alignment between two sentences; this provides diverse gradient signals during optimization.",
"The final similarity is described by s RCMD M ( s 1 , s 2 ) := 1 2 (cid:16) s RCMD 1 M ( s 1 , s 2 ) + s RCMD 2 M ( s 1 , s 2 ) (cid:17) .",
"Adopting this similarity measure, the contrastive learning objective for the i -th sentence pair in a training batch is defined as follows: log exp( s RCMD M ( s i , s i + ) / ) (cid:80) Bj =1 (exp( s RCMD M ( s i , s j + ) / ) + exp( s RCMD M ( s i , s j ) / )) , where is the temperature parameter and B is the batch size.",
"Following (Gao et al., 2021), CLRCMD uses the other sentences in the batch to generate negative pairs.",
"We argue that CLRCMD enhances both the sentence similarity and its interpretability in the following aspects.",
"First, CLRCMD alleviates the catastrophic forgetting of pretrained semantics during 5972 Model STS12 STS13 STS14 STS15 STS16 STS-B SICK-R Avg BERT base -avg 29.12 59.96 47.22 60.61 63.72 47.20 58.25 52.30 SBERT base 70.97 76.53 73.19 79.09 74.30 77.03 72.91 74.86 SBERT base -flow 69.78 77.27 74.35 82.01 77.46 79.12 76.21 76.60 SBERT base -whitening 69.65 77.57 74.66 82.27 78.39 79.52 76.91 77.00 SimCSE cls -BERT base 75.30 84.67 80.19 85.40 80.82 84.25 80.39 81.57 SimCSE avg -BERT base 75.88 83.28 80.26 86.06 81.33 84.91 79.94 81.67 CLRCMD-BERT base 75.23 85.06 80.99 86.26 81.50 85.21 80.49 82.11 RoBERTa base -avg 32.50 55.78 45.00 60.61 61.68 55.31 61.66 53.22 SRoBERTa base 71.54 72.49 70.80 78.74 73.69 77.77 74.46 74.21 SRoBERTa base -whitening 70.46 77.07 74.46 81.64 76.43 79.49 76.65 76.60 SimCSE cls -RoBERTa base 76.53 85.21 80.95 86.03 82.57 85.83 80.50 82.52 SimCSE avg -RoBERTa base 75.75 85.10 80.85 85.95 83.33 85.55 79.41 82.28 CLRCMD-RoBERTa base 75.68 85.76 80.92 86.58 83.48 85.89 81.01 82.76 Table 1: The results on 7 STS benchmarks.",
"the finetuning process.",
"Its token-level supervision is produced by leveraging the textual semantics encoded in a pretrained checkpoint, because token pairs are semantically aligned according to their similarities in the contextualized embedding space.",
"Namely, CLRCMD updates the parameters to improve the quality of sentence similarity while less breaking token-level semantics in the pretrained checkpoint.",
"Furthermore, CLRCMD directly distills the relevance of a sentence pair into the relevance of semantically-aligned token pairs.",
"In this sense, our contextualized embedding space effectively captures the token-level semantic relevance from training sentence pairs, which provides better interpretation for its sentence similarity.",
"To analyze our approach in various viewpoints, we design and conduct experiments that focus on the following three research questions:",
"RQ1 Does CLRCMD effectively measure sentence similarities using a pretrained language model?",
"RQ2 Does CLRCMD provide the interpretation of sentence similarity which is well aligned with human judgements?",
"RQ3 Does CLRCMD efficiently compute its sentence similarity for training and inference?",
"We finetune a pretrained model using CLRCMD in the following settings.",
"Following previous work (Gao et al., 2021), we use NLI datasets with hard negatives: SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018).",
"We use a pretrained backbone attached with a single head, which is the same with (Gao et al., 2021).",
"As the initial checkpoint of the pretrained models, we employ bert-base-uncased and roberta-base provided by huggingface (Devlin et al., 2019; Liu et al., 2019).",
"Adam optimizer is used with the initial learning rate 5 e 5 and linear decay schedule.",
"Fp16 training is enabled where the maximum batch size is 128 on a single V100 GPU, and the softmax temperature is set to = 0 .",
"05 (Gao et al., 2021).",
"The training is proceeded with 4 different random seeds and the best model is chosen using the best Spearman correlation on STSb validation set which is evaluated every 250 steps during training.",
"We evaluate the similarity model finetuned CLRCMD for STS task to quantitatively measure the quality of sentence similarity ( RQ1 ).",
"Metric We measure Spearman correlation for each of seven STS benchmarks and calculate their average to compare the capability of representing sentences in general (Conneau and Kiela, 2018).",
"Baselines We select the baselines that leverage a pretrained model, and they turn out to outperform other traditional baselines.",
"We only list the baseline names for BERT base below; the names for RoBERTa base are obtained by replacing BERT base 5973 with RoBERTa base .",
"BERT base -avg generates sentence embeddings by averaging the token embeddings from BERT base without finetuning.",
"It indicates zero-shot performance of a checkpoint.",
"SBERT base (Reimers and Gurevych, 2019) is a pioneering work to finetune a pretrained model for sentence embeddings.",
"It trains a Siamese network using NLI datasets.",
"SimCSE cls -BERT base (Gao et al., 2021) adopts a contrastive learning framework (Chen et al., 2020) using SNLI and MNLI datasets.",
"The contextualized embedding of [CLS] is used as a sentence embedding.",
"SimCSE avg -BERT base (Gao et al., 2021) is the same with SimCSE cls -BERT base except that it performs average pooling on token embeddings to obtain a sentence embedding.",
"Result Table 1 reports Spearman correlation for each dataset and their average.",
"For most of the datasets, CLRCMD shows higher correlation compared to the state-of-the-art baselines.",
"In particular, for STS14, STS15, SICK-R datasets, CLRCMD-BERT base achieves comparable performance to SimCSE cls -RoBERTa base whose backbone language model is pretrained with 10 times larger data compared to BERT base .",
"This implies that finetuning with token-level supervision from CLRCMD achieves the performance as good as using an expensively pretrained checkpoint.",
"Next, we measure the performance of our approach on interpretable STS (iSTS) tasks in order to validate that CLRCMD embeds a sufficient level of interpretability even without any supervision (i.e., labeled training data) about semantically-aligned chunk pairs ( RQ2 ).",
"Experimental setup We utilize the images and headlines data sources included in SemEval2016 Task 2: iSTS (Agirre et al., 2016).",
"We measure the agreement between human judgement (gold semantic alignment across sentences) and the contributions of all token pairs to sentence similarity (element-wise multiplication of (1 M ) and T ).",
"One challenge to use our similarity model for this task is to convert token pair contributions into chunk-level alignment.",
"First, we summarize token pair contributions into chunk Model images headlines BERT base -avg 82.45 85.98 BERT base -RCMD 83.00 88.25 SimCSE avg -BERT base 82.98 85.80 CLRCMD-BERT base 87.25 90.55 RoBERTa base -avg 61.68 52.01 RoBERTa base -RCMD 82.44 88.92 SimCSE avg -RoBERTa base 73.66 77.30 CLRCMD-RoBERTa base 84.93 88.45 Table 2: The results on SemEval2016 task 2: iSTS.",
"pair contributions by applying simple average pooling based on the chunk mapping represented by c ( i ) = { k | is_overlap ( c i , t k ) } , where c i is the i -th chunk and t k is the k -th token in a sentence.",
"1 Then, to obtain the alignment based on the pairwise chunk contributions, we design a criterion for selecting confident chunk pairs ( i, j ) as follows: C i,j = 1 | c 1 ( i ) || c 2 ( j ) | c 1 ( i ) (cid:88) k c 2 ( j ) (cid:88) l T k,l M k,l , a ( i, j ) = I [ j = argmax j (cid:48) C i,j (cid:48) ] I [ i = argmax i (cid:48) C i (cid:48) ,j ] .",
"Using the aligned chunk pairs obtained by each method, we compute the alignment F1 score as the evaluation metric, which indicates the agreement between human judgement and chunk contribution.",
"2 We consider eight different configurations to investigate the effectiveness of the following components: 1) sentence similarity, 2) contrastive learning, and 3) pretrained checkpoints.",
"Result Table 2 shows the clear tendency of iSTS performance with respect to each of the above components.",
"First of all, the token pair contribution from RCMD is more consistent with human judgement than that from average pooling.",
"RCMD improves alignment F1 scores even without finetuning (BERT base -RCMD and RoBERTa base -RCMD), indicating that RCMD effectively discovers the token-level relevance encoded inside a pretrained checkpoint.",
"In addition, the alignment F1 score increases when we finetune a model using CLRCMD.",
"Notably, CLRCMD-BERT base successfully improves the alignment F1 score whereas 1 We use gold standard chunking information to focus on alignment only, which is the second subtrack in iSTS.",
"SimCSE avg -BERT base does not.",
"This result shows that finetuning a model using the similarity measure based on semantically-aligned token pairs (i.e., fine-grained supervision induced by RCMD) further enhances the interpretability of a model.",
"We qualitatively analyze the sentence similarity from the perspective of the transportation problem in order to demonstrate that a model trained by CLRCMD provides clear and accurate explanation ( RQ2 ).",
"To this end, we visualize the contribution of token pairs obtained from CLRCMD-BERT base and that from SimCSE avg -BERT base , and then clarify how their sentence similarity is computed differently from each other.",
"Three sentence pairs are randomly selected from STS13 dataset.",
"Figure 2 illustrates the token pair contribution heatmap for positive, neutral, and negative sentence pairs.",
"CLRCMD vs. SimCSE avg Overall, CLRCMD aligns two sentences better than the baseline.",
"To be specific, CLRCMD effectively highlights the contributions of semantically-relevant token pairs and excludes the other contributions (Figure 2 up-per).",
"On the contrary, SimCSE avg fails to represent meaningful token-level relevance for sentence similarity (Figure 2 lower).",
"The rank-1 constraint of SimCSE avg prevents the model from getting any plausible alignment between two sentences, while it simply tunes the contributions of all possible token pairs at once.",
"We emphasize that the super-fluous correlation in the heatmap not only inhibits the capability to capture sentence similarity, but also makes it difficult for humans to understand how sentence similarity is computed.",
"Case study on positive, neutral, and negative sentence pairs For the positive pair (Figure 2 left), CLRCMD clearly matches all semantically-aligned token pairs including the linking words ({,, and}{and}), synonyms ({realize, comprehend}{comprehend}), and omitted contexts ({the}{the nature of, of}).",
"For the neutral pair (Figure 2 middle), the two sentences have the same lexical structure except for the date.",
"In this case, CLRCMD assigns low contributions to the token pairs about day and month ({25, august}{19, july}), while keeping the contributions high for all the other pairs of identical tokens.",
"Therefore, end-users can clearly figure out which part is semantically different based on their contributions as well as alignment.",
"In case of the negative pair (Figure 2 right), both the models are not able to find any plausible alignment; CLRCMD lowers contributions for most of the token pairs ex-5975 Batch size 16 32 64 128 RCMD dense 7.5 22.2 OOM OOM RCMD sparse 4.6 6.1 10.6 25.8 Table 3: GPU memory usage (GB) of CLRCMD with various batch sizes.",
"cept the token pair with identical contents (after riots).",
"That is, end-users also can interpret the negative pair based on the heatmap where semantic correspondence between two sentences does not clearly exist but few overlapped tokens highly contribute to the sentence similarity.",
"We measure GPU memory usage and inference time of CLRCMD-BERT base to demonstrate that CLRCMD can be executed on a single GPU and an inference of our model takes almost the same cost to that of the baseline ( RQ3 ).",
"Implementation of RCMD We implement two variants of RCMD, RCMD dense and RCMD sparse , to investigate the effect of exploiting the sparseness in RCMD.",
"Both of them calculate sentence distance by the sum of element-wise multiplications of the cost matrix and the transportation matrix.",
"For an input sentence pair, RCMD dense maintains the full pairwise token distances ( MCMD ), whereas RCMD sparse only keeps the token distances at which the transportation matrix has nonzero values ( { MCMD i,j | TCMD i,j (cid:54) = 0 } ).",
"Note that the number of nonzero entries in the transportation matrix of RCMD is at most 2 L , which is an order of magnitude smaller than the number of all entries, L 2 .",
"Result Table 3 reports the GPU memory usage during the finetuning process.",
"For batch-wise contrastive learning, GPU memory requirement becomes O ( B 2 ) in terms of the batch size B , because all pairwise sentence similarities within a batch need to be computed.",
"In this situation, RCMD dense using a dense matrix drastically increases GPU memory usage by O ( B 2 L 2 ) , and as a result, the batch size cannot grow larger than 32.",
"In contrast, RCMD sparse successfully enlarges the batch size up to 128 by exploiting sparseness in the transportation matrix of RCMD, which eventually reduces the space complexity to O ( B 2 L ) .",
"Experimental setup We measure the time for predicting the similarities of 512 sentence pairs on a single V100 GPU while increasing the sequence length from 8 to 128, which is the most influential factor for inference time.",
"We repeat this process 10 times and report the average values.",
"Result Figure 3 shows the average elapsed time for inference.",
"The model with RCMD takes almost the same inference time as the model with the simple average pooling-based similarity.",
"We highlight that 98% of the sentences in STS13 dataset consist of at most 48 tokens and particularly, the time difference is negligible in case of predicting the sentence pairs whose sentences have less than 48 tokens.",
"This result shows that significant increment of inference time does not occur within the range of the sequence length owing to parallel GPU computations, even though RCMD has the quadratic time complexity with respect to the sentence length.",
"In this work, we present CLRCMD, a learning framework for an interpretable sentence similarity model based on optimal transport.",
"First, we view each sentence similarity measure as a transportation problem, pointing out the unexpressiveness of the existing pooling-based similarity.",
"Integrating the concept of optimal transport into a pretrained language model, CLRCMD defines the distance measure by using the semantically-aligned token pairs between two sentences and furthermore, it finetunes a model with this distance based on contrastive learning for better interpretability.",
"We empirically show that CLRCMD accurately predicts 5976 sentence similarity while providing interpretable token pair contributions consistent with human judgements.",
"With the belief that the ability to interpret model behavior is critical for future AI models, we focus on enhancing this virtue targeted on STS task throughout this research.",
"This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2018-0-00584, (SW starlab) Development of Decision Support System Software based on Next-Generation Machine Learning) and the NRF grant funded by the MSIT (South Korea, No.2020R1A2B5B03097210) ) and Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-01906, Artificial Intelligence Graduate School Pro-gram(POSTECH))."
] | [
"abstain",
"abstain",
"method",
"objective",
"objective",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"objective",
"abstain",
"objective",
"method",
"other"
] |