sentences
sequence
labels
sequence
[ "Grammatical Error Correction (GEC) has been recently modeled using the sequence-to-sequence framework.", "However, unlike sequence transduction problems such as machine translation, GEC suffers from the lack of plentiful parallel data.", "We describe two approaches for generating large parallel datasets for GEC using publicly available Wikipedia data.", "The first method extracts source-target pairs from Wikipedia edit histories with minimal filtration heuristics, while the second method introduces noise into Wikipedia sentences via round-trip translation through bridge languages.", "Both strategies yield similar sized parallel corpora containing around 4B tokens.", "We employ an iterative decoding strategy that is tailored to the loosely supervised nature of our constructed corpora.", "We demonstrate that neural GEC models trained using either type of corpora give similar performance.", "Fine-tuning these models on the Lang-8 corpus and ensembling allows us to surpass the state of the art on both the CoNLL-2014 benchmark and the JFLEG task.", "We provide systematic analysis that compares the two approaches to data generation and highlights the effectiveness of ensembling.", "Equal contribution.", "Listing order is random.", "Jared conducted systematic experiments to determine useful variants of the Wikipedia revisions corpus, pre-training and finetuning strategies, and iterative decoding.", "Chris implemented the ensemble and provided background knowledge and resources related to GEC.", "Shankar ran training and decoding experiments using round-trip translated data.", "Jared, Chris and Shankar wrote the paper.", "Noam identified Wikipedia revisions as a source of training data.", "Noam developed the heuristics for using the full Wikipedia revisions at scale and conducted initial experiments to train Transformer models for GEC.", "Noam and Niki provided guidance on training Transformer models using the Tensor2Tensor toolkit.", "Simon proposed using round-trip translations as a source for training data, and corrupting them with common errors extracted from Wikipedia revisions.", "Simon generated such data for this paper.", "Much progress in the Grammatical Error Correction (GEC) task can be credited to approaching the problem as a translation task (Brockett et al., 2006) from an ungrammatical source language to a grammatical target language.", "This has enabled Neural Machine Translation (NMT) sequence-to-sequence (S2S) models and techniques to be applied to the GEC task (Tao et al., 2018b; Chollampatt and Ng, 2018; Junczys-Dowmunt et al., 2018).", "However, the efficacy of NMT techniques is degraded for low-resource tasks (Koehn and Knowles, 2017).", "This poses difficulties for S2S approaches to GEC, as Lang-8, the largest publicly available parallel corpus, contains only 25 M words (Mizumoto et al., 2011).", "Motivated by this data scarcity, we present two contrasting approaches to generating parallel data for GEC that make use of publicly available English language Wikipedia revision histories 12 .", "Our first strategy is to mine real-world errors.", "We attempt to accumulate sourcetarget pairs from grammatical errors and their human-curated corrections gleaned from the Wikipedia revision histories.", "Unlike previous work (Grundkiewicz and Junczys-Dowmunt, 2014), we apply minimal filtering so as to generate a large and noisy corpus of 4 B tokens (Table 1).", "As a consequence of such permissive filtering, the generated corpus contains a large number of real grammatical corrections, 
but also noise from a variety of sources, including edits with drastic semantic changes, imperfect corrections, ignored errors, and Wikipedia spam.", "Our second strategy takes clean sentences from Wikipedia as targets and generates the corresponding source sentences by translating the target into another language and back.", "This round-trip translation introduces relatively clean errors, so the generated corpus is much less noisy than the human-derived Wikipedia corpus.", "However, these synthetic corruptions, unlike human errors, are limited to the domain of errors that the translation models are prone to making.", "Both approaches benefit from the broad scope of topics in Wikipedia.", "Table 1: Statistics computed over extant training sets for GEC (top) and corpora generated from Wikipedia in this work (bottom).", "We train the Transformer sequence-to-sequence model (Vaswani et al., 2017) on data generated from the two schemes.", "Fine-tuning the models on the Lang-8 corpus gives us additional improvements which allow a single model to surpass the state of the art on both the CoNLL-2014 and the JFLEG tasks.", "Finally, we explore how to combine the two data sources by comparing a single model trained on all the data to an ensemble of models.", "Wikipedia provides a dump of the revision histories of all Wikipedia pages.", "For each Wikipedia page, the dump contains chronological snapshots of the entire content of the page before and after every submitted edit; thus two consecutive snapshots characterize a single revision to the page.", "Because a small number of popular pages see disproportionate traffic, some pages grow very large.", "As we are interested in the edits between snapshots, and not the identical content that typically makes up a higher proportion of the revision histories for the largest pages, we discard pages larger than 64 MB.", "To prevent the remaining large pages from skewing the dataset towards their topics with their many revisions, we downsample consecutive revisions from individual pages, selecting only log_1.5(n) pairs for a page with a total of n revisions.", "This reduces the total amount of data 20-fold.", "Each remaining pair of consecutive snapshots forms a source-target pair.", "The process for extracting examples from a page's revision history is illustrated in Figure 1.",
"From the XML of each page in a source-target pair, we extract and align the text, removing non-text elements.", "We then probabilistically cut the aligned text, skipping over non-aligned sequences.", "Two cuts bound an example pair, for which the source sequence is provided by the older snapshot, and the target sequence by the newer snapshot.", "Following extraction of the examples, we do a small amount of corruption and filtration in order to train a model proficient at both spelling and grammar correction.", "We probabilistically introduce spelling errors in the source sequences at a rate of 0.003 per character, randomly selecting deletion, insertion, replacement, or transposition of adjacent characters for each introduced error.", "We discard examples exceeding a maximum length of 256 word-pieces.", "The majority of examples extracted by this process have identical source and target.", "Since this is not ideal for a GEC parallel corpus, we downsample identity examples by 99% to achieve 3.8% identical examples in the final dataset.", "The data generation scripts we use have been open-sourced at https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/data_generators/wiki_revision.py.", "In Figure 2, we show examples of extracted source-target pairs.", "While some of the edits are grammatical error corrections, the vast majority are not.", "As an alternative approach to extracting the edits from Wikipedia revisions, we extract sentences from the identity examples that were discarded during edit extraction, and generate a separate parallel corpus by introducing noise into those sentences using round-trip translation via a bridge language.", "Therefore, the original sentence from Wikipedia is the target sentence, and the output of the round-trip translation is the corresponding source sentence.", "The round-trip translations introduce noise according to both the weaknesses of the translation models and the various inherent ambiguities of translation.", "We create a corrupted dataset using each bridge language.", "We use French (Fr), German (De), Japanese (Ja) and Russian (Ru) as bridge languages because they are high-resource languages and relatively dissimilar from each other.", "Figure 1: Process for extracting source-target pairs from the revision history of a Wikipedia page.", "Figure 2: Example source-target pairs from each corpus.", "Thus, we generate a total of four corrupted datasets.", "The translations are obtained using a competitive machine translation system (Wu et al., 2016).", "These round-trip translated sentence pairs contain only a small fraction of the identity translations that are present in real-world GEC corpora.", "To address this deficiency, we augment this corpus with 2.5% identity translations.", "Analogous to Section 2, we want the models to learn both spelling and grammar correction.", "Thus, we randomly corrupt single characters via insertion, deletion, and transposition, each with a probability of 0.005/3.", "Round-trip translations do not contain some types of word and phrase errors (e.g., your/you're, should of/should have), and so we additionally corrupt the translated text by stochastically introducing common errors identified in Wikipedia.", "We first examine the Wikipedia revision histories to extract
edits of up to three words whose source and target phrases are close in edit distance, and which do not contain numbers or capitalization.", "For each of the remaining edits (original, revised), we compute the probability that the user typed original when they intended to type revised: P(original | revised) = C(original, revised) / C(revised), where C(x) refers to the count of x in the corpus.", "We then probabilistically apply these rules to corrupt the translated text.", "This process produces a parallel corpus identical in size to the Wikipedia revisions corpus, though with vastly different characteristics.", "Because the target sentences are Wikipedia sentences that were left unchanged for at least one Wikipedia revision, they are less likely to contain poor grammar, misspellings, or spam than the target sequences of the revisions data.", "Figure 3: F0.5 with iterative decoding on the CoNLL dev set.", "Triangles indicate performance with single-shot decoding.", "Also, the errors introduced by round-trip translation are relatively clean, but they represent only a subset of the domain of real-world errors.", "In contrast, the Wikipedia data likely has good coverage of the domain of real-world grammatical errors, but is polluted by significant noise.", "Examples from both corpora are shown in Figure 2.", "Examples of round-trip translations for each bridge language are shown in Table 2.", "4 Iterative Decoding", "Many sentences that require grammatical correction contain multiple errors.", "As a result, it can be difficult to correct all errors in a single decoding pass.", "This is especially a problem when using models trained on noisy parallel data such as Lang-8, where the target sentences still contain grammatical errors.", "Following other work on the GEC task (Dahlmeier and Ng, 2012a; Tao et al., 2018a), we employ an iterative decoding algorithm that allows the model to make multiple incremental corrections.", "This allows the model multiple chances to suggest individually high-confidence changes, accruing incremental improvements until it cannot find any more edits to make.", "Our iterative decoding algorithm is presented in Algorithm 1.",
"Given the source sentence S and a hypothesis H, Cost(H) refers to the negative log probability -log P(H | S) under the sequence-to-sequence model.", "In each iteration, the algorithm performs a conventional beam search but is only allowed to output a rewrite (a non-identity translation) if it has high confidence, i.e., its cost is less than the cost of the identity translation times a pre-specified threshold.", "Using iterative decoding allows a stricter threshold value than what is optimal for single-shot decoding, as a change ignored for being low confidence in one decoding iteration may be selected in the next.", "Using incremental edits produces a significant improvement in performance over single-shot decoding for models trained on the Wikipedia revision data, a highly noisy corpus, while models trained on the relatively clean round-trip translation data see no improvement.", "All models fine-tuned on Lang-8 see improvement with iterative decoding (Figure 3, Table 3).", "In Table 4, we show an example of iterative decoding in action.", "The model continues to refine the input until it reaches a sentence that does not require any edits.", "We generally see fewer edits being applied as the model gets closer to the final result.", "In this work, we use the Transformer sequence-to-sequence model (Vaswani et al., 2017), using the Tensor2Tensor open-source implementation (https://github.com/tensorflow/tensor2tensor).", "We use 6 layers for both the encoder and the decoder, 8 attention heads, embedding size d_model = 1024, a position-wise feed-forward network at every layer with inner size d_ff = 4096, and the Adafactor optimizer with inverse square root decay (Shazeer and Stern, 2018), using the transformer_clean_big_tpu setting.", "The word tokens are split into subwords using a variant of the byte-pair encoding technique (Sennrich et al., 2016b), described in Schuster and Nakajima (2012).", "Table 2: Example sentences generated via round-trip translation with introduced spelling errors.", "Table 3: Comparing iterative decoding to single-shot decoding for two models, trained on all Wikipedia revisions data and on all round-trip translation (RTT) data.", "Table 4: Iterative decoding on a sample sentence.", "We train with a batch size of approximately 64,000 word pieces.", "While training on the Wikipedia corpora, we set the learning rate to 0.01 for the first 10,000 steps, then decrease it proportionally to the inverse square root of the number of steps after that.", "We then fine-tune our models on Lang-8 for 50 epochs and use a constant learning rate of 3e-5.", "We stop the fine-tuning before the models start to overfit on a development set drawn from Lang-8.", "We report results on the CoNLL-2014 test set (Ng et al., 2014) and the JFLEG test set (Napoles et al., 2017; Heilman et al., 2014).", "Our initial experiments with iterative decoding showed that increasing beam sizes beyond 4 did not yield improvements in performance.", "Thus, we report all results using a beam size of 4.", "Our ensemble models are obtained by decoding with 4 identical Transformers trained and fine-tuned separately.", "Ensembles of neural translation systems are typically constructed by computing the logits from each individual system and combining them using either an arithmetic average (Sutskever et al., 2014) or a geometric average (Cromieres et al., 2016).", "Similar to Cromieres et al.
(2016), we find that a geometric average outperforms an arithmetic average.", "Hence, we report results using only this scheme.", "Following Grundkiewicz and Junczys-Dowmunt (2018) and Junczys-Dowmunt et al. (2018), we preprocess the JFLEG development and test sets with a spell-checking component, but do not apply spelling correction to the CoNLL sets.", "For the CoNLL sets, we pick the best iterative decoding threshold and number of iterations on a subset of the CoNLL-2014 training set, sampled to have the same ratio of modified to unmodified sentences as the CoNLL-2014 dev set.", "For JFLEG, we pick the best decoding threshold on the JFLEG dev set.", "We report the performance of our models by measuring F0.5 with the M2 scorer (Dahlmeier and Ng, 2012b) on the CoNLL-2014 dev and test sets, and the GLEU+ metric (Napoles et al., 2016) on the JFLEG dev and test sets.", "Table 5 reports statistics computed over the development and test sets.", "Table 5: Statistics for test/dev data.", "Table 6: Performance of the models trained on variants of data extracted from Wikipedia revision histories (top panel) and then fine-tuned on Lang-8 (bottom panel), and of a model trained only on Lang-8 with the same architecture.", "Two notable parameters of the revision data generation process are the rate of revision downsampling and the maximum edit distance.", "We generate four data sets using variations of these values: the Default setting uses the default values described in Section 2, Max-edit-28 and Max-edit-6 correspond to maximum edit distances of 28 and 6 wordpieces respectively, and Dwnsample-1.35 corresponds to a revision downsampling rate of log_1.35(n) for a page with a total of n revisions (whereas the default setting uses a rate of log_1.5(n)).", "We train a fifth model on the union of the datasets.", "Table 6 shows that varying the data generation parameters led to modest variation in performance, but training on the union of the diverse datasets did not yield any benefit.", "Fine-tuning yields large improvements for all models.", "As a sanity check, we also trained a model only on Lang-8 with the same architecture.", "All pretrained and fine-tuned models substantially outperform this Lang-8-only model, confirming the usefulness of pre-training.", "As with the revision data, we train a model on each of the round-trip translation datasets, and a fifth model on the union of their data, then fine-tune all models.", "The results are shown in Table 7.", "Using Japanese as the bridge language gives the best performance on CoNLL-2014, even when compared to the model trained on all round-trip data.", "This is likely because the error patterns generated using Japanese round-trip translations are very similar to those in the CoNLL-2014 set, which was created from non-native speakers of English (Ng et al., 2014).", "Pooling all round-trip translations dilutes this similarity and lowers performance on CoNLL-2014.", "However, the model trained on all data performs best on the JFLEG set, which has a different distribution of errors relative to CoNLL-2014 (Napoles et al., 2017).", "After fine-tuning, all round-trip models perform considerably better than the Lang-8 model.", "Table 7: Performance of the models trained on the round-trip translations (top panel) and fine-tuned on Lang-8 (bottom panel), and of a model trained only on Lang-8 with the same architecture.", "Having generated multiple diverse datasets, we investigate strategies for utilizing combinations of data from multiple sources.", "For each corpus, we train a single model on all data and compare its performance to an ensemble of the 4
individually trained models (Table 8).", "The ensemble clearly outperforms the single model for both types of data.", "We additionally train a single model on the union of all Revisions and Round-Trip Translated datasets reported on in Tables 6 and 7, which we compare to an ensemble of the 8 models trained individually on those datasets.", "When Wikipedia edits are combined with the round-trip translations, the single-model performance remains unchanged on CoNLL-2014, while the ensemble shows an improvement.", "This suggests that when utilizing disparate sources of data, an ensemble is preferable to combining the data.", "We compare the performance of our best individual system, trained on all revisions, and the best ensemble of 8 models trained from both revisions and round-trip translations, on the CoNLL-2014 and JFLEG datasets (Table 9).", "We only report the performance of models that use the publicly available Lang-8 and CoNLL datasets.", "Table 8: Combining datasets using either a single model trained on all data versus an ensemble of models.", "All models are fine-tuned on Lang-8.", "Our single system trained on all revisions outperforms all previous systems on both datasets, and our ensemble improves upon the single-system result.", "All models trained on Wikipedia-derived data benefit significantly from fine-tuning on Lang-8 (Tables 6 and 7).", "In Table 10, we compare example corrections proposed by two Wikipedia-derived models to the corrections proposed by their fine-tuned counterparts.", "The changes proposed by the revisions-trained model often appear to be improvements to the original sentence, but fall outside the scope of GEC.", "Models fine-tuned on Lang-8 learn to make more conservative corrections.", "The fine-tuning on Lang-8 can be viewed as an adaptation technique that shifts the model from the Wikipedia-editing task to the GEC task.", "On Wikipedia, it is common to see substantial edits that make the text more concise and readable, e.g. replacing 'which is RFID for short' with '(RFID)', or removing less important clauses like 'Then we can see that'.", "But these are not appropriate for GEC, as they are editorial style fixes rather than grammatical fixes.", "The models trained on round-trip translation seem to make fewer drastic changes.", "Table 11 reports F0.5 across broad error categories for models trained from revisions and round-trip translations on the CoNLL-2014 test set.", "Using non-public sentences beyond the regular Lang-8 and CoNLL datasets, Tao et al. (2018b) recently obtained an F0.5 of 61.3 on CoNLL-2014 and a GLEU of 62.4 on JFLEG.", "Using fine-tuning data beyond the standard datasets, we obtain an F0.5 of 62.8 on CoNLL-2014 and a GLEU of 65.0 on JFLEG.", "The error categories were tagged using the approach in Bryant et al.
(2017).", "Although the overall F 0 .", "5 of the 2 ensembles are similar, there are notable differences on specific categories.", "The ensemble using round-trip translation performs considerably better on prepositions and pronouns while the revision ensemble is better on morphology and orthography.", "Thus, each system may have advantages on specific domains.", "Progress in GEC has accelerated rapidly since the CoNLL-2014 Shared Task (Ng et al., 2014).", "Rozovskaya and Roth (2016) combined a Phrase Based Machine Translation (PBMT) model trained on the Lang-8 dataset (Mizumoto et al., 2011) with error specific classifiers.", "Junczys-Dowmunt and Grundkiewicz (2016) combined a PBMT model with bitext features and a larger language model.", "The first Neural Machine Translation (NMT) model to reach the state of the art on CoNLL-2014 (Chollampatt and Ng, 2018) used an ensemble of four convolutional sequence-to-sequence models followed by rescoring.", "The current state of the art ( F 0 . 5 of 56.25 on CoNLL-2014) using publicly available Lang-8 and CoNLL data was achieved by Grundkiewicz and Junczys-Dowmunt (2018) with a hybrid PBMT-NMT system.", "A neural-only result with an F 0 .", "5 of 56.1 on CoNLL-2014 was reported by Junczys-Dowmunt et al. (2018) using an ensemble of neural Transformer models (Vaswani et al., 2017), where the decoder side of each model is pretrained as a language model.", "From a modeling perspective, our approach can be viewed as a direct extension of this last work.", "Rather than pretraining only the decoder as a language model, we pretrain on a large amount of parallel data from either Wikipedia revision histories or from round-trip translations.", "While pretraining on out-of-domain data has been employed previously for neural machine translation (Luong and Manning, 2015), it has not been presented in GEC thus far, perhaps due to the absence of such large datasets.", "Tao et al. (2018b) apply iterative decoding, where two neural models, trained in left-to-right and right-to-left directions, are applied in an interleaved manner.", "Similar to their study, we find that iterative decoding can improve the performance of GEC.", "Prior work (Brockett et al., 2006; Foster and Andersen, 2009; Rozovskaya and Roth, Model CoNLL-2014 JFLEG Precision Recall F 0 .", "Table 9 : Comparison of recent state-of-the-art models (top) and our best single-system and ensemble models (bottom) on the", "2010), (Felice et al., 2014; Xie et al., 2016; Rei et al., 2017) has investigated multiple strategies for generating artificial errors in GEC.", "Cahill et al. (2013) show that preposition corrections extracted from Wikipedia revisions improve the quality of a GEC model for correcting preposition errors.", "Back-translation (Sennrich et al., 2016a; Xie et al., 2018) addresses data sparsity by introducing noise into a clean corpus using a translation model trained in the clean to noisy direction.", "However, training such a reverse translation model also requires access to parallel data which is scarce for GEC.", "In contrast, round-trip translation attempts to introduce noise via bridge translations.", "Round-trip translations have been investigated for GEC.", "Madnani et al.", "Madnani et al. (2012) combine round-trip translations to generate a lattice from which the best correction is extracted using a language model.", "Desilets et al. 
(2009) use round-trip translations for correcting preposition errors.", "In contrast to these approaches, we employ round-trip translations for generating a large parallel training corpus for neural GEC models.", "Motivated by data scarcity for the GEC task, we present two contrasting approaches for generating large parallel corpora from the same publicly available data source.", "We believe both techniques offer promising research avenues for further development on the task.", "We show that models trained exclusively on minimally filtered English Wikipedia revisions can already be valuable for the GEC task.", "This approach can be easily extended to the many other languages represented in Wikipedia, presenting an opportunity to extend GEC into languages that may have no extant GEC corpora.", "Table 11: F0.5 across error categories on the CoNLL-2014 test set, for Revisions (Pre-trained/Fine-tuned/Ensemble) versus Round-trip Translations (Pre-trained/Fine-tuned/Ensemble): Adjective 16.9/29.4/36.6 vs. 14.4/27.8/37.9; Adverb 31.5/39.7/43.5 vs. 21.7/33.3/44.6; Determiner 31.3/57.2/59.4 vs. 27.4/57.7/59.5; Morphology 64.5/66.1/66.1 vs. 38.7/59.3/62.0; Noun 24.1/28.6/33.2 vs. 8.6/27.5/32.4; Orthography 69.4/57.1/69.6 vs. 19.2/58.6/57.9; Preposition 33.0/49.2/55.6 vs. 30.3/52.7/61.9; Pronoun 34.9/34.1/44.6 vs. 24.4/41.7/50.1; Punctuation 26.7/29.5/36.4 vs. 29.8/18.4/33.3; Spelling 60.6/69.2/66.7 vs. 51.0/58.5/62.5; Verb 36.1/47.1/43.2 vs. 20.7/45.2/43.2; Word Order 45.5/33.3/52.1 vs. 34.8/42.9/45.5.", "While we expect pre-training on Wikipedia to give us a reasonable model, it may be crucial to fine-tune this model on small amounts of clean, in-domain corpora to achieve good performance.", "When extracting examples from the Wikipedia revisions, we implemented minimal filtration in pursuit of simplicity, and to produce a sufficiently large dataset.", "Implementing more complex filtration in order to reduce the noise in the generated dataset will likely be a productive avenue for increasing the value of this approach.", "The performance achieved by the reported Wikipedia revisions-trained models, both with and without fine-tuning, may be used as a baseline by which to evaluate smaller, cleaner datasets drawn from Wikipedia revisions.", "Round-trip translation takes advantage of the advanced state of machine translation relative to GEC by leveraging extant translation models as a source of grammatical-style data corruption.", "In this work, we only experiment with producing English-language GEC corpora, but this technique can be extended to any of the many languages for which translation models exist.", "It would be useful to assess how the translation quality influences the performance of the resulting GEC model.", "In our experiments with round-trip translation, we used target sentences drawn from Wikipedia to maintain a reasonable comparability between the two techniques.", "However, there is no constraint preventing the application of round-trip translation to diverse data sources; any source of clean text can be turned into a parallel GEC corpus.", "This can be used to increase diversity in the generated data, or to generate domain-specific GEC corpora (e.g.
patents).", "We observe that pooling two diverse data sources used to train competitively performing models on the same task can degrade performance.", "This suggests that within datasets useful for a specific task, there may be greater value to be discovered in finding optimal partitions of the data for training models which can then be combined using ensembles.", "Prior work in combining diverse data sources includes addition of special tokens (John-son et al., 2017) and meta-learning (Finn et al., 2017).", "We intend to compare ensembling with these alternatives.", "We have opensourced the scripts used to extract example pairs from Wikipedia, which we hope will become a resource in the further development of models for GEC as well as other NLP tasks that rely on edit histories, such as sentence re-writing (Botha et al., 2018) and text simplification (Tonelli et al., 2016).", "We thank Jayakumar Hoskere, Emily Pitler, Slav Petrov, Daniel Andor, Alla Rozovskaya and Antonis Anastasopoulos for helpful suggestions.", "We also thank Jayakumar Hoskere, Shruti Gupta and Anmol Gulati for providing various GEC resources that were used in this paper." ]
[ "abstain", "abstain", "method", "abstain", "abstain", "method", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "result", "method", "result", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "result", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "other", "other", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "method", "result", "other", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "other", "other" ]
[ "We present a novel approach to the problem of text style transfer.", "Unlike previous approaches requiring style-labeled training data, our method makes use of readily-available unlabeled text by relying on the implicit connection in style between adjacent sentences, and uses labeled data only at inference time.", "We adapt T5 (Raffel et al., 2020), a strong pretrained text-to-text model, to extract a style vector from text and use it to condition the decoder to perform style transfer.", "As our label-free training results in a style vector space encoding many facets of style, we recast transfers as targeted restyling vector operations that adjust specific attributes of the input while preserving others.", "We demonstrate that training on unlabeled Amazon reviews data results in a model that is competitive on sentiment transfer, even compared to models trained fully on labeled data.", "Furthermore, applying our novel method to a diverse corpus of unlabeled web text results in a single model capable of transferring along multiple dimensions of style (dialect, emotiveness, formality, politeness, sentiment) despite no additional training and using only a handful of exemplars at inference time.", "There has been a recent surge of interest in text style transfer, with the aim of training models able to modify specific attributes of input text (e.g., sentiment or formality) while preserving the remaining content.", "For example, a sentiment transfer model might transform the input best book ever! into worst book ever!, while a formality transfer model might change the same input into This is the best book I have ever read.", "In these contexts, we de-fine style as the attributes intended to be changed, Work done while at Google Research.", "Please direct correspondence to [email protected] , [email protected] and [email protected] .", "while content consists of the attributes intended to be preserved.", "1 Work in this area falls into three categories.", "Supervised approaches like Jhamtani et al. (2017) transfer between pre-selected styles, and rely on parallel training data to learn the desired input/output correspondence.", "This method is limited by the availability of parallel corpora.", "So-called unsupervised approaches like Li et al. (2018) and Lample et al. (2019) remove the need for parallel data, but still require that all training examples have style labels, and are limited to transfer between a pre-specified set of styles.", "Few-shot approaches like that of Xu et al. 
(2020) remove the need for any training labels, instead using a small number of labeled examples during inference.", "While the most challenging setting, this offers the potential for transferring between arbitrary styles at inference time and has significant value, as curated datasets are not available for many style attributes.", "In this work, we explore the hypothesis that large pretrained text-to-text models like T5 (Raffel et al., 2020) already contain a strong representation of textual style, which can be extracted and used to condition the decoder of a style transfer model through a relatively lightweight fine-tuning procedure.", "To isolate style information in the absence of labels, we rely on the observation that style is a slow-moving feature, which tends to be consistent over large spans of text.", "Specifically, given two adjacent sentences from an unlabeled corpus, we train our model to extract a style vector from the first and use that vector to perform denoising and other reconstruction tasks on the second.", "This technique extends the approach of Lample et al. (2019) to the few-shot setting, and is loosely reminiscent of the work of Akama et al. (2018), who found large context windows useful for encoding style information in word embeddings.", "Krishna et al. (2020) use a different definition of style, under which certain transfers such as sentiment would instead be examples of attribute transfer.", "Our approach also allows us to reformulate the style transfer operation as a directional operation in style vector space using the difference between target and source style vectors; we call this targeted restyling.", "When combined with a novel tunable inference technique for controlling token add/delete rates, this gives our final model: Text Style Extraction and Tunable Targeted Restyling (TextSETTR).", "Our main contributions are to: (1) present a new, flexible approach to few-shot style transfer, (2) use sentence adjacency as a means for inducing text style representations, (3) reframe style transfer as targeted restyling directional operations in style space, (4) introduce tunable inference for finer-grained control of transfers, (5) show the effectiveness of noisy back-translation training, and (6) illustrate few-shot generalization to a range of style attributes including dialect, emotiveness, formality, politeness, and sentiment.", "Figure 1 illustrates our proposed TextSETTR architecture.", "At a high level, our approach follows Lample et al.
(2019), who train a denoising auto-encoder conditioned on a fixed-width style vector.", "The key difference in our case is that the true style is unknown at training time.", "To overcome this, we jointly train a style extractor component to induce a useful style representation (one that can aid in reconstruction) from text in the nearby context.", "We describe this in more detail below.", "We conduct our experiments using a modified version of the Text-to-Text Transfer Transformer (T5) (Raffel et al., 2020).", "Like T5, our model includes a transformer-based encoder and decoder.", "As in T5 pretraining, the input to the encoder is a corrupted version of the target, resulting in a reconstruction task.", "Our goal is to design a type of corruption that results in this training task resembling style transfer, despite the lack of labeled training data.", "Our core addition to T5 is the style extractor.", "This component's architecture is based on that of the encoder, and its input is an uncorrupted sentence in the same style as the target; relying on our assumption that style is a slow-moving feature, we use the sentence preceding the target (the context) for this.", "This encourages extracting a style representation that is useful for repairing the corrupted input.", "We note that this can result in a representation that encodes slow-moving attributes in general, which may include some features that do not fit an intuitive definition of textual style (such as topic).", "The only architectural difference between the encoder and the style extractor is that we mean-pool the style extractor's hidden state sequence into a single fixed-width style vector; in our experiments, the dimensionality of this vector and the encoder hidden states is 1024.", "To incorporate the style vector into the rest of the model, we simply add it to each of the final encoder hidden states.", "We initialize the weights of our model with those of a pretrained T5 model.", "We initialize both the style extractor and the encoder from the pretrained encoder, but the weights are not tied during training.", "We experiment with combinations of three different reconstruction tasks, each contributing a loss term.", "All three share the same overall structure, in which a sentence s_i in the dataset is corrupted by some function f to produce s'_i = f(s_i).", "The cross-entropy loss is calculated using the uncorrupted sentence s_i as the target, the corrupted sentence s'_i as the input, and the uncorrupted preceding sentence s_(i-1) as the context.", "The three choices of f are Noise (N), Back-Translation (BT), and Noisy Back-Translation (NBT), described below.", "Noise (N): This function corrupts the input by (i) dropping, (ii) replacing, and/or (iii) shuffling tokens, in that order.", "For each example we sample a separate noise probability p for each sub-type of noise from a uniform distribution in the range 20-60%; doing so should widen the model's range of possible style transfers at test time.", "For drop noise, we drop each token in s_i with probability p.", "For replace noise, let s_ik be the k-th token within s_i.", "For each s_i, a random other example s_j is chosen, and then each token s_ik is replaced by s_jk with probability p.", "If s_j has fewer than k tokens, then the replacement does not occur.", "For shuffle noise, each token in s_i is chosen with probability p, and then all chosen tokens are randomly shuffled to the position of another chosen token, leaving non-chosen tokens in place.", "The use of drop and shuffle
noise results in a loss term similar to the denoising loss used by Lample et al. (2019).", "Their motivation for this loss was to encourage language modeling.", "Figure 1: The TextSETTR architecture and an example inference (diagram labels: Style Target (A -> B) + Input; Tuning Ranges, Add 40-70%, Delete 25-35%; Input 'It doesn't work' -> Encoder -> Decoder -> Output 'It works great'; Style A Exemplars; Style B Exemplars).", "As we fine-tune an already-strong T5 language model in our experiments, our motivation is rather to introduce a conditional element to the language model, in the form of the extracted style vector input.", "Back-Translation (BT): This corruption function, used by Lample et al. (2019), runs the current version of the model in inference mode to transfer s_i into a different style, giving the corrupted s'_i.", "In prior work using labels, specifying a different target style was straightforward.", "In our case, because we do not have access to labels, we simply sample a random sentence s_j to use as the context.", "To increase the diversity of the generated examples, we decode with sampling instead of greedy decoding.", "Because s'_i is produced by a strong language model, BT should result in examples where both the input and output are coherent sentences, matching our inference setting.", "By contrast, Noise corruption does not resemble test-time inputs.", "Noisy Back-Translation (NBT): This novel corruption function is a composition of the previous two.", "Noise is first applied to s_i as described above, and the result is used as the input (with a randomly sampled s_j as the context) to the model in inference mode to produce s'_i via sampling, as in BT.", "Once the model has learned to undo random noise, NBT should produce training examples where some of the tokens are preserved from s_i while others were generated by the model itself under the influence of the incorrect context s_j.", "This is similar to BT, but we hypothesize that it may be better suited to style transfer.", "BT was originally used for machine translation (Sennrich et al., 2016), a setting where most or all input tokens need to change.", "In contrast, style transfer within a single language usually requires only changing a subset of tokens; the training examples resulting from NBT should have this property.", "We believe that this will encourage the model to identify which tokens in the input do not match the target style indicated by s_(i-1) and change them, which is exactly what we want a style transfer model to do.", "Final Loss: The final loss term used for training is the sum of the above loss terms, each calculated from the same input s_i.", "However, not every model we experiment with includes all three losses.", "Tunable Add/Delete Rates: In preliminary experiments, we observed a recurring problem that the model would often change either far too little (failing to achieve the target style) or far too much (failing to preserve the input content).", "To address this problem, we introduce a tunable inference mechanism to constrain how much content should be added and deleted at inference time.", "For every input/output pair during training, we calculate the proportions of tokens that were added and deleted.", "The add rate is the proportion of output tokens absent from the input, and the delete rate is the proportion of input tokens absent from the output.", "This calculation ignores word order; as one example, if a token appears three times in the input and five times in the output,
two of the five occurrences are counted as added.", "We provide these rates to the decoder as ranges covering, but not necessarily centered on, the true rates.", "Specifically, we sample each range width uniformly from [0,1], and uniformly sample the alignment of the true rate within the range.", "The final ranges are clipped to [0,1], and a vector containing the upper and lower bound of each range is prepended to the encoder hidden state sequence.", "Targeted Restyling: While previous work on style transfer has largely assumed a fixed set of discrete styles, we expect our model's learned style representations to capture a rich summary of the sentence covering many attributes without specifying them beforehand.", "For example, a given style vector might encode that a sentence is informal, humorous, in British English, and so on.", "In this framework, transferring a single attribute (e.g., informal → formal) is not as simple as just providing a vanilla formal style target, as this would ignore all the other attributes that defined the original input.", "Rather, we must operate in style space to construct a new target style that is simultaneously formal, humorous, British, and so on.", "Concretely, at inference time, we assume access to a small set of exemplar sentences (between 1 and 100) for both the source value (e.g., informal) and target value (e.g., formal) of the attribute being modified.", "We infer style vectors for each exemplar using the style extractor, and take the mean of each class, giving vectors v_src and v_trg.", "Assuming the exemplar pools are relatively diverse, this averaging should wash out most untargeted attributes.", "To transfer an input sentence x, we apply a targeted restyling in the appropriate direction.", "After extracting the original style from the input itself, v_x, we compute the target output style by moving in the direction of the delta between the source and target attribute values, as in (1): v_x + λ(v_trg - v_src), producing the style vector used for decoding.", "In practice, we find that the delta scale λ is an important hyperparameter to tune.", "Generally values in the range [1.0, 10.0] work well, with the best values depending on the attribute and the exemplars in question.", "To evaluate our approach and better understand the effects of our various design choices, we test on few-shot sentiment transfer, using the Amazon reviews dataset of Li et al. (2018).", "However, as their training split doesn't indicate which sentences were adjacent in the original reviews, we make use of a different source of raw review text.", "Training Procedure: Our unlabeled training data comes from the 233.1M Amazon reviews provided by Ni et al. (2019).", "Ignoring the star ratings completely, we extract adjacent lines from multi-line reviews to use as the context and input for our training procedure, giving 23.6M examples.", "We also preprocess all text to match the format of the Li et al. (2018) data, as detailed in Appendix A.4.", "Initializing our model from pretrained T5 (t5.1.1.large), we fine-tune on these examples, optimizing the joint reconstruction loss from Section 2.",
"Our default TextSETTR configuration is selected based on preliminary experiments (on development data) varying the set of reconstruction tasks and inference procedures.", "The model uses an equally weighted combination of the Noise (N) and Noisy Back-Translation (NBT) tasks.", "For both tasks, we use drop and replace noise, but no shuffle noise.", "We fine-tune for 10k steps, with a batch size of 65,536 tokens, and a fixed learning rate of 1e-3.", "Evaluation Procedure: Following prior work, we use automatic metrics to assess attribute control (sentiment) and content preservation on the data from Li et al. (2018).", "To estimate the sentiment of the output, we fine-tune a BERT-Large classifier (Devlin et al., 2019) on the train split, scoring 87.8% accuracy on the dev split.", "For content preservation, we follow Sudhakar et al. (2019) and Xu et al. (2020) and calculate self-BLEU between the output and input, using SacreBLEU (Post, 2018) with version string BLEU+case.mixed+numrefs.1+smooth.exp+tok.13a+version.1.4.13.", "Some prior work instead reports BLEU scores between outputs and the human-generated transfers from Li et al. (2018); we found this to be highly correlated with self-BLEU but report it in Appendix A.3 for completeness.", "Following Xu et al. (2018), we report G-score (the geometric mean of accuracy and content) as a summary of overall model quality.", "To perform transfers, we follow the procedure from Section 2.3.", "For our default setup, we sample 100 positive and 100 negative exemplars from the Li et al. (2018) train split.", "Unless otherwise specified, we use greedy decoding, a delta scale of λ=8, and add/delete tuning ranges of 20-40%.", "Core Results: Figure 2 shows our core results.", "Our default TextSETTR configuration (N+NBT training, tuning ranges 20-40%) achieves 73.7% classifier-judged accuracy at swapping sentiment, while still staying somewhat close to the original input text (self-BLEU 34.7).",
(2018) test set, and ask three annotators to evaluate each input/output Model Sentiment Preservation Fluency TextSETTR (1030%) 2.0 3.5 2.9 TextSETTR (2040%) 2.5 2.6 4.0 Delete&Retrieve 2.5 3.1 3.3 B-GST 2.2 2.9 3.6 Table 2: Human evaluation metrics.", "pair on three metrics: sentiment transfer (how well the model changed the sentiment), content preservation, and fluency, on scales of 15.", "The results in Table 2 confirm that TextSETTR achieves similar quality to models that benefit from training labels.", "Further details are presented in Appendix A.5.", "Modifying Inference Procedure To better understand the value of our proposed targeted restyling mechanism, we consider an alternative inference procedure where we ignore the style of the input and simply use the average target exemplar style v trg as the style vector.", "We expect that since our learned style space covers multiple attributes, this will result in setting the target attribute (e.g. sentiment) while simultaneously overwriting all other style attributes (e.g. formality) using the average style of the target exemplars.", "This is borne out in our overwrite style ablation, which performs significantly worse than our baseline: accuracy drops from 54.0% to 25.3% with no gain in self-BLEU.", "To assess the value of tunable add/delete rates, we also train a model ( tunable) without this feature.", "While the automatic metrics are slightly above the TextSETTR line, we observe several advantages to the tunable model.", "For one, we observe it significantly reduces the variance in self-BLEU across different inputs.", "For example, focusing on the case of overly high self-BLEU, we find that without tunable inference, 14.6% of dev eval outputs are identical to their inputs, whereas with tunable inference, this goes to 0.9%.", "Additionally, through qualitative analysis in Section 4, we find that tunable inference allows more flexibility for controlling different types of transfer.", "Adjusting Data Sizes While our unlabeled training data set consists of 23.6M examples, our model only sees 5.1M of these over its 10k steps of training.", "Yet this is still nearly 10 more data than the 0.6M examples in the Li et al. (2018) training set used by previous approaches.", "For a more direct comparison, we experiment with a small train set, sampling 0.6M examples from our training set.", "Remarkably, the results in Figure 2 are nearly identical to our baseline, supporting our hypothesis that a fairly lightweight adaptation is sufficient to allow T5 to extract and transfer textual style.", "To test the limits of our model's generalization, we reduce the set of exemplars to four manually selected examples of each class.", "In this setting, we also find reducing delta scale to =4 is bene-ficial.", "The results, shown as manual exemplars in Figure 2, are still competitive, indicating that our approach generalizes well to this very-few-shot inference setting.", "In the other direction, we find that increasing the number of sampled exemplars from 100 to 1000 only provides small additional gains.", "Modifying Training Task Lample et al. 
(2019) showed promising results by combining noise (N) with back-translation (BT).", "However we find this combination unstable.", "6 When training for 10k steps, our N and N+BT models nearly always copy their input.", "Training for 50k steps recovers reasonable performance, but the metrics still fall below the TextSETTR line, using our novel NBT task.", "We also experiment with using NBT in isolation, but this again underperforms our baseline.", "We expect that the denoising task helps to ensure the NBT inputs (themselves the outputs of denoising) consist of realistic well-formed text.", "Finally, while Lample 6 For all experiments in the paper, we use 0.0 for the add/delete rates during the forward pass of back-translation.", "However we later found that using random add/delete rates in back-translation can improve performance in the N+BT setting.", "On sentiment transfer, this improved our N+BT ablation to self-BLEU 42.4, accuracy 71.4%, G-score 55.0.", "et al. (2019) use drop and shuffle noise, we find that only drop and replace are valuable.", "To demonstrate that our learned style extractor encodes multiple aspects of textual style, we compute style vectors for 12,000 lines of text from three review categories (Fashion, Software, Pantry) from the Ni et al. (2019) Amazon data.", "Within each category, we sample 2,000 positives (4 or 5 star) and 2,000 negatives (1 or 2 star), filtering examples where our BERT classifier disagrees with the label.", "Figure 3 (bottom) plots a 2D UMAP dimensionality reduction (McInnes et al., 2018) of the vectors, and shows clear separations among sentiments and product categories.", "The top row runs UMAP with the same settings, but over style vectors from our model before training, where the style extractor is initialized from pretrained T5.", "The contrast is a clear indication that our training procedure is helping to learn a representation space where sentiment and topic values are well separated.", "To confirm that the observed separation isn't an artifact of dimensionality reduction, we compute the average distance between style vectors", "(a) within a class, and", "(b) across classes.", "We measure separation as the relative increase in mean distance between these two conditions.", "For product category, we find TextSETTR training improves separation from 1 .", "7 % to 8 .", "1 %.", "For sentiment, TextSETTR training improves separation from 0 .", "9 % to 4 .", "7 %.", "An advantage of few-shot style transfer is that, in theory, a single model can perform transfer along any dimension of style given only a few exemplars, without the need for additional training.", "In this section, we investigate the degree to which our approach achieves this goal in practice.", "For this purpose, we train a single general-purpose TextSETTR model, with the same configuration as our model from Section 3, except fine-tuned for 200k steps on English Common Crawl data (the same C4 data that T5 pretrained on) instead of Amazon reviews.", "Qualitative Evaluation Given that our architecture limits the style representation to 1024 dimensions, one may ask how the unsupervised model will make use of this capacity, and which style attributes will be encoded in the learned space.", "Encouragingly, we find that our model trained on un-Before TextSETTR training (pretrained T5 initialization) After TextSETTR training Figure 3: 2D UMAP embeddings of the style vectors extracted by our TextSETTR model before and after training, for text inputs from Amazon reviews covering three 
product categories and two sentiment labels.", "labeled Common Crawl data is capable of transferring along many independent axes of style.", "Table 3 shows selected successful examples of our Common Crawl model transferring emotiveness, dialect, politeness, formality and sentiment.", "The same model is used in each case, with no additional training.", "At inference time, a tiny set of exemplars (15 examples of each class) is the only labeled data used to compute the style vector delta; these exemplars are presented in Appendix A.2.", "Across each type of transfer, we see evidence of generalization beyond the specifics of the chosen exemplars.", "In making text more emotive, the model uses amazing and blown away , despite these terms not occurring in the exemplars.", "In making text more polite, the model inserts novel hedges like perhaps and I could be wrong .", "In transferring between American and British styles, the model generalizes to unseen vocabulary items ( elevator lift ) and draws sound analogies ( senators MPs ).", "We do note though that the latter case illustrates that the model is willing to change the semantic content of the input in cases where it would otherwise be out-of-place in the target style.", "Future work includes investigating ways to control this in settings where such behavior is not desired.", "Quantitative Evaluation To assess the quality of our general-purpose TextSETTR model, we benchmark the same model on three distinct transfer tasks in Table 4. 7 The sentiment transfer task follows the evaluation procedure from Section 3. While our generic model underperforms our model trained on Amazon reviews, it still outperforms other few-shot methods.", "For author transfer , we use the Shakespeare-to-modern task of Jhamtani et al. (2017).", "Here, TextSETTR outperforms the previous best model of He et al. (2020) that leveraged 36,790 labeled examples during training.", "For personality transfer , we use the task of Li et al. (2020), which requires transferring between three personalities: angry, happy, malicious.", "We compare 8 TextSETTR, which sees no labels in training and only 100 of each class in inference, with CARA (Li et al., 2020), which trained on 2,604 labels.", "7 For each task, we set our tuning ranges to 2040% and compute target styles using 100 exemplars of each class taken from the train set.", "We use values of sentiment:8, author:16, personality:8.", "To measure accuracy, we fine-tune BERT-Large classifiers over the training data, reaching validation accuracies of sentiment:87.8%, author:89.7%, personality:81.9%.", "8 Note, as Li et al. (2020) use a different classifier to assess accuracy, those numbers may not be directly comparable.", "In addition to performing style and attribute transfer, we find that our system can also be used as a style-aware language model capable of completing prompts in a specified style.", "Examples of completions in American and British English are given in Table 5. In each case, the input is of the form My favorite X: .", "Despite the fact that TextSETTR is not trained specifically for completions, we can use the add/delete rates to encourage the model to insert a few additional tokens, while leaving the original prompt largely unchanged.", "9 The completions demonstrate knowledge of stereotypical American and British culture.", "It is remarkable that the model is able to generalize to deeper cultural differences such as music and drink preferences, given only the shallow vocabulary differences (e.g., neighbor vs. 
"Across each type of transfer, we see evidence of generalization beyond the specifics of the chosen exemplars.", "In making text more emotive, the model uses amazing and blown away, despite these terms not occurring in the exemplars.", "In making text more polite, the model inserts novel hedges like perhaps and I could be wrong.", "In transferring between American and British styles, the model generalizes to unseen vocabulary items (elevator → lift) and draws sound analogies (senators → MPs).", "We do note though that the latter case illustrates that the model is willing to change the semantic content of the input in cases where it would otherwise be out-of-place in the target style.", "Future work includes investigating ways to control this in settings where such behavior is not desired.", "Quantitative Evaluation To assess the quality of our general-purpose TextSETTR model, we benchmark the same model on three distinct transfer tasks in Table 4. The sentiment transfer task follows the evaluation procedure from Section 3. While our generic model underperforms our model trained on Amazon reviews, it still outperforms other few-shot methods.", "For author transfer, we use the Shakespeare-to-modern task of Jhamtani et al. (2017).", "Here, TextSETTR outperforms the previous best model of He et al. (2020) that leveraged 36,790 labeled examples during training.", "For personality transfer, we use the task of Li et al. (2020), which requires transferring between three personalities: angry, happy, malicious.", "We compare TextSETTR, which sees no labels in training and only 100 exemplars of each class at inference, with CARA (Li et al., 2020), which trained on 2,604 labels.", "(For each task, we set our tuning ranges to 20-40% and compute target styles using 100 exemplars of each class taken from the train set.", "We use λ values of sentiment:8, author:16, personality:8.", "To measure accuracy, we fine-tune BERT-Large classifiers over the training data, reaching validation accuracies of sentiment:87.8%, author:89.7%, personality:81.9%.)", "(Note, as Li et al. (2020) use a different classifier to assess accuracy, those numbers may not be directly comparable.)", "In addition to performing style and attribute transfer, we find that our system can also be used as a style-aware language model capable of completing prompts in a specified style.", "Examples of completions in American and British English are given in Table 5. In each case, the input is of the form My favorite X:.", "Despite the fact that TextSETTR is not trained specifically for completions, we can use the add/delete rates to encourage the model to insert a few additional tokens, while leaving the original prompt largely unchanged.", "The completions demonstrate knowledge of stereotypical American and British culture.", "It is remarkable that the model is able to generalize to deeper cultural differences such as music and drink preferences, given only the shallow vocabulary differences (e.g., neighbor vs. neighbour) presented in the limited set of exemplars in Table 9.", "It is also worth highlighting that, thanks to our directional transfer procedure, these completions are not merely typical American or typical British such as we would expect from a conditional language model trained on each sub-domain of text.", "Rather, since our inference procedure pushes the style away from one domain and towards the other, the resulting completions are distinctive representations of each dialect.", "As one example, we expect quinoa would not only be a common American favorite, but also an uncommon British favorite.", "(We note that in transferring American to British, the model prefers to change the prompt from favorite to favourite.)", "Additional examples of using our model for tasks other than pure style transfer are presented in Appendix A.1.", "As mentioned at the outset, recent work on text style transfer falls into three classes: supervised, unsupervised, and few-shot.", "Supervised style transfer has seen limited research due to the difficulty of obtaining parallel data.", "Examples include Jhamtani et al. (2017) and Carlson et al. (2018).", "Unsupervised Approaches The bulk of research has focused on unsupervised approaches, which rely on labeled but non-parallel data.", "Typically, labels are assumed to be available for both source and target styles (Shen et al. 2017, Li et al. 2018, Niu et al. 2018, and many others).", "Zhao et al. (2018) explore the case where only the target style is labeled.", "The use of labels at training time can aid modeling, but limits the applicability of these methods, as labeled datasets are not readily available for many attributes of interest.", "Our work differs from the above by removing the need for training labels, and offering a single model that can target an unrestricted set of style attributes.", "Despite these differences, our work shares some similarities with past work.", "For example, our encoder-decoder architecture and corruption methods are similar to Lample et al. (2019), and we leverage a strong pretrained language model, as in Sudhakar et al. (2019) and Wu et al. (2019).", "Few-Shot Approaches A few-shot approach has recently been explored by Xu et al.
(2020).", "The authors train a variational auto-encoder on unlabeled text, where a manipulable portion of the latent representation is constrained to fall on a k-dimensional simplex.", "To perform transfer, they identify empirically the basis vector that most strongly corresponds to the target attribute, and manipulate its magnitude.", "Compared to our approach, a key difference is that the number of latent factors must be chosen ahead of time, which limits the number of attributes that may be controlled.", "Additionally, there is no guarantee that a single basis of the learned simplex will correspond to a target attribute such as dialect or politeness.", "Controlled Generation A separate strand of research explores controlled generation methods for supplementing generative language models to allow control of specific attributes of the output text.", "As with style transfer, this can be achieved either through labeled training examples, as in CTRL (Keskar et al., 2019) and PPLM (Dathathri et al., 2020), or a few-shot approach, as in CoCon (Chan et al., 2020).", "These models differ from style transfer models in that they aim to generate plausible continuations following a prompt, as opposed to transferring attributes of a fully-formed input while preserving as much content as possible.", "It is not clear if controlled generation models could be used to perform style transfer, and they have not to our knowledge been evaluated in this context.", "We have presented a unique approach to few-shot text style transfer that is competitive with systems trained with labels (an easier setting), while allowing control of how much of the input is changed.", "We demonstrate that this approach can produce a single system capable of transferring many different styles while requiring only a handful of exemplars at inference time.", "We thank Llion Jones, Rami Al-Rfou, and Daniel Gildea for helpful discussion and comments on an earlier draft." ]
[ "objective", "method", "method", "result", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "objective", "other", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "result", "result", "result", "abstain", "abstain", "result", "abstain", "result", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "other", "other", "other", "method", "other", "other", "other", "other", "method", "method", "objective", "other" ]
[ "We show that explicit pragmatic inference aids in correctly generating and following natural language instructions for complex, sequential tasks.", "Our pragmatics-enabled models reason about why speakers produce certain instructions, and about how listeners will react upon hearing them.", "Like previous pragmatic models, we use learned base listener and speaker models to build a pragmatic speaker that uses the base listener to simulate the interpretation of candidate descriptions, and a pragmatic listener that reasons counterfactually about alternative descriptions.", "We extend these models to tasks with sequential structure.", "Evaluation of language generation and interpretation shows that pragmatic inference improves state-of-the-art listener models (at correctly interpreting human instructions) and speaker models (at producing instructions correctly interpreted by humans) in diverse settings.", "How should speakers and listeners reason about each other when they communicate?", "A core insight of computational pragmatics is that speaker and listener agents operate within a cooperative game-theoretic context, and that each agent ben-efits from reasoning about others' intents and actions within that context.", "Pragmatic inference has been studied by a long line of work in linguistics, natural language processing, and cognitive science.", "In this paper, we present a technique for layering explicit pragmatic inference on top of models for complex, sequential instruction-following and instruction-generation tasks.", "We investigate a range of current data sets for both tasks, showing that pragmatic behavior arises naturally from this inference procedure, and gives rise to state-of-the-art results in a variety of domains.", "Consider the example shown in Figure 1a, in which a speaker agent must describe a route to", "a target position in a hallway.", "A conventional learned instruction-generating model produces a truthful description of the route ( walk forward four times ).", "But the pragmatic speaker in this paper, which is capable of reasoning about the listener, chooses to also include additional information ( the intersection with the bare concrete hall ), to reduce potential ambiguity and increase the odds that the listener reaches the correct destination.", "This same reasoning procedure also allows a listener agent to overcome ambiguity in instructions by reasoning counterfactually about the speaker (Figure 1b).", "Given the command walk along the blue carpet and you pass two objects , a conven-1951 tional learned instruction-following model is will-ing to consider all paths that pass two objects, and ultimately arrives at an unintended final position.", "But a pragmatic listener that reasons about the speaker can infer that the long path would have been more easily described as go to the sofa , and thus that the shorter path is probably intended.", "In these two examples, which are produced by the system we describe in this paper, a unified reasoning process (choose the output sequence which is most preferred by an embedded model of the other agent) produces pragmatic behavior for both speakers and listeners.", "The application of models with explicit pragmatic reasoning abilities has so far been largely restricted to simple reference games, in which the listener's only task is to select the right item from among a small set of candidate referents given a single short utterance from the speaker.", "But as the example shows, there are real-world instruction following and 
generation tasks with rich action spaces that might also benefit from pragmatic modeling.", "Moreover, approaches that learn to map directly between human-annotated instructions and action sequences are ultimately limited by the effectiveness of the humans themselves.", "The promise of pragmatic modeling is that we can use these same annotations to build a model with a different (and perhaps even better) mechanism for interpreting and generating instructions.", "The primary contribution of this work is to show how existing models of pragmatic reasoning can be extended to support instruction following and generation for challenging, multi-step, interactive tasks.", "Our experimental evaluation focuses on four instruction-following domains which have been studied using both semantic parsers and attentional neural models.", "We investigate the interrelated tasks of instruction following and instruction generation, and show that incorporating an explicit model of pragmatics helps in both cases.", "Reasoning about the human listener allows a speaker model to produce instructions that are easier for humans to interpret correctly in all domains (with absolute gains in accuracy ranging from 12% to 46%).", "Similarly, reasoning about the human speaker improves the accuracy of the listener models in interpreting instructions in most domains (with gains in accuracy of up to 10%).", "In all cases, the resulting systems are competitive with, and in many cases exceed, results from past state-of-the-art systems for these tasks.", "2 Problem Formulation Consider the instruction following and instruction generation tasks shown in Figure 1, where an agent must produce or interpret instructions about a structured world context (e.g. walk along the blue carpet and you pass two objects).", "In the instruction following task, a listener agent begins in a world state (in Figure 1 an initial map location and orientation).", "The agent is then tasked with following a sequence of direction sentences d_1, ..., d_K produced by humans.", "At each time t the agent receives a percept y_t, which is a feature-based representation of the current world state, and chooses an action a_t (e.g. move forward, or turn).", "The agent succeeds if it is able to reach the correct final state described by the directions.", "In the instruction generation task, the agent receives a sequence of actions a_1, ..., a_T along with the world state y_1, ..., y_T at each action, and must generate a sequence of direction sentences d_1, ..., d_K describing the actions.", "The agent succeeds if a human listener is able to correctly follow those directions to the intended final state.",
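A minimal sketch of the two task signatures as data types (all names are illustrative, not from the authors' code):

```python
from dataclasses import dataclass
from typing import List, Any

@dataclass
class FollowingInstance:
    directions: List[str]   # d_1 ... d_K, human-written sentences
    percepts: List[Any]     # y_1 ... y_T, feature views of successive world states
    actions: List[str]      # a_1 ... a_T to predict, e.g. "forward", "turn-left"

@dataclass
class GenerationInstance:
    actions: List[str]      # a_1 ... a_T to be described
    percepts: List[Any]     # y_1 ... y_T observed alongside the actions
    directions: List[str]   # reference d_1 ... d_K used for training
```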
"We evaluate models for both tasks in four domains.", "The first domain is the SAIL corpus of virtual environments and navigational directions (MacMahon et al., 2006; Chen and Mooney, 2011), where an agent navigates through a two-dimensional grid of hallways with patterned walls and floors and a discrete set of objects (Figure 1 shows a portion of one of these hallways).", "In the three SCONE domains (Long et al., 2016), the world contains a number of objects with various properties, such as colored beakers which an agent can combine, drain, and mix.", "Instructions describe how these objects should be manipulated.", "These domains were designed to elicit instructions with a variety of context-dependent language phenomena, including ellipsis and coreference (Long et al., 2016) which we might expect a model of pragmatics to help resolve (Potts, 2011).", "Pragmatics Our approach to pragmatics (Grice, 1975) belongs to a general category of rational speech acts models (Frank and Goodman, 2012), in which the interaction between speakers and listeners is modeled as a probabilistic process with Bayesian actors (Goodman and Stuhlmuller, 2013).", "Alternative formulations (e.g. with best-response rather than probabilistic dynamics) are also possible (Golland et al., 2010).", "Inference in these models is challenging even when the space of listener actions is extremely simple (Smith et al., 2013), and one of our goals in the present work is to show how this inference problem can be solved even in much richer action spaces than previously considered in computational pragmatics.", "This family of pragmatic models captures a number of important linguistic phenomena, especially those involving conversational implicature (Monroe and Potts, 2015); we note that many other topics studied under the broad heading of pragmatics, including presupposition and indexicality, require different machinery.", "Williams et al. (2015) use pragmatic reasoning with weighted inference rules to resolve ambiguity and generate clarification requests in a human-robot dialog task.", "Other recent work on pragmatic models focuses on the referring expression generation or contrastive captioning task introduced by Kazemzadeh et al. (2014).", "In this family are approaches that model the listener at training time (Mao et al., 2016), at evaluation time (Andreas and Klein, 2016; Monroe et al., 2017; Vedantam et al., 2017; Su et al., 2017) or both (Yu et al., 2017b; Luo and Shakhnarovich, 2017).", "Other conditional sequence rescoring models that are structurally similar but motivated by concerns other than pragmatics include Li et al. (2016) and Yu et al. (2017a).", "Lewis et al. (2017) perform a similar inference procedure for a competitive negotiation task.", "The language learning model of Wang et al.
(2016) also features a structured output space and uses pragmatics to improve online predictions for a semantic parsing model.", "Our approach in this paper performs both generation and interpretation, and investigates both structured and unstructured output representations.", "Instruction following Work on instruction following tasks includes models that parse commands into structured representations processed by a rich execution model (Tellex et al., 2011; Chen, 2012; Artzi and Zettlemoyer, 2013; Guu et al., 2017), and models that map directly from instructions to a policy over primitive actions (Branavan et al., 2009), possibly mediated by an intermediate alignment or attention variable (Andreas and Klein, 2015; Mei et al., 2016).", "We use a model similar to Mei et al. (2016) as our base listener in this paper, evaluating on the SAIL navigation task (MacMahon et al., 2006) as they did, as well as the SCONE context-dependent execution domains (Long et al., 2016).", "Instruction generation Previous work has also investigated the instruction generation task, in particular for navigational directions.", "The GIVE shared tasks (Byron et al., 2009; Koller et al., 2010; Striegnitz et al., 2011) have produced a large number of interactive direction-giving systems, both rule-based and learned.", "The work most immediately related to the generation task in this paper is that of Daniele et al. (2017), which also focuses on the SAIL dataset but requires substantial additional structured annotation for training, while both our base and pragmatic speaker models learn directly from strings and action sequences.", "Older work has studied the properties of effective human strategies for generating navigational directions (Anderson et al., 1991).", "Instructions of this kind can be used to extract templates for generation (Look, 2008; Dale et al., 2005), while here we focus on the more challenging problem of learning to generate new instructions from scratch.", "Like our pragmatic speaker model, Goeddel and Olson (2012) also reason about listener behavior when generating navigational instructions, but rely on rule-based models for interpretation.", "As a foundation for pragmatic inference, we assume that we have base listener and speaker models to map directions to actions and vice-versa.", "(Our notation for referring to models is adapted from Bergen et al. (2016).)
"The base listener, L_0, produces a probability distribution over sequences of actions, conditioned on a representation of the directions and environment as seen before each action: P_{L0}(a_{1:T} | d_{1:K}, y_{1:T}).", "Similarly, the base speaker, S_0, defines a distribution over possible descriptions conditioned on a representation of the actions and environment: P_{S0}(d_{1:K} | a_{1:T}, y_{1:T}).", "Our pragmatic inference procedure requires these base models to produce candidate outputs from a given input (actions from descriptions, for the listener; descriptions from actions, for the speaker), and to calculate the probability of a fixed output given an input, but is otherwise agnostic to the form of the models.", "Figure 2: (a) Rational pragmatic models embed base listeners and speakers.", "Potential candidate sequences are drawn from one base model, and then the other scores each candidate to simulate whether it produces the desired pragmatic behavior.", "(b) The base listener and speaker are neural sequence-to-sequence models which are largely symmetric to each other.", "Each produces a representation of its input sequence (a description, for the listener; actions with associated environmental percepts, for the speaker) using an LSTM encoder.", "The output sequence is generated by an LSTM decoder attending to the input.", "We use standard sequence-to-sequence models with attention for both the base listener and speaker (described in Section 5).", "Our models use segmented action sequences, with one segment (sub-sequence of actions) aligned with each description sentence d_j, for all j ∈ {1, ..., K}.",
"This segmentation is either given as part of the training and testing data (in the instruction following task for the SAIL domain, and in both tasks for the SCONE domain, where each sentence corresponds to a single action), or is predicted by a separate segmentation model (in the generation task for the SAIL domain); see Section 5.", "Using these base models as self-contained modules, we derive a rational speaker and rational listener that perform inference using embedded instances of these base models (Figure 2a).", "When describing an action sequence, a rational speaker S_1 chooses a description that has a high chance of causing the listener modeled by L_0 to follow the given actions: S_1(a_{1:T}) = argmax_{d_{1:K}} P_{L0}(a_{1:T} | d_{1:K}, y_{1:T}) (1) (noting that, in all settings we explore here, the percepts y_{1:T} are completely determined by the actions a_{1:T}).", "Conversely, a rational listener L_1 follows a description by choosing an action sequence which has high probability of having caused the speaker, modeled by S_0, to produce the description: L_1(d_{1:K}) = argmax_{a_{1:T}} P_{S0}(d_{1:K} | a_{1:T}, y_{1:T}) (2).", "These optimization problems are intractable to solve for general base listener and speaker agents, including the sequence-to-sequence models we use, as they involve choosing an input (from a combinatorially large space of possible sequences) to maximize the probability of a fixed output sequence.", "We instead follow a simple approximate inference procedure, detailed in Section 4.2.", "We also consider incorporating the scores of the base model used to produce the candidates.", "For the case of the speaker, we define a combined rational speaker, denoted S_0·S_1, that selects the candidate that maximizes a weighted product of probabilities under both the base listener and the base speaker: argmax_{d_{1:K}} P_{L0}(a_{1:T} | d_{1:K}, y_{1:T})^λ · P_{S0}(d_{1:K} | a_{1:T}, y_{1:T})^{1-λ} (3) for a fixed interpolation hyperparameter λ ∈ [0, 1].", "There are several motivations for this combination with the base speaker score.", "First, as argued by Monroe et al. (2017), we would expect varying degrees of base and reasoned interpretation in human speech acts.", "Second, we want the descriptions produced by the model to be fluent descriptions of the actions.", "Since the base models are trained discriminatively, maximizing the probability of an output sequence for a fixed input sequence, their scoring behaviors for fixed outputs paired with inputs dissimilar to those seen in the training set may be poorly calibrated (for example, when conditioning on ungrammatical descriptions).", "Incorporating the scores of the base model used to produce the candidates aims to prevent this behavior.", "To define rational listeners, we use the symmetric formulation: first, draw candidate action sequences from L_0.", "For L_1, choose the actions that achieve the highest probability under S_0; and for the combination model L_0·L_1, choose the actions with the highest weighted combination of S_0 and L_0 (paralleling equation 3).", "As in past work (Smith et al., 2013; Andreas and Klein, 2016; Monroe et al., 2017), we approximate the optimization problems in equations 1, 2, and 3: use the base models to generate candidates, and rescore them to find ones that are likely to produce the desired behavior.", "In the case of the rational speaker S_1, we use the base speaker S_0 to produce a set of n candidate descriptions w^{(1)}, ..., w^{(n)} for the sequences a_{1:T}, y_{1:T}, using beam search.", "We then find the score of each description under P_{L0} (using it as the input sequence for the observed output actions we want the rational speaker to describe), or a weighted combination of P_{L0} and the original candidate score P_{S0}, and choose the description w^{(j)} with the largest score, approximately solving the maximizations in equations 1 or 3, respectively.", "We perform a symmetric procedure for the rational listener: produce action sequence candidates from the base listener, and rescore them using the base speaker.",
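A minimal sketch of this generate-and-rescore inference for the combined rational speaker of equation 3, assuming base models that expose beam search and conditional log-probability scoring (all names are illustrative):

```python
def rational_speaker(actions, percepts, base_speaker, base_listener, lam=0.5, n=20):
    # Eq. 3, approximately: draw n candidate descriptions from the base speaker,
    # then pick the one that best makes the base listener recover the actions,
    # interpolated with the base speaker's own score (lam=1.0 recovers eq. 1).
    candidates = base_speaker.beam_search(actions, percepts, beam_size=n)
    def score(d):
        return (lam * base_listener.log_prob(actions, directions=d, percepts=percepts)
                + (1.0 - lam) * base_speaker.log_prob(d, actions=actions, percepts=percepts))
    return max(candidates, key=score)
```

The rational listener is symmetric: generate action-sequence candidates from the base listener and rescore them under the base speaker.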
"As the rational speaker must produce long output sequences (with multiple sentences), we interleave the speaker and listener in inference, determining each output sentence sequentially.", "From a list of candidate direction sentences from the base speaker for the current subsequence of actions, we choose the top-scoring direction under the listener model (which may also condition on the directions which have been output previously), and then move on to the next subsequence of actions.", "(We also experimented with sampling from the base models to produce these candidate lists, as was done in previous work (Andreas and Klein, 2016; Monroe et al., 2017).", "In early experiments, however, we found better performance with beam search in the rational models for all tasks.)", "(We use ensembles of models for the base listener and speaker (subsection 5.3), and to obtain candidates that are high-scoring under the combination of models in the ensemble, we perform standard beam search using all models in lock-step.", "At every timestep of the beam search, each possible extension of an output sequence is scored using the product of the extension's conditional probabilities across all models in the ensemble.)",
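A minimal sketch of that lock-step ensemble scoring step, assuming each member exposes per-token log-probabilities (names are illustrative):

```python
import numpy as np

def ensemble_extension_scores(models, prefix, inputs):
    # Score every one-token extension of a beam hypothesis by the product of
    # its conditional probabilities across ensemble members, i.e. the sum of
    # their log-probabilities over the vocabulary.
    return np.sum([m.next_token_log_probs(prefix, inputs) for m in models], axis=0)
```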
"5 Base model details Given this framework, all that remains is to describe the base models L_0 and S_0.", "We implement these as sequence-to-sequence models that map directions to actions (for the listener) or actions to directions (for the speaker), additionally conditioning on the world state at each timestep.", "Our base listener model, L_0, predicts action sequences conditioned on an encoded representation of the directions and the current world state.", "In the SAIL domain, this is the model of Mei et al. (2016) (illustrated in green in Figure 2b for a single sentence and its associated actions); see domain specifics below.", "Encoder Each direction sentence is encoded separately with a bidirectional LSTM (Hochreiter and Schmidhuber, 1997); the LSTM's hidden states are reset for each sentence.", "We obtain a representation h^e_k for the k-th word in the current sentence by concatenating an embedding for the word with its forward and backward LSTM outputs.", "Decoder We generate actions incrementally using an LSTM decoder with monotonic alignment between the direction sentences and subsequences of actions; at each timestep the decoder predicts the next action for the current sentence w_{1:M} (including choosing to shift to the next sentence).", "The decoder takes as input at timestep t the current world state y_t and a representation z_t of the current sentence, updates the decoder state h^d, and outputs a distribution over possible actions: h^d_t = LSTM^d(h^d_{t-1}, [W_y y_t, z_t]); q_t = W_o(W_y y_t + W_h h^d_t + W_z z_t); p(a_t | a_{1:t-1}, y_{1:t}, w_{1:M}) ∝ exp(q_t), where all weight matrices W are learned parameters.", "The sentence representation z_t is produced using an attention mechanism (Bahdanau et al., 2015) over the representation vectors h^e_1, ..., h^e_M: α_{t,k} ∝ exp(v^T tanh(W_d h^d_{t-1} + W_e h^e_k)); z_t = Σ_{k=1}^{M} α_{t,k} h^e_k, where the attention weights α_{t,k} are normalized to sum to one across positions k in the input, and weight matrices W and vector v are learned.", "Domain specifics For SAIL, we use the alignments between sentences and route segments annotated by Chen and Mooney (2011), which were also used in previous work (Artzi and Zettlemoyer, 2013; Artzi et al., 2014; Mei et al., 2016).", "Following Mei et al. (2016), we reset the decoder's hidden state for each sentence.", "In the SCONE domains, which have a larger space of possible outputs than SAIL, we extend the decoder by:", "(i) decomposing each action into an action type and arguments for it,", "(ii) using separate attention mechanisms for types and arguments, and", "(iii) using state-dependent action embeddings.", "See Appendix A in the supplemental material for details.", "The SCONE domains are constructed so that each sentence corresponds to a single (non-decomposed) action; this provides our segmentation of the action sequence.", "While previous work (Daniele et al., 2017) has relied on more structured approaches, we construct our base speaker model S_0 using largely the same sequence-to-sequence machinery as above.", "S_0 (illustrated in orange in Figure 2b) encodes a sequence of actions and world states, and then uses a decoder to output a description.", "Encoder We encode the sequence of vector embeddings for the actions a_t and world states y_t using a bidirectional LSTM.", "Similar to the base listener's encoder, we then obtain a representation h^e_t for timestep t by concatenating a_t and y_t with the LSTM outputs at that position.", "Decoder As in the listener, we use an LSTM decoder with monotonic alignment between direction sentences and subsequences of actions, and attention over the subsequences of actions.", "The decoder takes as input at position k an embedding for the previously generated word w_{k-1} and a representation z_k of the current subsequence of actions and world states, and produces a distribution over words (including ending the description for the current subsequence and advancing to the next).", "The decoder's output distribution is produced by: h^d_k = LSTM^d(h^d_{k-1}, [w_{k-1}, z_k]); q_k = W_h h^d_k + W_z z_k; p(w_k | w_{1:k-1}, a_{1:T}, y_{1:T}) ∝ exp(q_k), where all weight matrices W are learned parameters.", "(All parameters are distinct from those used in the base listener; the listener and speaker are trained separately.)", "As in the base listener, the input representation z_k is produced by attending to the vectors h^e_1, ..., h^e_T encoding the input sequence (here, encoding the subsequence of actions and world states to be described): α_{k,t} ∝ exp(v^T tanh(W_d h^d_{k-1} + W_e h^e_t)); z_k = Σ_{t=1}^{T} α_{k,t} h^e_t.", "The decoder's LSTM state is reset at the beginning of each sentence.",
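A minimal numpy sketch of the attention step shared by both decoders (the parameter matrices W_d, W_e and vector v are assumed given; illustrative only):

```python
import numpy as np

def attend(h_dec_prev, h_enc, W_d, W_e, v):
    # h_enc: (M, enc_dim) encoder vectors; h_dec_prev: (dec_dim,) previous decoder state.
    scores = np.tanh(h_dec_prev @ W_d.T + h_enc @ W_e.T) @ v  # (M,) unnormalized
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()          # attention weights sum to one over positions
    return alpha @ h_enc          # context vector z
```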
"Domain specifics In SAIL, for comparison to the generation system of Daniele et al. (2017), which did not use segmented routes, we train a route segmenter for use at test time.", "We also represent routes using a collapsed representation of action sequences.", "In the SCONE domains, we", "(i) use the same context-dependent action embeddings used in the listener, and", "(ii) don't require an attention mechanism, since only a single action is used to produce a given sentence within the sequence of direction sentences.", "See Appendix A for more details.", "The base listener and speaker models are trained independently to maximize the conditional likelihoods of the actions-directions pairs in the training sets.", "See Appendix A for details on the optimization, LSTM variant, and hyperparameters.", "We use ensembles for the base listener L_0 and base speaker S_0, where each ensemble consists of 10 models trained from separate random parameter initializations.", "This follows the experimental setup of Mei et al. (2016) for the SAIL base listener.", "We evaluate speaker and listener agents on both the instruction following and instruction generation tasks in the SAIL domain and three SCONE domains (Section 2).", "For all domains, we compare the rational listener and speaker against the base listener and speaker, as well as against past state-of-the-art results for each task and domain.", "Finally, we examine pragmatic inference from a model combination perspective, comparing the pragmatic reranking procedure to ensembles of a larger number of base speakers or listeners.", "For all experiments, we use beam search both to generate candidate lists for the rational systems (section 4.2) and to generate the base model's output.", "We fix the beam size n to be the same in both the base and rational systems, using n = 20 for the speakers and n = 40 for the listeners.", "We tune the weight λ in the combined rational agents (L_0·L_1 or S_0·S_1) to maximize accuracy (for listener models) or BLEU (for speaker models) on each domain's development data.", "We evaluate our listener models by their accuracy in carrying out human instructions: whether the systems were able to reach the final world state which the human was tasked with guiding them to.", "SAIL We follow standard cross-validation evaluation for the instruction following task on the SAIL dataset (Artzi and Zettlemoyer, 2013; Artzi et al., 2014; Mei et al., 2016).", "Table 2: Instruction-following results in the SCONE domains.
listener        Alchemy  Scene  Tangrams
GPLL            52.9     46.2   37.3
L_0             69.7     70.9   69.6
L_0·L_1         72.0     72.7   69.6
accuracy gain   +2.3     +1.8   +0.0", "Figure 3: Action traces produced for a partial instruction sequence (two instructions out of five: a red guy appears on the far left / then to orange's other side) in the Scene domain, comparing the base listener L_0 with the rational listener L_0·L_1.", "Table 1 shows improvements over the base listener L_0 when using the rational listener L_0·L_1 in the single- and multi-sentence settings.", "We also report the best accuracies from past work.", "We see that the largest relative gains come in the multi-sentence setting, where handling ambiguity is potentially more important to avoid compounding errors.", "The rational model improves on the published results of Mei et al. (2016), and while it is still below the systems of Artzi and Zettlemoyer (2013) and Artzi et al.
(2014), which use additional supervision in the form of hand-annotated seed lexicons and logical domain representations, it approaches their results in the single-sentence setting.", "(Past work has differed in the handling of undetermined orientations in the routes, which occur in the first state for multi-sentence routes and the first segment of their corresponding single-sentence routes.", "For comparison to both types of past work, we train and evaluate listeners in two settings: Abs, which sets these undetermined starting orientations to be a fixed absolute orientation, and Rel, where an undetermined starting orientation is set to be a 90 degree rotation from the next state in the true route.)", "SCONE Past work trained on pairs of start and end world states (with no intermediate actions between start and end world states) on a subset of the full SCONE training data.", "We use the full training set, and to use a model and training procedure consistent with the SAIL setting, train listener and speaker models using the intermediate actions as supervision as well.", "(Since the pragmatic inference procedure we use is agnostic to the models' training method, it could also be applied to the models of Guu et al. (2017); however, we find that pragmatic inference can improve even upon our stronger base listener models.)", "The evaluation method and test data are the same as in past work on SCONE: models are provided with an initial world state and a sequence of 5 instructions to carry out, and are evaluated on their accuracy in reaching the intended final world state.", "Results are reported in Table 2.", "We see gains from the rational system L_0·L_1 in both the Alchemy and Scene domains.", "The pragmatic inference procedure allows correcting errors or overly-literal interpretations from the base listener.", "An example is shown in Figure 3.", "The base listener (left) interprets then to orange's other side incorrectly, while the rational listener discounts this interpretation (it could, for example, be better described by to the left of blue) and produces the action the descriptions were meant to describe (right).", "To the extent that human annotators already account for pragmatic effects when generating instructions, examples like these suggest that our model's explicit reasoning is able to capture interpretation behavior that the base sequence-to-sequence listener model is unable to model.", "As our primary evaluation for the instruction generation task, we had Mechanical Turk workers carry out directions produced by the speaker models (and by other humans) in a simulated version of each domain.", "For SAIL, we use the simulator released by Daniele et al. (2017) which was used in their human evaluation results, and we construct simulators for the three SCONE domains.", "In all settings, we take a sample of 50 action sequences from the domain's test set (using the same sample as Daniele et al. (2017) for SAIL), and have three separate Turk workers attempt to follow the systems' directions for the action sequence.", "Table 3 gives the average accuracy of subjects in reaching the intended final world state across all sampled test instances, for each domain.", "The human-generated row reports subjects' accuracy at following the datasets' reference directions.", "The directions produced by the base speaker S_0 are often much harder to follow than those produced by humans (e.g. 29.3% of S_0's directions are correctly interpretable for Alchemy, vs. 83.3% of human directions).",
"However, we see substantial gains from the rational speaker S_0·S_1 over S_0 in all cases (with absolute gains in accuracy ranging from 12.4% to 46.0%), and the average accuracy of humans at following the rational speaker's directions is substantially higher than for human-produced directions in the Tangrams domain.", "In the SAIL evaluation, we also include the directions produced by the system of Daniele et al. (2017) (DBW), and find that the rational speaker's directions are followable to comparable accuracy.", "We also evaluate the speakers' outputs using BLEU (Papineni et al., 2002) in Table 4.", "(See Appendix A for details on evaluating BLEU in the SAIL setting, where there may be a different number of reference and predicted sentences for a given example.)", "Consistent with past work (Krahmer and Theune, 2010), we find that BLEU score is a poor indicator of whether the directions can be correctly followed.", "Qualitatively, the rational inference procedure is most successful in fixing ambiguities in the base speaker model's descriptions.", "Figure 4 gives a typical example of this for the last few timesteps from a Tangrams instance.", "The base speaker correctly describes that the shape should be added back, but does not specify where to add it, which could lead a listener to add it in the same position it was deleted.", "The human speaker also makes this mistake in their description.", "This speaks to the difficulty of describing complex actions pragmatically even for humans in the Tangrams domain.", "The ability of the pragmatic speaker to produce directions that are easier to follow than humans' in this domain (Table 3) shows that the pragmatic model can generate something different (and in some cases better) than the training data.", "Finally, our rational models can be viewed as pragmatically-motivated model combinations, producing candidates using base listener or speaker models and reranking using a combination of scores from both.", "We want to verify that a rational listener using n ensembled base listeners and n base speakers outperforms a simple ensemble of 2n base listeners (and similarly for the rational speaker).", "Fixing the total number of models to 20 in each listener experiment, we find that the rational listener (using an ensemble of 10 base listener models and 10 base speaker models) still substantially outperforms the ensembled base listener (using 20 base listener models): accuracy gains are 68.5 → 71.6%, 70.1 → 72.0%, 71.9 → 72.7%, and 69.1 → 69.6% for SAIL single-sentence Rel, Alchemy, Scene, and Tangrams, respectively.", "For the speaker experiments, fixing the total number of models to 10 (since inference in the speaker models is more expensive than in the follower models), we find similar gains as well: the rational speaker improves human accuracy at following the generated instructions from 61.9 → 73.4%, 30.7 → 74.7%, 32.0 → 66.0%, and 58.7 → 92.7%, for SAIL, Alchemy, Scene, and Tangrams, respectively.", "(The accuracies for the base speakers are slightly different than in Table 3, despite being produced by the same systems, since we reran experiments to control as much as possible for time variation in the pool of Mechanical Turk workers.)", "7 Conclusion We have demonstrated that a simple procedure for pragmatic inference, with a unified treatment for speakers and listeners, obtains improvements for instruction following as well as instruction generation in multiple settings.", "The inference procedure is capable of reasoning about sequential, interdependent actions in non-trivial world contexts.", "We find that pragmatics improves upon the performance of the base models for both tasks, in most cases substantially.", "While this is perhaps unsurprising for the generation task, which has been discussed from a pragmatic perspective in a variety of recent work in NLP, it is encouraging that pragmatic reasoning can also improve performance for a grounded listening task with sequential, structured output spaces.",
"We are grateful to Andrea Daniele for sharing the SAIL simulator and their system's outputs, to Hongyuan Mei for help with the dataset, and to Tom Griffiths and Chris Potts for helpful comments and discussion.", "This work was supported by DARPA through the Explainable Artificial Intelligence (XAI) program.", "DF is supported by a Huawei / Berkeley AI fellowship.", "JA is supported by a Facebook graduate fellowship." ]
[ "result", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "other", "other", "other", "other", "other", "other", "objective", "other", "method", "other", "other", "method", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "other", "other", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other" ]
[ "Neural models have shown impressive performance gains in answering queries from natural language text.", "However, existing works are unable to support database queries, such as List/Count all female athletes who were born in 20th century , which require reasoning over sets of relevant facts with operations such as join, filtering and aggregation.", "We show that while state-of-the-art transformer models perform very well for small databases, they exhibit limitations in processing noisy data, numerical operations, and queries that aggregate facts.", "We propose a modular architecture to answer these database-style queries over multiple spans from text and aggregating these at scale.", "We evaluate the architecture using WIKINLDB, 1 a novel dataset for exploring such queries.", "Our architecture scales to databases containing thousands of facts whereas contemporary models are limited by how many facts can be encoded.", "In direct comparison on small databases, our approach increases overall answer accuracy from 85% to 90%.", "On larger databases, our approach retains its accuracy whereas transformer baselines could not encode the context.", "Question answering (QA) over text has made significant strides in recent years owing to the availability of new datasets and models.", "Machines have surpassed human performance on the well-known SQUaD task (Rajpurkar et al., 2016) where models extract answer spans from a short passage of text.", "The subsequent body of work has further considered incorporating retrieval from large corpora such as Wikipedia (Dhingra et al., 2017; Joshi et al., 2017; Kwiatkowski et al., 2019) to identify relevant information, conditioning answer generation (Chen 1 https://github.com/facebookresearch/ NeuralDB Facts: (8 of 500 shown) Queries: Nicholas lives in Washington D.C. with his wife.", "Sheryl is Nicholas's wife.", "Teuvo was born in 1912 in Ruskala.", "Sheryl's mother gave birth to her in 1978.", "Nicholas is a doctor.", "Sarah was born in Chicago in 1982.", "Sarah married John in 2010.", "Sarah works in a hospital in NY as a doctor.", "et al., 2017; Lewis et al., 2020b; Izacard and Grave, 2020).", "More sophisticated architectures have been proposed with incremental retrieval for multi-hop QA (Xiong et al., 2020; Das et al., 2019), where several passages are required, which may have low lexical or semantic similarity with the question.", "This paper considers the problem of answering questions similar to database queries, such as those shown in Figure", "1. 
"For example, the query List all the female athletes in Wikipedia who were born in the 20th century requires reasoning over hundreds or thousands of facts, retrieved from multiple Wikipedia pages, and applying set-based filters to them (e.g., gender, birth date).", "If our query further asked how many such athletes exist, we would have to perform an aggregation function to count the result set.", "The ability to answer the aforementioned queries would enable a new kind of database (Thorne et al., 2021) where facts can be described in natural language and would therefore obviate the need for a pre-defined schema, which is a major limitation of current database systems.", "An example application for such flexible text databases exists in the area of storing knowledge for personal assistants where users store data about their habits and experiences, their friends and their preferences, for which designing a schema is impractical.", "We introduce WIKINLDB, a benchmark dataset for exploring database reasoning over facts expressed in natural language.", "WIKINLDB contains a number of query types that require systems to return large set-based answers and aggregate over these (with operators such as count, min, and max).", "Our dataset is generated using publicly available knowledge graph data, enabling large volumes of instances to be generated with minimal effort.", "Most queries in WIKINLDB require reasoning over hundreds of facts to generate answers, exposing limitations in current neural models.", "In contrast to DROP (Dua et al., 2019), where queries are answered over single passages, and bAbI (Weston et al., 2015), where each query is based on a context of fewer than 20 facts, our dataset scales from databases of 25 instances to 1000, and could be extended further.", "We also introduce a modular architecture to support database reasoning over text and characterize its behavior on our reference dataset.", "We find that even on small databases of 25 facts, naive application of transformers is insufficient.", "When provided with only the relevant facts, the baseline yields an answer accuracy of 85%, whereas applying our proposed architecture yields 90% by better answering queries, such as count, that require computation.", "It is well known that transformer models do not scale well to large inputs due to the use of self-attention.", "We found that mechanisms such as Fusion in Decoder (Izacard and Grave, 2020, FiD) and Longformer (Beltagy et al., 2020), which mitigate the scaling issue, harm the model: combining more than 2 facts with FiD and Longformer resulted in answer accuracies of 76% and 39%, respectively.", "These issues were mitigated by our approach which generates intermediate query-based derivations of small numbers of facts in the database, before using conventional computation to aggregate the results.", "We refer to corpora that consist of unordered collections of facts expressed as short natural language sentences as Natural Language Databases (NLDBs).", "For example, a corpus may include all the utterances given to a personal assistant by its user, or all the claims uttered by a political figure.", "The texts in our corpora are similar to databases as they are sets of stand-alone facts.", "But unlike a database, they are not expressed as rows or triples in a pre-defined schema.", "For example, a sentence may contain a single fact, Gustavo likes espresso, or multiple facts, such as Robertson Howard, who attended the University of Virginia, is buried in the Congressional Cemetery.", "A query Q over a database D produces a set of answers: Q(D) = {a_1, ..., a_l}.",
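As a toy illustration of this semantics (the facts and keyword matching below are invented for illustration; the actual systems infer relevance with neural models rather than string tests):

```python
db = [
    "Nicholas is a doctor.",
    "Sheryl is Nicholas's wife.",
    "Sarah works in a hospital in NY as a doctor.",
    "Sarah married John in 2010.",
]

# A set query: Q(D) = {a_1, ..., a_l}
doctors = {fact.split()[0] for fact in db if "doctor" in fact}
print(doctors)       # {'Nicholas', 'Sarah'}

# An aggregation (count) query over the same result set
print(len(doctors))  # 2
```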
"We consider the following four query types (see examples in Table 5): (1) Set queries are extractive queries that return a list of spans, such as entities, from the facts.", "(2) Boolean queries return a True/False answer.", "(3) Aggregation queries require computation over answer sets with an operator, such as count, min and max.", "(For example: How many people work for Yale Law School?)", "(4) Join queries require the combination of two (or more) facts to produce each answer.", "We combine join operations with set, Boolean and aggregation queries.", "For example, the query Who works in a company in France? considers both the relationship between people and employer as well as company locations.", "The NLP treatment of question answering, where systems encode the query and context (containing the background knowledge), forms a good starting point for NLDBs.", "Common model architectures are based on the transformer (Vaswani et al., 2017) in an encoder-decoder configuration.", "The encoder uses self-attention to conditionally encode the context with the query and the decoder allows conditional generation of outputs that are not necessarily present in the input.", "To scale question answering to reason over large knowledge-sources such as Wikipedia, task formulations typically retrieve text-spans from a corpus to condition answer generation (Chen et al., 2017; Dhingra et al., 2017).", "However, several challenges encountered in NLDBs preclude direct application of these techniques:", "Figure 2: Overview of the architecture for the query How many peoples' spouses are doctors?: a Support Set Generator selects support sets of relevant facts (e.g. { Sarah is a doctor, Sarah married John }), and a Neural SPJ operator produces a query-based derivation from each support set (e.g. John, or NULL where a set does not contribute an answer).", "Question answering systems combine a retrieval mechanism to select relevant spans from knowledge sources as context.", "This task is usually referred to as open-domain QA (Lewis et al., 2020a; Izacard and Grave, 2020).", "It is common to use a maximum input size of 512 or 1024 tokens for context.", "While extensions such as Linformer (Wang et al., 2020), Longformer (Beltagy et al., 2020) and Fusion in Decoder (Izacard and Grave, 2020) enable larger contexts to be encoded, their application of self-attention varies and the number of tokens that may be encoded is limited by GPU memory.", "Multiple answer spans The NLP formulation of question answering typically requires extracting a span from a single document or generating a short answer.", "Answering queries in an NLDB may require processing a large number of facts, generating a large number of items as the answer, hundreds or thousands, and performing aggregations over large sets.", "Locality and document structure NLDBs do not enjoy the locality properties that usually hold in open-domain QA.", "In NLDBs, a query may be dependent on multiple facts that can be anywhere in the database.", "In fact, by definition, the current facts in a database can be reordered and the query answers should not change.", "In contrast, in open-domain QA, the fact needed to answer a given question is typically located in a paragraph or document with multiple sentences about the same subject, in combination with a document title, where this additional context may help information recall.", "Conditional retrieval Rather than using a single round of retrieval to select facts to input to the model, NLDBs may require conditional retrieval from the database.", "For example, to answer the query Whose spouse is a doctor? we'd first need to fetch spouses and then their professions.",
"Recent work on multi-hop query answering (e.g., Asai et al. (2019)) has started considering this issue, but is restricted to the case where we're looking for a single answer.", "In NLDBs, we may need to perform multiple hops for sets of facts.", "To address the aforementioned challenges, we propose an instance of a Neural Database architecture (Thorne et al., 2021) that operates over textual facts with parallelizable non-blocking operators before aggregating the results.", "The three core components of the architecture, shown in Figure 2, are a Support Set Generator (SSG) which retrieves small sets of relevant facts called support sets, a parallelizable non-blocking Select-Project-Join (SPJ) operator which generates intermediate answers that can be unioned to produce the final answer, and an optional aggregation stage which uses conventional computation to perform numerical reasoning.", "The key insight underlying our architecture is to leverage neural models for what they excel at, namely, reasoning over a small set of facts.", "Neural SPJ Operator Given a single support set and a query, the SPJ (Select-Project-Join) operator outputs a machine-readable intermediate representation of the answer that can be generated from the support set.", "For example, given the query Who was born in Montevideo? and the support set { Mario Sagario was born in Montevideo, Uruguay, ... }, the Neural SPJ would output the entity literal Mario Sagario.", "Examples of outputs are provided in Figure 3.", "The SPJ operator is performing three functions: (1) for support sets that are insufficient to answer a question, the operator should return no output; (2) for queries that require short chains of reasoning over multiple facts, the SPJ operator joins the facts when generating the output; and (3) the SPJ generates a projection of the support set to a machine-readable format dependent on the given query, and whether computation or aggregation is required.", "Because the SPJ operator is run in parallel, it can scale independently of the limitations on the size of the input of a single transformer.", "In contrast, the use of self-attention when encoding all facts as one input precludes parallelization, has high latency, and is limited by the memory required to compute the self-attention.", "By using the SPJ operator to perform query-dependent information extraction, aggregations can be performed over the generated outputs using conventional computation, which trivially scales to thousands of operands.", "Furthermore, this allows large result sets to be generated by the model, whereas accurately decoding long sequences using an encoder-decoder architecture remains an open challenge (Hupkes et al., 2020).", "Support Set Generator (SSG) A support set contains the minimal subset of sentences from the database needed to generate one single operand for the aggregation module by the SPJ operator.", "For example, for queries that are answered by a single sentence, e.g., Who is Sheryl's husband?, the support set containing a single fact should be returned, e.g., { Sheryl is Nicholas's spouse }.",
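A minimal sketch of how these stages compose (the SSG properties and aggregation operators are detailed below; ssg and spj stand in for trained models, and all names and the None convention are illustrative):

```python
def answer(query, facts, ssg, spj):
    # Stage 1: retrieve small support sets of relevant facts.
    support_sets = ssg.generate(query, facts)
    # Stage 2: run the Neural SPJ over each support set independently
    # (parallelizable); irrelevant sets yield no output.
    derivations = (spj.derive(query, s) for s in support_sets)
    results = [d for d in derivations if d is not None]
    # Stage 3: union the intermediate answers and, if needed, aggregate
    # with conventional computation (e.g. count).
    if query.lower().startswith("how many"):
        return len(set(results))
    return set(results)
```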
"The output of the support set generator is a set of support sets, each of which is fed independently to a downstream SPJ module.", "Support sets may not be pairwise disjoint because some facts may be required for multiple answers.", "The SSG output should satisfy the following two properties: (1) If multiple facts are needed to produce an intermediate answer, they should all be in the support set.", "For example, if we queried When was Sheryl's husband born?, the support set should include a fact stating who the spouse is and a fact describing when they were born.", "(2) When performing aggregation, or outputting a set of answers, multiple support sets must be generated, each containing enough information to generate the intermediate results that are aggregated.", "For example, for the query Who is the oldest person?, each of the support sets would independently contain a fact that includes a person and indicates their age.", "Aggregation The outputs of the SPJ modules are intermediate answers to the query.", "For some queries, e.g., who lives in London?, the final answer is simply the union of the intermediate answers.", "In other cases, e.g., how many countries grow coffee?, an aggregation operator needs to be applied to the union of intermediate answers.", "Because the outputs of the SPJ operators are machine readable, we can guarantee accuracy and scalability by performing aggregation using conventional computation.", "In this paper, we consider the aggregation functions min, max and count.", "In this section we introduce WIKINLDB, a novel dataset for training NLDBs which is generated by transforming structured data from Wikidata (Vrandečić and Krötzsch, 2014) into natural language facts and queries.", "Wikidata stores triples of the form (S, R, O), where R is a relationship between the subject S and the object O, e.g., (Tim Cook, employedBy, Apple).", "The scale and breadth of Wikidata enables us to generate databases of many sizes and variety.", "Facts To automate generation of questions and answers, sentences must be grounded in Wikidata identifiers.", "One approach to generate facts would be to use templates or collect them through grounded information extraction datasets such as T-REx (Elsahar et al., 2018).", "However, to ensure wider linguistic variety as well as accuracy of the mapping, we use verbalizations of knowledge graph triples that are synthesized through a sequence-to-sequence model.", "Concretely, we use generated sentences from KELM (Agarwal et al., 2020), which are not grounded with Wikidata IDs, and generate a post-hoc mapping back to Wikidata.", "For example, given the sentence The Slice of Life manga series The Film Lives On was written by Osamu Tezuka., we map it to the Wikidata triple (Q11332517, P50, Q193300).", "Our mapping is a two-step process: firstly, we look up entity names from Wikipedia, returning multiple matches for Osamu Tezuka, and secondly filter these based on which have an author relation to The Slice of Life in the Wikidata graph.",
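A minimal sketch of this two-step grounding (the name_index and kg lookup helpers are hypothetical stand-ins for a Wikipedia name index and the Wikidata graph):

```python
def ground(subject_text, relation_id, object_text, name_index, kg):
    # Step 1: name lookup may return several candidate entities
    # (e.g. multiple matches for "Osamu Tezuka").
    subj_candidates = name_index.lookup(subject_text)
    obj_candidates = name_index.lookup(object_text)
    # Step 2: keep the candidate pair connected by the target relation
    # (e.g. P50, "author") in the knowledge graph.
    for s in subj_candidates:
        for o in obj_candidates:
            if kg.has_edge(s, relation_id, o):
                return (s, relation_id, o)
    return None  # the fact cannot be grounded
```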
"In this section we introduce WIKINLDB, a novel dataset for training NLDBs which is generated by transforming structured data from Wikidata (Vrandečić and Krötzsch, 2014) into natural language facts and queries.", "Wikidata stores triples of the form (S, R, O) , where R is a relationship between the subject S and the object O , e.g., (Tim Cook, employedBy, Apple) .", "The scale and breadth of Wikidata enable us to generate databases of many sizes and varieties.", "Facts To automate the generation of questions and answers, sentences must be grounded in Wikidata identifiers.", "One approach to generating facts would be to use templates or to collect them through grounded information extraction datasets such as T-REx (Elsahar et al., 2018).", "However, to ensure wider linguistic variety as well as accuracy of the mapping, we use verbalizations of knowledge graph triples that are synthesized through a sequence-to-sequence model.", "Concretely, we use generated sentences from KELM (Agarwal et al., 2020), which are not grounded with Wikidata IDs, and generate a post-hoc mapping back to Wikidata.", "For example, given the sentence The Slice of Life manga series The Film Lives On was written by Osamu Tezuka. , we map it to the Wikidata triple (Q11332517, P50, Q193300) .", "Our mapping is a two-step process: firstly, we look up entity names from Wikipedia, returning multiple matches for Osamu Tezuka , and secondly we filter these based on which have an author relation to The Slice of Life in the Wikidata graph.", "While out of scope for this paper, this technique could be applied to generate training datasets for novel domains.", "WIKINLDB uses both atomic facts in KELM (about a single relation of an entity) and composite facts (about multiple relations)."
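A minimal sketch of this two-step mapping, assuming a pre-built label index (`name_to_qids`) and a set of Wikidata edges (`triples`); both data structures and the function name are hypothetical, chosen only to illustrate the lookup-then-filter logic.

```python
def map_fact(subject_name, relation_pid, object_name, name_to_qids, triples):
    """Ground a verbalized (subject, relation, object) fact in Wikidata IDs.

    Step 1: look up candidate QIDs for the entity names (here via a simple
    dict; the paper looks names up from Wikipedia). Step 2: keep the candidate
    pair that is actually connected by the relation in the Wikidata graph.
    """
    for s in name_to_qids.get(subject_name, []):
        for o in name_to_qids.get(object_name, []):
            if (s, relation_pid, o) in triples:
                return (s, relation_pid, o)
    return None  # the fact could not be grounded

# Given the running example, a call such as
#   map_fact("The Film Lives On", "P50", "Osamu Tezuka", index, edges)
# would return ("Q11332517", "P50", "Q193300").
```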
number of facts).", "ther 15 relations with a further 86 template fragments.", "The relations we chose were selected from a weighted sample of the most common entity types in KELM.", "In total, we generate five variants of the dataset containing databases of size 25 to 1000 facts where each fact has between 30-50 tokens.", "Dataset statistics are reported in Table", "1. 5 Models 5.1 Neural Select-Project-Join The SPJ operator is trained as a sequence-to-sequence model to generate intermediate results from a support set and a given query.", "All facts in the support set are concatenated with the query before being input to a transformer model.", "The model is trained to output different derivations depending on the query type.", "For the min , max operators, the projection is a machine-readable key-value pair, illustrated in Figure", "3. For example which place has the highest yearly number of visitors? has the projection of the form: (place, number of visitors) allowing an argmax operation by the downstream aggregation module.", "For queries with Boolean answers, the output is a token indicating whether the answer is true or false.", "And for all other queries where a set of results is returned or counted, the output is simply a span, such as an entity or numerical value, extracted from the support set.", "Even though we use intermediary annotation for the SPJ operator, we believe that collecting such annotation is a simpler labeling task compare to collecting the answers to the queries.", "For example, given the fact Serena Jameka Williams (born September 26, 1981) is an American professional tennis player and former world No. and the query List all the female athletes who were born in 20th centure. , it seems relatively simple to provide the label Serena Jameka Williams .", "However, it is non-trivial to produce a list of potentially hundreds of entities as answer (e.g. [ Serena Jameka Williams, Simona Halep, Mary Lou Retton, Megan Rapinoe, Kim Simmone, Mary Abichi, . . . 
]).", "The training of the components in our proposed architecture does not depend on the final answer and instead, on the simpler intermediary labels.", "Predicting Aggregation Operator Rather than using a separate classifier to predict the question type, we encode the choice of operator as a special token that is predicted by the SPJ operator prepended to the model output (Figure 3).", "The aggregation operator is chosen using a majority vote over all generated derivations from all support sets.", "Negative Example Generation It is important for the SPJ to be resilient to extraneous facts that might be returned by a low-precision high-recall SSG.", "Negative instances for training are generated in two ways: (1) queries are paired with randomly sampled facts and the model is trained to generate a NULL projection (indicating the support set does not contribute to the answer).", "For example, a fact about someone's date of birth isn't useful when answering a query about the visitor count of an attraction.", "(2) for a portion of the training instances, we additionally sample extraneous unrelated facts and append these to the support sets simulating false-positive facts from the SSG.", "For simple queries over single facts, conventional information retrieval, such as TF IDF could be considered a primitive SSG.", "However, this would not scale for joins, aggregation queries or for queries outputting a set of answers as generating relevant sets requires incremental decoding, conditioning on already retrieved facts.", "Algorithm 1: SSG modeled as multi-label classification: using maximum inner product search (MIPS) over vector encodings of facts U and state V", "it is akin to enumerating the powerset.", "We construct support sets efficiently by taking an incremental approach, starting from the empty set (see Algorithm 1).", "At each step, the classifier considers the partially generated support set D k and the query and predicts which candidate facts u i D from the database should be added, or whether to stop the iteration, these choices being modeled as a multi-label classification task.", "If STOP is predicted, the partial result set D k is closed (i.e., it forms part of the output); otherwise, for each fact added, a new intermediate ( open ) support set is generated which is explored in the next iteration.", "For efficiency, we use a bi-encoder architecture that independently encodes the facts in the database and the state (query and a partial support set) and computes the inner product between the encoded representations to generate a score: CU ( u i ) TCV ( Q, D k ) .", "The encoders are pre-trained transformers fine-tuned to yield a high inner product between the state's encodings and relevant facts to be added.", "At prediction time, the vectors encoding the facts are static and are pre-computed offline.", "At each step, t , we encode the state using a transformer by concatenating the query tokens and the facts in the partially generated support set D k .", "The SSG is trained with full supervision of all partial support sets from the dataset and trained to predict which facts to add to the support set using a contrastive loss.", "Complexity of SSG The inner loop of Algorithm 1 involves a Maximum Inner Product Search (MIPS) between the encoded state and the encodings of the facts, which is linear in the number of facts.", "Approximate search, such as FAISS (John-son et al., 2019), accelerate retrieval to O (log 2 n ) .", "If we assume a query needs a maximum of b support sets, and the 
"Complexity of SSG The inner loop of Algorithm 1 involves a Maximum Inner Product Search (MIPS) between the encoded state and the encodings of the facts, which is linear in the number of facts.", "Approximate search, such as FAISS (Johnson et al., 2019), accelerates retrieval to O(log₂ n).", "If we assume a query needs a maximum of b support sets, and the average size of a support set is m , then the complexity of the SSG algorithm is O(b·m·log₂ n).", "Both b and m are bounded by the number of facts in the database, n , but in practice we would expect only one of the factors b or m to be large.", "However, there is fertile ground for developing methods for indexing (and/or clustering) the facts in the database so that only a few facts need to be considered in each iteration of the inner loop of the algorithm, leading to significant speedups.", "We compare our proposed architecture to transformer-based models that explore the effect of three attention mechanisms representative of the state of the art.", "Self-attention in transformers captures both inter -fact as well as intra -fact interactions between tokens.", "However, computing self-attention is quadratic with respect to memory, and scaling beyond 1024 tokens is non-trivial.", "In our baselines, the task formulation is a sequence-to-sequence model, similar to that used in question answering.", "All (relevant) facts are encoded with the query, and the transformer is trained to predict the answer without using any intermediate representations.", "We compare full self-attention against independently encoding the facts (in the context of the query) and fusing the embeddings in the decoder (Fusion in Decoder (FiD); Izacard and Grave, 2020).", "Because FiD independently encodes contexts, run-time complexity is reduced to be linear with respect to the number of facts, at the expense of not having inter-fact attention.", "We additionally compare to using windowed attention over facts with global attention to the query, using Longformer (Beltagy et al., 2020).", "Inter-fact attention is captured only within the window.", "We use the HuggingFace (Wolf et al., 2020) transformers library and its implementations of T5 and Longformer.", "For the SSG, we use BERT, which has an architecture comparable to T5's, to generate encodings.", "The learning rate for fine-tuning and the number of epochs were selected by maximizing the Exact-Match (EM) accuracy on a held-out validation set for the tasks.", "Table 2 (answer accuracy, %, reported as PerfectIR / WholeDB): NeuralSPJ + Aggr (ours) 90.10 ± 0.3 / –; T5 85.59 ± 0.2 / 65.96 ± 0.5; Longformer 76.43 ± 3 / 58.58 ± 0.4; Fusion in Decoder 39.61 ± 0.2 / 23.18 ± 0.6.", "Table 2 caption: T5 and Longformer both capture inter-fact attention whereas Fusion in Decoder does not.", "For each experiment, we train 3 separate models with different seeds and report mean accuracy.", "The SPJ models are only trained on the small database of 25 facts and applied to larger databases at test time.", "For most queries, we measure correctness using Exact Match (EM), which is 1 if the answer string generated by the model is exactly equal to the reference answer and 0 otherwise.", "This metric is used to score outputs where either a Boolean, null, string or numeric answer is expected.", "When a set of results is returned, we compute the F1 score considering exact matches of set elements.", "When comparing models and reporting results, we report macro-averages over all instances in the test set.", "We collectively refer to this as Answer Accuracy ."
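The three measures can be summarized in a few lines; this is a straightforward reading of the definitions above rather than the authors' evaluation script.

```python
def exact_match(prediction: str, reference: str) -> float:
    # EM: 1 if the generated answer string equals the reference, else 0.
    return float(prediction.strip() == reference.strip())

def set_f1(predictions: set, references: set) -> float:
    # F1 over exact matches of set elements, for queries returning result sets.
    if not predictions or not references:
        return float(predictions == references)
    tp = len(predictions & references)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(predictions), tp / len(references)
    return 2 * precision * recall / (precision + recall)

def answer_accuracy(per_instance_scores):
    # Macro-average of per-instance scores (EM or set-F1) over the test set.
    return sum(per_instance_scores) / len(per_instance_scores)
```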
information and noise.", "The overall scores, in Table 2, indicate that without a retrieval mechanism (i.e., WholeDB), all models were susceptible to distractor facts.", "Furthermore, encoding all facts in a single model is not a viable solution to answer queries posed to NLDBs as this approach does not accurately answer queries that combine multiple support sets, illustrated in Figure 4, and cannot easily scale to thousands of facts.", "Using a transformer yields errors when the query requires computation, such as counting, highlighted when comparing rows 1 and 3 of Table", "3. 0 1 2-4 5-9 10-19 Number of support sets 0.2 0.4 0.6 0.8 1.0 A n s w e r A cc u r a c y Neural SPJ T5 Longformer FiD Figure 4: (PerfectIR) Even when provided with the correct contexts, baseline scores decrease for queries requiring the combination of multiple support sets.", "Inter-fact attention Applying FiD, which does not capture inter-fact attention, to scale to larger databases would not be successful because answer accuracy further decreases with with support set size.", "Applying Longformer, which captures inter-fact attention within a window could yield outcomes similar to the T5 transformer baseline where relevant facts are encoded with similar locality.", "However, in the limit, where context falls between different attention windows, the model could degrade to be similar to FiD.", "Our architecture consists of a support set generator (SSG), a select-project-join (SPJ) operator that generates derivations over the support sets and an aggregation function over the results of the SPJ operators.", "Assuming a perfect SSG, the SPJ accurately answers more queries than the T5 transformer baseline (Table 2) because of the computation within the aggregation function that yields higher scores for min/max and count queries, displayed in Table", "3. 
"In combination with the SSG, the overall score decreases to 67% due to retrieval errors.", "However, SSG+SPJ still exceeds the WholeDB baselines.", "It is tricky to evaluate the SSG in isolation, because errors here do not necessarily translate into errors in query answers.", "For example, the SSG may return a superset of a support set, but the SPJ may still generate the correct answer.", "Table 4 shows the performance of the SSG for a database of 25 facts.", "An output is considered an exact match if it is exactly the same as a support set in the reference data, and a soft match if it is a superset thereof.", "As described above, the choice of aggregation function is made through a special token decoded by the SPJ.", "For 1.4% of instances, an incorrect choice of aggregation function was made, or the machine-readable outputs from the SPJ could not be parsed.", "We scale the baseline transformers to larger databases using TF-IDF and DPR to retrieve appropriate facts.", "However, these models are still limited by the encoder size of the transformer.", "In contrast, the SPJ operates over support sets of 1-2 facts and, in combination with the SSG, can scale to arbitrarily large databases, illustrated in Figure 5.", "For Boolean queries, the combination of T5 and TF-IDF scored 89%, exceeding the accuracy of the SSG+SPJ.", "This is because TF-IDF exploits token matching between the query and facts.", "For larger databases, the retrieval errors resulted in lower answer accuracy.", "While, with a perfect SSG, the SPJ accurately answers most query types, as database size increases, the propagation of errors from the SSG results in erroneous answers.", "Database queries require reasoning over a large set of relevant and non-redundant facts and performing aggregation.", "While inroads have been made to perform discrete reasoning and computation over passages (Dua et al., 2019), with explicit computation (Andor et al., 2019) or differentiable modules (Gupta et al., 2020), these use only a single passage rather than requiring aggregation over large numbers of facts from different texts.", "[Figure 5: scaling to larger databases with a model trained using 25 facts and tested on larger databases; answer accuracy is plotted against the number of facts in the DB (25 to 1000) for SPJ PerfectIR, SSG+SPJ, T5 + TF-IDF and T5 + DPR.]", "Multi-hop question answering requires finding supporting evidence in multiple documents (see Welbl et al. (2018); Talmor and Berant (2018); Wolfson et al. (2020) for datasets facilitating this research).", "In answering multi-hop questions, existing works decompose the question into simpler sub-questions (Min et al., 2019; Wolfson et al., 2020), or condition each hop on the previously retrieved documents (Asai et al., 2019; Xiong et al., 2020).", "While tasks such as ComplexWebQuestions (Talmor and Berant, 2018) and BREAK (Wolfson et al., 2020) focus on complex queries that can be broken down into simpler ones, our focus is on set-based and aggregation queries, where the complexity comes from the need to retrieve and process a large number of non-redundant relevant facts.", "In contrast to the set and count tasks in bAbI (Weston et al., 2015), where each query is based on a small context (fewer than 20 facts), our dataset scales from databases of 25 facts to 1000.", "Bridging the gap between unstructured natural language data and database-style querying has been a long-standing theme in database research (Halevy et al., 2003).", "The work on information extraction has developed techniques for translating segments of natural language text into triples that can be further processed by a database system."
"There has been significant work on translating queries posed in natural language into SQL queries on a database whose schema is known (Androutsopoulos et al., 1995; Li and Jagadish, 2014; Zeng et al., 2020), with extensions to semi-structured data and knowledge bases (Pasupat and Liang, 2015; Berant et al., 2013).", "More recently, systems such as BREAK (Wolfson et al., 2020) and ShARC (Saeidi et al., 2018) have trained models to translate a natural language query into a sequence of relational operators (or variants thereof).", "Database systems are the workhorse of data analysis, but they require a pre-defined schema.", "Part of their power stems from the fact that a data analyst can explore the data by easily posing a wide variety of queries.", "Given the rise in the amount of data that is becoming available in text, images and other modalities, we would like to build systems that enable the flexibility of posing complex queries against such data, but without the need for a pre-defined schema.", "This paper proposed an architecture for neural databases and the associated WIKINLDB dataset, as first steps towards realizing a system for querying multi-modal data.", "Our architecture is capable of overcoming the limitations of transformer models because it runs multiple transformers in parallel, each taking a small set of facts.", "Consequently, NLDBs can scale to large databases.", "Additional research is required in order to scale NLDBs to larger datasets, to more complex queries, and to multi-modal data.", "In particular, one of the key components of the architecture is the SSG module, which retrieves the relevant facts to feed to each instance of the neural SPJ.", "We believe that, in practice, the semantics of the application will provide a strong hint on which facts may be relevant.", "For example, when querying a large corpus of social-media posts, each post is a candidate support set as long as the query does not require joining data from multiple posts.", "In addition, we assumed that our databases describe a snapshot of the world.", "In practice, we may have facts that override previous ones (e.g., 'Samantha works for Apple', followed by 'Samantha works for Twitter'), and we would need to reason about which facts should be ignored.", "We would like to thank Yann LeCun and Antoine Bordes for the initial discussion that sparked the idea of neural databases.", "This work was performed while James Thorne and Fabrizio Silvestri were at Facebook.", "Ethical Concerns An NL database is very similar to a traditional database in terms of applications, with the difference that it extends the use of databases to unstructured text.", "For example, NL databases can be used to produce analytics on data expressed in natural language.", "For an NL database to be applicable in the context of a virtual assistant, it will likely need to be trained on real-world conversations.", "Privacy-preserving ML methods should be considered for such applications.", "Environmental Concerns Large transformer-based models require substantial computational resources and energy for pre-training and fine-tuning.", "As a result, such models raise environmental concerns.", "In our proposed architecture, we only fine-tune transformer models on small support sets.", "We then use several instances of such models in parallel for inference, instead of a single large model, even on large datasets.", "Therefore, the model is relatively efficient, both during fine-tuning and during inference." ]
[ "abstain", "abstain", "result", "objective", "objective", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "result", "objective", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "method", "abstain", "other", "other", "other", "other", "abstain", "abstain", "method", "objective", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain" ]
[ "Research in stance detection has so far focused on models which leverage purely textual input.", "In this paper, we investigate the integration of textual and financial signals for stance detection in the financial domain.", "Specifically, we propose a robust multi-task neural architecture that combines textual input with high-frequency intra-day time series from stock market prices.", "Moreover, we extend WT WT , an existing stance detection dataset which collects tweets discussing Mergers and Acquisitions operations, with the relevant financial signal.", "Importantly, the obtained dataset aligns with STANDER , an existing news stance detection dataset, thus resulting in a unique multimodal, multi-genre stance detection resource.", "We show experimentally and through detailed result analysis that our stance detection system benefits from financial information, and achieves state-of-the-art results on the WT WT dataset: this demonstrates that the combination of multiple input signals is effective for cross-target stance detection, and opens interesting research directions for future work.", "Stance detection (SD) is the task of automatically classifying the writer's opinion expressed in a text towards a particular target (Kk and Can, 2020).", "Starting from Mohammad et al. (2016)'s seminal work, research on Twitter SD gained increasing popularity (Ghosh et al., 2019), embracing new topics (Derczynski et al., 2017; Aker et al., 2017a; Conforti et al., 2020b) and languages (Gorrell et al., 2019; Vamvas and Sennrich, 2020a; Zotova et al., 2020).", "In recent years, research on SD has mainly focused on cross-target generalization, in which an SD system is tested on targets unseen during training (Xu et al., 2018).", "Cross-target generalization constitutes one of the biggest challenges in Twitter SD (AlDayel and Magdy, 2021): in this context, researchers investigated a wide range of techniques, including adversarial training (Wang et al., 2020; Allaway et al.), cross-lingual transfer (Mohtarami et al., 2019), knowledge transfer using semantic and emotion lexicons (Zhang et al., 2020), weak supervision through synthetic samples (Conforti et al., 2021b; Li and Caragea, 2021), and various types of cross-domain transfer (Schiller et al., 2021; Hardalov et al., 2021a).", "In this paper, we study multimodality as a means to enhance cross-target generalization in Twitter SD.", "Multimodal Machine Learning studies the integration and modeling of multiple modalities (Elliott et al., 2016), where a modality refers to the way in which something happens (Baltrusaitis et al., 2019).", "Our contributions are as follows:", "1. We study multimodal learning for Twitter SD.", "Despite being an established research area in NLP (Elliott et al., 2016), SD in a multimodal context is still understudied.", "2. We extend WT WT , an SD dataset which collects English tweets discussing four Mergers and Acquisitions operations (M&As or mergers , Conforti et al. (2020b)), with high frequency intra-day stock market data for the involved companies, which we release for future research 1 .", "We note that the union of our financial signal with WT WT and with STANDER , an SD corpus collecting news articles discussing the same mergers (Conforti et al., 2020a), will constitute the first multi-genre, multi-modal parallel resource for SD and, more generally, one of the very few of this kind in NLP.", "3. 
"3. We propose SDTF ( Stance Detection with Textual and Financial signals), a novel multitask, multimodal architecture for Twitter SD, which integrates textual and financial signals.", "(Footnote 1: https://github.com/cambridge-wtwt/acl2022-wtwt-stocks)", "4. Finally, we show experimentally that SDTF benefits from the information encoded in the financial signal, achieving state-of-the-art results on the WT WT dataset; the integration of multiple input signals thus constitutes a promising research direction to tackle cross-target generalization for SD.", "We study SD in the financial domain and consider tweets discussing M&A operations, i.e. financial transactions in which the ownership of a company (the target ) is transferred to another company (the buyer ; Bruner and Perella (2004)).", "An M&A process usually comprises many stages, ranging from informal talks between the companies' boards to acquisition planning, negotiations, and external approvals, up to the closing of the deal (or its rejection, e.g. by antitrust bodies).", "M&As account for billions of dollars of investment globally and have been widely studied under many aspects (Gomes and Maldonado, 2020).", "They are well known in NLP (Lefever and Hoste, 2016; Yang et al., 2020; Conforti et al., 2020a,b) and constitute an important application in other AI fields, with a strong focus on automatic prediction of the M&A outcome (Yan et al., 2016; Jetley and Ji, 2010; Moriarty et al., 2019; Venuti, 2021).", "In our task, a model receives a tweet and a target merger, and has to predict the stance expressed by the tweet's author with respect to the likelihood of the merger succeeding (e.g., Target: a given merger; Stance: refute ).", "All existing models for financial SD leverage only the tweet's text as input (Conforti et al., 2020b; Liang et al., 2021; Li and Caragea, 2021).", "However, a user tweeting at a particular time is immersed in a context which shapes their view of the world: their opinion about an M&A's outcome will be influenced by how the involved companies are perceived.", "In this paper, we use a variation of the stock market prices from the n days prior to a tweet's posting as a means to provide a model with such context.", "According to the Efficient Market Hypothesis (Fama, 1970), stock market prices reflect all publicly known information.", "Even though the Efficient Market Hypothesis is controversial (Malkiel, 2003), stock market prices still reflect a considerable amount of publicly known information.", "Therefore, we argue that they can be used as a proxy for the available knowledge about the merger at a given time.", "The relationship between rumors about an M&A operation and their effect on the involved companies' stocks is mutual and has been widely studied in finance (Ma and Zhang, 2016; Betton et al., 2018; Jia et al., 2020; Gorman et al., 2021; Davis et al., 2021), but never investigated in NLP.", "To our knowledge, the integration of textual and financial data signals has been studied for financial forecasting (Schumaker and Chen, 2009; Hu et al., 2018; Sawhney et al., 2020a,b, 2021c; Ni et al., 2021), but has yet to be investigated for SD.", "Traditionally, research on SD has focused on user-generated data, such as blogs and commenting sections on websites (Skeppstedt et al., 2017; Hercig et al., 2017), apps (Vamvas and Sennrich, 2020b), online debate forums (Somasundaran and Wiebe, 2009), Facebook posts (Klenner et al., 2017) and, above all, Twitter."
"Since Mohammad et al. (2016)'s seminal work, Twitter has been used as a data source for collecting corpora covering a wide range of domains, from US politics (Mohammad et al., 2017; Inkpen et al., 2017) to mental health (Aker et al., 2017b), breaking news events (Zubiaga et al., 2016; Gorrell et al., 2019), finance (Conforti et al., 2020b), and the COVID pandemic (Hossain et al., 2020; Glandt et al., 2021).", "SD has been studied both as a stand-alone, isolated task, and integrated as a sub-component of more complex NLP pipelines (Hardalov et al., 2021b).", "Starting from the pioneering work by Vlachos and Riedel (2014), SD has been identified as a key step in fake news detection (Lillie and Middelboe, 2019) and automated fact-checking (Popat et al., 2017; Thorne and Vlachos, 2018; Baly et al., 2018).", "Multimodal learning has proven successful for many NLP tasks (Tsai et al., 2019; Zadeh et al., 2020), including grounding (Beinborn et al., 2018), visual question answering (Ben-Younes et al., 2017; Yu et al., 2018), sentiment analysis (Rahman et al., 2020), and humor detection (Hasan et al., 2019).", "To our knowledge, only one dataset exists for multimodal SD: MULTISTANCECAT (Taulé et al., 2018; Segura-Bedmar, 2018), released for IberEval2018 2 .", "(Footnote 2: http://www.autoritas.net/MultiStanceCat-IberEval2018/)", "MULTISTANCECAT collects 11,398 tweets in Spanish and Catalan discussing the 2017 Catalan Independence referendum: according to Taulé et al. (2018), the corpus is multimodal because it contains, along with the tweets' text, contextual information and up to 10 images downloaded from the authors' timelines.", "We note that, unfortunately, almost all research building on MULTISTANCECAT considered only the provided textual features, thus ignoring its multimodal component.", "As mentioned in Taulé et al. (2018, p. 157), only 1 out of the 4 teams participating in the task integrated images into their model, by training a CNN on Spanish and Catalan flags (with the underlying intuition that using them would hint at the user's stance with respect to the topic of Catalan independence) 3 .", "(Footnote 3: The team did not submit working notes describing their system; therefore, we refer to the model's overview provided in the general task paper (Taulé et al., 2018).)", "Interestingly, no positive impact was observed on SD results when including such multimodal signals.", "Our work differs in a number of respects: (1) the size of our corpus is considerably larger, thus allowing for more robust training; (2) we do not consider visual signals, such as images, but, consistently with WT WT 's domain, financial time-series signals from stock market prices; and (3) most notably, MULTISTANCECAT 's multimodal signal consists of a maximum of 10 images taken from the user's timeline: therefore, the images might not be related to the tweet, might have been posted at a very different timestamp, or might be the same for multiple tweets published by the same author.", "In contrast, our financial signal is specific to each tweet and is perfectly aligned with its time of posting.", "In recent years, there has been an increasing interest in research at the intersection between finance and NLP (Hahn et al., 2018; El-Haj et al., 2018), with a rich stream of work focusing on financial textual analysis (Lang and Stice-Lawrence, 2015; Loughran and McDonald, 2016), sentiment analysis (Giachanou and Crestani, 2016; Chan and Chong, 2017; Krishnamoorthy, 2018), stance detection (Conforti et al., 2020b,a, 2021a), volatility prediction (Rekabsaz et al., 2017; Kolchyna et al., 2015) and, above all, financial forecasting (Qasem et al., 2015; Ranco et al., 2015; Pagolu et al., 2016; Pimprikar et al., 2017; Oliveira et al., 2017)."
"While multimodality has not been investigated for financial SD, it constitutes a very active research direction in financial forecasting, i.e. the task of predicting a business' future financial performance (Abu-Mostafa and Atiya, 1996).", "Given the importance of psychological and behavioral elements in stock-price movements (Malkiel, 2003), researchers in economics have started to explore models which leverage features beyond simple numerical values (Nikou et al., 2019; Liu and Chen, 2019).", "In this context, a stream of work analyzed the integration of historical price data with social media texts (Sawhney et al., 2020a) and other audio or textual features (Zhao et al., 2019; Qin and Yang, 2019; Sawhney et al., 2021b; Lee and Yoo, 2020; Sawhney et al., 2021b,a; Das et al., 2021; Chen and Huang, 2021).", "Text Signal.", "As our text signal, we use Will-They-Won't-They ( WT WT ; Conforti et al., 2020b) 4 , which collects English tweets discussing four M&As between US companies (Table 1).", "WT WT is expert-annotated for stance with respect to the likelihood of the merger happening according to the opinion expressed in the text, following a four-class classification schema: support, refute, comment and unrelated (i.e. the tweet does not discuss the merger).", "Below, we report one example for each of the considered labels (targets in square brackets): Support [ CVS _ AET ]: CVS, Aetna $69B merger wins DOJ approval <URL>; Refute [ ANTM _ CI ]: Big-name lawmakers want to block Aetna-Humana and Anthem-Cigna!; Comment [ ANTM _ CI ]: Anthem-Cigna deal would create 'Big 3': If the deal is approved; Unrelated [ CVS _ AET ]: Urge Your Legislators to Oppose CVS and Walmart Takeover of Medical Care Delivery!!! <URL> #MSSNY"
"Financial Signal.", "For the four healthcare M&As in WT WT 5 , we obtain historical prices in 30-min intervals for the involved stocks.", "(Footnote 4: WT WT can be downloaded, upon signing a data sharing agreement, from its GitHub repository https://github.com/cambridge-wtwt/acl2020-wtwt-tweets.)", "(Footnote 5: Note that this aligns with the targets collected in STANDER , a news SD corpus (Conforti et al., 2020a).)", "The financial data was bought from FirstRate Data LLC 6 (about 700MB) at market price.", "(Footnote 6: https://firstratedata.com/)", "Each entry in the data has the following fields: DateTime, Open, High, Low, Close, Volume .", "DateTime is in US Eastern Time, in the format YY-MM-DD h:m:s .", "Only minutes with trading volume are included: times with zero volume, such as during weekends or holidays, are omitted.", "Prices are adjusted for dividends and splits.", "We used Python's datetime library to align Twitter time values (UTC) with the financial signal (EST, New York Stock Exchange) 8 .", "(Footnote 8: The timestamps of posting of each tweet in the WT WT dataset can be shared in accordance with the terms of use outlined by Twitter, https://developer.twitter.com/en/developer-terms/agreement-and-policy. No private information (such as the username of the tweet's author and similar) is shared.)", "Note that price variations in 30-minute intervals are considerably more granular than the financial signal used in NLP work, which is mostly limited to daily data (Sawhney et al., 2020a).", "Such granularity is necessary when monitoring tweets, which are highly reactive to real-time, on-topic information from the outside world (ALRashdi and O'Keefe, 2019).", "Analysis.", "Figure 1 shows an example of the integration of the two signals.", "On the day the antitrust complaint was made to the Department of Justice regarding the M&A operation, ANTM 's price increased while CI 's decreased.", "Such movements indicate that the event changed the market's view: people believe that the merger is less likely to happen, and this is reflected in their investment decisions.", "The direction of the price variation reflects standard M&A theory (Bruner and Perella, 2004): the buyer will not buy the target's shares at a premium, and thus the owners of the target's stock will not profit from the acquisition.", "The price variation is useful for classifying a tweet on that day, as it implies that the likelihood of a refute label is higher.", "This is reflected in the tweet distribution in the lower part of the figure: the distribution of tweets on that day shows that most of them were indeed refuting .", "We report one more example in Appendix A."
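A minimal sketch of such an alignment, assuming Python 3.9+ with zoneinfo; the paper only states that the datetime library was used, so the exact procedure below is an assumption.

```python
from bisect import bisect_right
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

def most_recent_bar(tweet_utc: datetime, bar_times: list) -> datetime:
    """Map a tz-aware UTC tweet timestamp to the latest 30-min price bar
    at or before it; bar_times are sorted, tz-aware US/Eastern DateTimes."""
    tweet_est = tweet_utc.astimezone(ZoneInfo("America/New_York"))
    idx = bisect_right(bar_times, tweet_est) - 1
    if idx < 0:
        raise ValueError("tweet precedes the first available price entry")
    return bar_times[idx]  # tweets outside trading hours fall back to the last bar

# e.g. a tweet at 01:30 UTC corresponds to 20:30 EST, when the market is
# closed, so it is paired with the final bar of the previous trading day.
```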
"5 Models As shown in Figure 2, our multitask SDTF model is composed of a textual , a financial and a multimodal component.", "Following previous work in SD (Hardalov et al., 2021a), we obtain a vector representation h_text ∈ R^d for the textual input by averaging the token-level hidden states from the last layer of a large transformer (in our case, BerTweet (Nguyen et al., 2020)), computed over the concatenation of the input text and the target, where target consists of the string B (b, t_b) will merge with T (t, t_t) , in which B , b and t_b are the buyer's name, acronym and Twitter username 9 (same for the target company).", "(Footnote 9: For example, Anthem (ANTM, AnthemInc) .)", "This is in principle the same as in Liang et al. (2021), with two differences: we add the companies' official Twitter usernames and, similarly to other SD works (Hardalov et al., 2021a), we consider first the input text, and then the target.", "Input.", "For each tweet posted at time s , we consider a window of w days in the past.", "At each timestep i in { s−w, s−w+1, ..., s } , we consider two price vectors p_i^b, p_i^t ∈ R^12, which consist of: p_i^b = p_i1^b ⊕ p_i2^b ⊕ p_i3^b = [o^b, c^b, h^b, l^b] ⊕ [o^m, c^m, h^m, l^m] ⊕ [v^b, r^b, c^b − c^m, r^b − c^m] (1), where o , c , h , l and v are, respectively, the opening, closing, highest and lowest price and the volume of transactions at time i for the buyer's stock (superscript b ) or for the overall market index (superscript m ); finally, r is the return at time i and is defined as (c_i^b − c_{i−1}^b) / c_{i−1}^b (Law, 2018) (same for the target).", "Price Embeddings.", "We obtain a vector representation e_i^b for each time point i by concatenating: e_i^b = p_i^b ⊕ e_i1^b ⊕ e_i2^b (2), where e_i1^b and e_i2^b are the time embeddings for p_i1^b and p_i2^b (same for the target).", "We use Time2Vec (Kazemi et al., 2019) for time embeddings, and we jointly learn embeddings for the buyer and the target.", "Price Encoder.", "As in Du and Tanaka-Ishii (2020) and Kostkova et al. (2017), we use a Gated Recurrent Unit (GRU; Cho et al., 2014) to encode the price variations over time.", "We implement two separate networks, GRU_b and GRU_t, for the buyer and the target.", "At time i , GRU_b's output consists of: h_i = GRU_b(e_i^b, h_{i−1}), s−w ≤ i ≤ s (3).", "To model the inter-dependencies between the two stocks, we use a multi-head attention mechanism (Vaswani et al., 2017), which, in our experiments, proved to be more effective for SD than the classic temporal attention used in financial forecasting (Feng et al., 2019).", "In practice, we obtain a unified price vector representation h_price as: h_b = α_b(H_t, H_b) (4), h_t = α_t(H_b, H_t) (5), h_price = h_b ⊕ h_t (6), where α_b and H_b (resp. α_t and H_t ) are the buyer's (resp. target's) multi-head attention mechanism and the matrix consisting of GRU_b's (resp. GRU_t's) outputs."
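A compact PyTorch sketch of the financial component (Eqs. 1-6), with illustrative dimensions; the single Time2Vec module per company, the assignment of queries and keys in the cross-attention, and the mean-pooling over time steps are all simplifying assumptions, since the excerpt does not pin these details down.

```python
import torch
import torch.nn as nn

class Time2Vec(nn.Module):
    # Minimal Time2Vec (Kazemi et al., 2019): one linear term plus periodic terms.
    def __init__(self, k: int):
        super().__init__()
        self.w = nn.Parameter(torch.randn(k))
        self.b = nn.Parameter(torch.randn(k))

    def forward(self, t):                      # t: (batch, steps, 1)
        v = self.w * t + self.b                # broadcast to (batch, steps, k)
        return torch.cat([v[..., :1], torch.sin(v[..., 1:])], dim=-1)

class PriceEncoder(nn.Module):
    """Price vectors + time embeddings -> per-company GRUs -> cross attention."""
    def __init__(self, price_dim=12, time_dim=8, hidden=64, heads=4):
        super().__init__()
        self.t2v_b, self.t2v_t = Time2Vec(time_dim), Time2Vec(time_dim)
        self.gru_b = nn.GRU(price_dim + time_dim, hidden, batch_first=True)
        self.gru_t = nn.GRU(price_dim + time_dim, hidden, batch_first=True)
        self.attn_b = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.attn_t = nn.MultiheadAttention(hidden, heads, batch_first=True)

    def forward(self, p_b, p_t, t):            # p_*: (batch, steps, 12); t: (batch, steps, 1)
        h_b, _ = self.gru_b(torch.cat([p_b, self.t2v_b(t)], dim=-1))   # H_b
        h_t, _ = self.gru_t(torch.cat([p_t, self.t2v_t(t)], dim=-1))   # H_t
        a_b, _ = self.attn_b(h_t, h_b, h_b)    # buyer states attended via target
        a_t, _ = self.attn_t(h_b, h_t, h_t)
        # h_price: concatenation of the two attended summaries (Eq. 6)
        return torch.cat([a_b.mean(dim=1), a_t.mean(dim=1)], dim=-1)
```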
GRU t 's) outputs.", "Signals from different modalities encode complementary information (Schumaker and Chen, 2009): we avoid simple concatenation (Li et al., 2016),", "which would treat such signals equally, and implement a bilinear transformation to integrate the tweet's encoded representation with the historical prices of the involved companies (Sawhney et al., 2020a).", "Given the price and the text vector representations h price R p and h text R d , we obtain a combined vector representation h R w as: h = relu ( h Ttext W h price + b ) (7) where W R w d p and b R w are the learned weight matrix and bias.", "Stance Detection.", "We expect the financial signal to be relevant only in the case of related stance labels (i.e. support, refute, comment ).", "In order to assist the model in differentiating between those two macro-classes, we predict a binary label re-lated/unrelated along with the stance label y stance : y stance = softmax ( h ) y binary = ( h text ) (8) Financial Forecasting.", "As it has been previously studied in finance, rumors about a merger can affect the stock prices of the involved companies (Jia et al., 2020; Davis et al., 2021).", "To encourage our model to learn such influence, we also add two binary financial-related outputs, in which we predict the stock movement of the two companies: y buyer = ( h buyer ) (9) y target = ( h target ) (10) where h buyer (resp. h target ) is the concatenation of the last output vector of GRU b and h , and y buyer (resp. y target ) { , } (i.e., stock closing price for the considered company will resp. move up, or fall).", "The final loss is: L = L stance + 0 .", "For L stance we use categorical cross-entropy loss, while L binary , L buyer and L target use binary cross-entropy loss function.", "The weights of the last three loss components were empirically set in an initial pilot.", "Preprocessing.", "We perform minimal preprocessing on the textual signal.", "Concerning the financial signal, we consider a window of 30 timepoints in the past, and price variations every 30 minutes: depending on the tweet's posting time, this accounts for the previous 2.5 days 10 .", "For FF, we predict ups or downs in the considered company's closing price 2 hours after the tweet 11 (see Appendix B.1 for details).", "Training Setup and Evaluation.", "Details on the training setup and (hyper-)parameter settings are reported in Appendix B.2 for replication.", "Following Hanselowski et al. (2018); Conforti et al. (2020b), we consider macro-averaged precision, recall and F 1 score.", "To account for performance fluctuations (Reimers and Gurevych, 2017), we average three runs for each model (standard deviation is reported in Appendix B.2).", "SVM , a linear-kernel SVM leveraging bag of ngrams (over words and characters) features, similar as in Mohammad et al. (2017); CrossNet , a cross-target SD model (Xu et al., 2018) consisting of a bidirectional conditional encoding model over LSTMs, augmented with self-attention and two dense layers; SiamNet , a siamese network similar to San-tosh et al. (2019), which is based on a BiL-STM followed by a self-attention layer; HAN , a Hierarchical Attention Network as in (Sun et al., 2018)) which uses two levels of attention to leverage the tweet representation along with linguistic information (sentiment, dependency and argument);", "and two further baselines from Liang et al. 
(2021):", "BERT , a strong vanilla BERT-based model fine-tuned on WT WT ; TPDG , a sophisticated network based on a target-adaptive pragmatics dependency graph.", "10 During night or holidays, price entries are usually not available.", "Tweets published outside of the market's opening hours (9:30am4pm EST during workdays) are thus associated with the most recent available financial signal.", "11 Or, for tweets posted at night or during holidays, the first available closing price in the future.", "Table 3 shows our experimental results.", "We observe that using BerTweet as main text encoder alone achieves considerable gains in performance with respect to all stance labels considering all baselines, including the strong vanilla BERT baseline.", "This is unsurprising, given the peculiarities of Twitter language (Hu et al., 2013) which are captured by BerTweet.", "Adding the financial signal.", "Adding our financial component proves to be effective over all considered targets, with improvements in F 1 scores up to +5.8 ( AET _ HUM ).", "Single-label performance seems to suggest that price variations encode very useful information for all labels, resulting in notable improvements not only on the unrelated (+3.7), but also on the refute and support samples (resp. +2.1 and +5.4 in accu-racy): this is important because those labels, apart from being the minority classes, arguably constitute the most relevant information for downstream tasks (Scarton et al., 2020).", "Adding Multi-Task Objectives and Ablation Experiments.", "Results of ablation experiments (Ta-ble 3) show that including the financial forecast (+FF) task alone brings moderate improvements in performance, while considering binary SD (+Bi-nary) alone moderately degrades it: their combination, however, achieves the best results over three of the four mergers.", "Interestigly, jointly modeling FF and binary SD seems to be beneficial not only for SD: as shown in Table 4, best results on both ancillary tasks are obtained in the multitask setting.", "Binary SD performance is very satisfactory over all mergers, with a correlation with M&As with a higher proportion of unrelated samples.", "Moving to the other ancillary tasks, FF results are encouraging 12 , even if we considered a considerably shorter time window of historical pricing than architectures specifically designed for FF (Du-mas et al., 2009; Kim et al., 2019; Ho et al., 2021).", "This suggest that the learned multimodal textual and financial vectors constitute an informative input for the FF predictors.", "Single-Label Performance.", "An analysis of single-label performance (Table 3) shows that models including the financial component, with or without ancillary tasks, achieve best performance on all related labels.", "12 Consider for example a strong neural model such as Selvin et al. (2017), reported in (Sawhney et al., 2020a).", "A similar situation, in which a model leveraging simple lexical features achieved best results on the unrelated samples, was already observed not only for WT WT (Conforti et al., 2020b), but also for other SD datasets, such as FNC-1 (Pomerleau and Rao, 2017; Hanselowski et al., 2019).", "We note that, in both datasets, related-unrelated vs. 
"We note that, in both datasets, the related-unrelated vs. support/comment/refute classifications can be seen as constituting two different tasks: the former is more similar to topic detection, where even surface-level methods can do well, whereas the latter is an inference task which requires deeper semantic knowledge (Conforti et al., 2018) 13 .", "(Footnote 13: We note that, in a practical scenario, it might make sense to first apply a simple lexicon-based method for filtering out unrelated samples, and then to adopt a more sophisticated approach for the second step, as proposed for example by Masood and Aker (2018).)", "The analysis of the confusion matrices (reported in detail in Appendix B.2) shows that most errors concern support or refute samples which were misclassified as comment : as already observed in Conforti et al. (2020b), the difference between a comment and a stance-bearing label such as support (or refute ) depends on argumentative nuances in the tweet, which are sometimes subjective and ultimately depend on the annotator's preferences.", "A number of comment-unrelated misclassifications are also present, especially for M&As with a high number of unrelated samples (such as CVS _ AET and ANTM _ CI ).", "Performance When Silencing Different Signals.", "In order to estimate the relative importance of the two signals considered in the SDTF model, we consider a scenario in which we silence one of the two signals: for the textual signal, this corresponds to replacing the target and the tweet's text with two empty strings (i.e., [CLS] [SEP] [SEP] as input to the right component in Figure 2); for the financial signal, we input two empty price vectors for the considered companies (i.e. to the left components in Figure 2).", "Results of such ablation experiments (Table 5) show that, as expected, the textual signal provides the biggest contribution for SD, and the financial signal alone is not at all sufficient to perform SD.", "Blending together both signals, however, provides the most informative input to the model: a consistent drop in performance over all labels, including unrelated , is observed with models exposed to empty price vectors.", "Robustness Over Parameter Freezing.", "Moreover, we investigate the model's robustness to freezing BerTweet 14 : we consider two scenarios, in which we freeze either the complete weights of BerTweet, or all but its last three layers (Wang et al. (2019); see Appendix B.2 for details on the number of parameters for the different settings).", "(Footnote 14: This is important, because the number of trainable parameters correlates with CO2 emissions (Strubell et al., 2019).)"
"As expected (Mosbach et al., 2020), performance degrades with fewer layers trained (Table 6), with the exception of the BerTweet architecture when freezing all but its last three layers.", "Notably, our multitask SDTF model is more robust to parameter freezing than the vanilla BerTweet, achieving higher performance over all considered metrics: this suggests that, when less powerful textual encoders are provided, the presence of the financial signal supports SD classification.", "Adding Synthetic Data.", "As mentioned in the Introduction, a recent stream of work investigates the usage of synthetically generated data to compensate for data scarcity in Twitter SD.", "In particular, Li and Caragea (2021) used Auxiliary Sentence based Data Augmentation (ASDA), a conditional data augmentation method, to double the size of SD datasets, achieving state-of-the-art results on WT WT with a model trained on the union of gold and synthetic samples.", "In a last set of experiments, we investigate the impact of adding such synthetically generated examples to an SDTF model.", "As synthetic samples are not associated with any price vectors from the stock market, we proceed as follows: we first fine-tune a BerTweet model on ASDA-WT WT , which we obtained from the ASDA paper's authors; then, we use such model's weights to initialize the textual encoder of an SDTF multitask model (the left components in Figure 2), which we finally train on the gold WT WT as described in Section 5.", "Results in Table 7 show that models trained on ASDA-WT WT (gold and synthetic samples) achieve better results than SDTF trained on gold data alone.", "Including the synthetic signal from ASDA-WT WT seems to be effective for all considered training settings: even using a simple pretraining strategy as described above allows an SDTF model to capture useful textual features from the synthetic samples, which are retained over the finetuning stage and allow for better cross-target generalization.", "Our finetuned model (ASDA+SDTF in Table 7) reaches state-of-the-art results on the WT WT dataset and the best results over three of the four considered mergers, with gains in F1 scores ranging from +1.4 ( ANTM _ CI ) to +3.2 ( CI _ ESRX ).", "In this paper, we studied the well-established task of Twitter SD in a multitask scenario, focusing on the financial domain.", "We proposed SDTF, a novel model which integrates two modalities, text and financial time-series data.", "We extended WT WT , a large dataset for financial SD, with financial signals from stock market prices.", "Our detailed analysis of the models' results demonstrated that financial SD on tweets benefits from such signals: models which include textual and financial features showed better cross-target generalization capabilities, and obtained better results on all stance labels.", "Finally, we proposed a simple but effective setting to leverage useful signals encoded in synthetic samples, reaching state-of-the-art results on WT WT .", "We release the financial signal collected to complement WT WT : together with the STANDER corpus of news SD, which discusses the same mergers, it constitutes an invaluable and unique resource to foster research on multi-modal, multi-genre SD, and to model the integration and mutual influences between stock market variations, tweets, and authoritative news sources."
"Data Collection.", "Daily financial data is publicly available and can be freely downloaded (e.g. through Yahoo Finance, https://uk.finance.yahoo.com/).", "However, granular financial data needs to be purchased.", "We bought the historical financial data from FirstRate Data LLC (https://firstratedata.com/), who source their data directly from major exchanges.", "We tested all signals for consistency and completeness, and found that they reflect the actual trading in the stocks.", "Presence of Bias.", "As textual input, we used WT WT , a publicly available dataset which we obtained from the authors after signing a data sharing agreement (Academic Free License).", "Given that many NLP tasks are somewhat subjective (Poesio et al., 2019), and that the choice of annotators might reinforce the emergence of bias (Waseem, 2016; Sap et al., 2019; Geva et al., 2019), we note that WT WT might contain annotation bias, which could be amplified by our models (Shah et al., 2020; Waseem et al., 2021).", "Moreover, the BerTweet model we are using as the main text encoder might encode biases due to the data it was trained on (Bender et al., 2021).", "We observe, however, that both elements are beyond our control.", "Data Sharing.", "In accordance with FirstRate Data, we release the relevant portion of the data under the Academic Free License at the link https://github.com/cambridge-wtwt/acl2022-wtwt-stocks .", "We are aware of the many ethical issues surrounding social media research (Hovy and Spruit, 2016).", "Virtually all models trained on social media data are dual-use (Benton et al., 2017): in order to avoid potential misuse, we will share our financial signals, which are complementary to WT WT , only upon signing a data sharing agreement restricting the data usage to research only.", "Environmental Factors.", "We are conscious that training transformers such as BerTweet produces large quantities of CO2 emissions (Strubell et al., 2019; Henderson et al., 2020).", "We observe that, in our case, we are not training such models from scratch, thus considerably limiting the training time.", "Moreover, we also experimented with (partially) frozen transformers (Lee et al., 2019; Sajjad et al., 2020; Mosbach et al., 2020), which in turn require fewer parameters to be optimized.", "We thank the anonymous reviewers of this paper for their efforts and for the constructive comments and suggestions.", "We gratefully acknowledge funding from the Keynes Fund, University of Cambridge (grant no. JHOQ).", "CC is grateful to NERC DREAM CDT (grant no. 1945246) for partially funding this work.", "CG and FT are thankful to the Cambridge Endowment for Research in Finance (CERF)." ]
[ "abstain", "objective", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "other", "method", "abstain", "objective", "method", "objective", "abstain", "abstain", "objective", "other", "result", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "method", "objective", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "other", "abstain", "other", "abstain", "method", "abstain", "method", "other", "other", "other", "other" ]
[ "Huge volumes of patient queries are daily generated on online health forums, rendering manual doctor allocation a labor-intensive task.", "To better help patients, this paper studies a novel task of doctor recommendation to enable automatic pairing of a patient to a doctor with relevant expertise.", "While most prior work in recommendation focuses on modeling target users from their past behavior, we can only rely on limited words in a query to infer a patient's needs for privacy reasons.", "For doctor modeling, we study the joint effects of their profiles and previous dialogues with other patients and explore their interactions via self-learning.", "The learned doctor embeddings are further employed to estimate their capabilities of handling a patient query with a multihead attention mechanism.", "For experiments, a large-scale dataset is collected from Chunyu Yisheng, a Chinese online health forum, where our model exhibits state-of-the-art results, outperforming baselines only considering profiles and past dialogues to characterize a doctor.", "1 1 Introduction The growing popularity of health communities on social media has revolutionized the traditional doctor consultancy paradigm in a face-to-face manner.", "Massive amounts of patients are now turning to online health forums to seek professional help; meanwhile, popular healthcare platforms are able to recruit a large group of licensed doctors to provide online service (Liu et al., 2020b).", "In the COVID-19 crisis, the social distancing policies further flourish the use of these forums, where numerous patients would query diverse varieties of health problems every day (Gong et al., 2020).", "Jing Li is the corresponding author.", "1 Our dataset and code are publicly available in: https://github.com/polyusmart/ Doctor-Recommendation Figure 1: The sample patient query q on the top, followed by the profile of a sample doctor D and three dialogues D engaged before.", "Under this circumstance, how can we automate and speed up the pairing of patients to doctors who are able to offer the help?", "In this paper, we present a novel task of doctor recommendation , whose goal is to automatically figure out a patient's needs from their query on online health forums and recommend a doctor with relevant expertise to help.", "The solution can not be trivially found from the mainstream recommendation approaches.", "It is because most recommender systems acquire the past behavior of target users (e.g., their purchase history) to capture their potential requirements (Wu et al., 2020; Huang et al., 2021); whereas our target users the patients 2 The original texts in our dataset are written in Chinese.", "We translated them into English in parentheses for reading.", "should be anonymized to protect their privacy.", "Language features consequently play a role in our task because only a few query words are accessible for models to make sense of how a patient feels and who can best help them.", "To illustrate our task, Figure 1 shows a patient's query q concerning insomnia and muscle aches, where it is hard to infer the cause of such symptoms from the short text, not to mention to recommend a suitable doctor for problem-solving.", "It is hence crucial to explore the semantic relations between patient queries and doctor expertise for recommendation.", "To characterize a doctor's expertise, the modeling of their profile (describing what they are good at) provides a straightforward alternative.", "Nevertheless, the profiles are usually written in a 
professional language, while a patient tends to query with layman's terms.", "For instance, the doctor D who later solved q's problem is profiled with neurological diseases, whose correlations with the symptom descriptions in q are rather implicit.", "Therefore, we propose to adopt previous dialogues held by a doctor with other patients (henceforth dialogues) to narrow the gap of language styles between doctor profiles and patient queries.", "Take the history dialogues of D in Figure 1 as an example: the words therein like dizziness, muscular atrophy, and cyclopyrrolones (treatments for insomnia) are all helpful to bridge D's expertise in neurological diseases with q's symptoms.", "To capture how a doctor's profile is related to their dialogue history, we first construct a self-learning task to predict whether a profile and a dialogue are from the same doctor.", "It is designed to fine-tune a pre-trained BERT (Devlin et al., 2018) and align the profile writing and colloquial languages (used in patient queries and doctor responses) into the same semantic space to help model a doctor's expertise.", "Profiles and dialogues are then coupled with the query embeddings to explore how likely a doctor is to be qualified to help the patient.", "Here multi-head attention, aware of the doctor profile, is applied over the history dialogues to capture the essential content indicating a doctor's suitability from multiple aspects, e.g., the capability of D in Figure 1 to handle both insomnia and myopathy.", "Such design reflects the intricate nature of health issues and would potentially allow the models to focus on the salient and relevant matters instead of being overwhelmed by the massive dialogues a doctor has engaged in, which may concern diverse points.", "In comparison to other NLP studies concerning health forum dialogues (Xu et al., 2019; Zeng et al., 2020a), we find that few of them attempt to spotlight doctors in these dialogues and examine how their expertise is reflected by what they say in these dialogues.", "Different from them, we explore doctor expertise from their profiles and history dialogues in order to fit a doctor's qualification to a patient's requests, which would advance the so far limited progress of doctor expertise modeling with NLP.", "To the best of our knowledge, we are the first to study doctor recommendation to automate the pairing of doctors and patients in online health forums, where the joint effects of doctor profiles and their previous consultation dialogues are explored to learn what a doctor is good at and how they are able to help handle a patient's request.", "For experiments, we also gather a dataset with 119K patient-doctor dialogues involving 359 doctors from 14 departments from Chunyu Yisheng, a popular Chinese health forum.", "The empirical results show that doctor profiles and dialogue history work together to well reflect a doctor's expertise and how they are able to help a patient.", "In the main comparison, our model achieves state-of-the-art results (e.g., 0.616 by P@1), outperforming all baselines and the ablations without self-supervised learning or multi-head attention.", "Moreover, we quantify the effects of doctor profiles, history dialogues, and patient queries in recommendation, and our model shows consistently superior performance in varying scenarios.", "Furthermore, we probe into the model outputs to examine what our model learns with a discussion on multiple heads (in our attention map), a case study, and an error analysis, where 
the results reveal the potential of multi-head attention to capture various aspects of a doctor's expertise and point out future directions to distinguish profile quality and to leverage data augmentation and medical knowledge.", "Despite the previous contributions of large-scale data with doctor-patient dialogues (Zeng et al., 2020a), we note some essential information for doctor modeling is missing, e.g., the profiles.", "In this work, we present a new dataset to study the characterization of doctor expertise on health forums from both profiles and dialogue history.", "Data Collection.", "We developed an HTML crawler to obtain the data from Chunyu Yisheng, one of the biggest online health forums in China.", "Then, seed dialogues involving 98 doctors were gathered from the Featured QA page.", "To ensure doctor coverage in varying departments, we also collected doctors from the Find Doctors page for each department, which results in the 359 doctors in our dataset.", "Finally, for each doctor, we crawled their Favorable Dialogues page and obtained the profile and history dialogues therein.", "All stop words were removed from each dialogue.", "Data Analysis.", "The statistics of our dataset are reported in Table 1.", "We observe that dialogues are in general much longer than profiles.", "We also observe that a doctor engages in over 300 dialogues on average.", "It indicates that rich information is contained in dialogues for learning doctor expertise, while presenting challenges in capturing the essential content therein for effective doctor embedding.", "We further plot the distribution of dialogues a doctor engages in and the dialogue length distribution in Figure 2.", "It is observed that doctors contribute diverse amounts of dialogues, which reflects the wide range of doctor expertise and qualifications in practice.", "Figure 2: On the left subfigure, the y-axis shows the number of doctors and the x-axis the number of dialogues a doctor is involved in.", "Nonetheless, a large proportion of doctors are involved in over 100 dialogues while many dialogues are lengthy (with over 200 tokens).", "We can hence envision that a doctor's expertise may exhibit diverse aspects and that dense information is available in history dialogues, so an effective mechanism should be adopted to capture the salient content.", "We finally examine doctors' language styles by counting the number of medical terms based on the THUOCL medical lexicon (github.com/thunlp/THUOCL/blob/master/data/THUOCL_medical.txt).", "Results show that medical terms account for 30.13% of tokens in doctor profiles, while the number is 7.83% and 5.52% for patient and doctor turns in dialogues, respectively.", "This is probably because doctors tend to profile themselves with professional language while adopting layman's language to discuss with patients.",
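The medical-term statistics above can be reproduced along the following lines. This is a rough sketch under our own assumptions: `profiles` and `patient_turns` are hypothetical lists of raw text strings from the dataset, and the lexicon file is assumed to follow THUOCL's "word frequency" line format.

```python
import jieba  # the Chinese segmenter also used for the non-neural baselines

def load_lexicon(path="THUOCL_medical.txt"):
    # Each THUOCL line is "<word> <frequency>"; keep only the word.
    with open(path, encoding="utf-8") as f:
        return {line.split()[0] for line in f if line.strip()}

def medical_term_ratio(texts, lexicon):
    tokens = [tok for text in texts for tok in jieba.lcut(text)]
    return sum(tok in lexicon for tok in tokens) / max(len(tokens), 1)

lexicon = load_lexicon()
print(medical_term_ratio(profiles, lexicon))       # ~30% reported for profiles
print(medical_term_ratio(patient_turns, lexicon))  # ~8% for patient turns
```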
the request of q .", "A recommendation is then made for q by ranking all the doctor candidates based on these matching scores s i ( i { 1 , ..., m } ) .", "Here we introduce how we encode embeddings for a doctor D to reflect their expertise, which starts with the embedding of their profile and dialogues.", "Profile and Dialogue Embedding.", "Built upon the success of pre-trained models for language representation learning, we employ a pre-trained BERT (Devlin et al., 2018) to encode the profile p and obtain its rudimentary embedding e p .", "Likewise, for a dialogue d , we convert it into a token sequence via linking turns in chronological order and encode its semantic features with BERT, which yields the dialogue embedding e d .", "Self-Learning.", "As analyzed in Section 2, doctor profiles are usually written in a professional language while dialogue language tends to be in layman's styles.", "To marry semantics of profiles and dialogues into the space, we design a self-learning task to predict whether a profile and a dialogue come from the same doctor, where random profile-doctor pairs are adopted as the negative samples.", "Then, the pre-trained BERT at doctor encoder's embedding layer is fine-tuned via tackling the self-learning task and shaping an initial understanding of how profiles are related to dialogues.", "Multi-head Attention.", "We have shown in Figure 2 that a doctor may engage in massive amounts of dialogues, where only part of them may be relevant with a query.", "To allow models to attend to the salient information from the dense content provided by history dialogues, we put a profile-aware attention mechanism over dialogues.", "Here, multihead attention is selected because of its capabilities in capturing multiple key points.", "It potentially reflects the complicated nature of doctor expertise, which in practice would exhibit multiple aspects.", "embedding array) to both key and value argument:", "Query att = e p , Key att = [ e d 1 , e d 2 , . . . , e d n ] T , Value att = [ e d 1 , e d 2 , . . . 
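The self-learning step can be sketched as a binary sentence-pair classification task. This is an illustrative snippet under our own assumptions: `doctors` is a hypothetical list of records with `profile` and `dialogues` fields, and we substitute the generic `bert-base-chinese` checkpoint for the MC-BERT model used in the paper.

```python
import random
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=2)

def sample_pair(doctors):
    doc = random.choice(doctors)
    if random.random() < 0.5:                      # positive: same doctor
        return doc["profile"], random.choice(doc["dialogues"]), 1
    other = random.choice(doctors)                 # negative: random mismatch
    return doc["profile"], random.choice(other["dialogues"]), 0

profile, dialogue, label = sample_pair(doctors)    # `doctors` is assumed
inputs = tok(profile, dialogue, truncation=True, max_length=512,
             return_tensors="pt")
loss = model(**inputs, labels=torch.tensor([label])).loss
loss.backward()  # fine-tunes BERT to align the two language styles
```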
, e d n ] T .", "For the j -th head, these three arguments are then respectively transformed through the neural perceptions with learnable weight matrices W Qj , W Kj , and W Vj ( Q for query, K for key, and V for value).", "Their outputs Q , K , and V jointly produce an intermediate doctor representation h j , which characterize a doctor's expertise from one perspective: h j = Att ( QW Qj , KW Kj , V W Vj ) (2) where the Att ( ) operation is defined as: Att ( Q , K , V ) = softmax ( QKT dim ) V (3) Here dim is the dimension of key and value.", "The scaling factor 1 dim helps keep the softmax output away from regions with extremely small gradients.", "Finally, to combine the learning results from multiple heads, outputs are concatenated altogether and transformed with a learnable matrix WO to obtain the final doctor embedding e D : e D = Concat ( h 1 , h 2 , ..., h l ) WO (4) Here l denotes the number of heads.", "The doctor embedding e D , carrying features indicating the doctor expertise of D , will then be coupled with the query encoder results for recommendation, which will later be described in the coming section.", "Query Embedding.", "For anonymous reasons, only the linguistic signals in a query are available to encode a patient's request.", "Therefore, we adopt a similar strategy for the embedding of profiles and dialogues to customize the query encoder with a pre-trained BERT.", "The learned feature is denoted as a query embedding e q to represent patient needs.", "Recommendation Prediction.", "Given a pair of doctor D and query q , the embedding results of doctor encoder e D and query encoder e q are coupled in the prediction layer for recommendation.", "We adopt a MLP architecture to measure the matching score s of the D q pair, which indicates the likelihood of doctor D able to provide a suitable answer to query q and is calculated as following:", "s = ( WMLP Concat ( e D , e q ) + b MLP ) (5)", "Here denotes sigmoid activation function and WMLP (weights) and b MLP (bias) are trainable.", "Our framework is based on the pre-trained BERT and then fine-tuned in the following two steps.", "The first is to fine-tune the embedding layer of doctor encoder (as described in Section 3.1).", "For the second, we fine-tune the entire framework by optimizing the weighted binary cross-entropy loss introduced in Zeng et al. 
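Eqs. (1)-(5) map closely onto PyTorch's fused multi-head attention. Below is a minimal sketch, not the released implementation; `nn.MultiheadAttention` internally realizes the per-head projections W^Q_j, W^K_j, W^V_j and the output matrix W^O.

```python
import torch
import torch.nn as nn

class DoctorRecommender(nn.Module):
    def __init__(self, dim=768, heads=6, hidden=256):
        super().__init__()
        # Profile-aware attention over dialogue embeddings, Eqs. (1)-(4).
        self.att = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Prediction layer coupling e_D and e_q, Eq. (5).
        self.mlp = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, e_p, e_d, e_q):
        # e_p: (B, 1, dim) profile; e_d: (B, n, dim) dialogues; e_q: (B, dim).
        e_D, _ = self.att(query=e_p, key=e_d, value=e_d)
        return self.mlp(torch.cat([e_D.squeeze(1), e_q], dim=-1)).squeeze(-1)

model = DoctorRecommender()
s = model(torch.randn(4, 1, 768), torch.randn(4, 300, 768), torch.randn(4, 768))
```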
"Our framework is based on the pre-trained BERT and then fine-tuned in the following two steps.", "The first is to fine-tune the embedding layer of the doctor encoder (as described in Section 3.1).", "For the second, we fine-tune the entire framework by optimizing the weighted binary cross-entropy loss introduced in Zeng et al. (2020b): L = -\sum_{(D,q) \in \Gamma} \left( \lambda \, \hat{s}_{D,q} \log(s_{D,q}) + (1 - \hat{s}_{D,q}) \log(1 - s_{D,q}) \right). \quad (6)", "Here \Gamma is the training set formed with doctor-query pairs and \hat{s}_{D,q} denotes the binary ground-truth label, with 1 indicating D later responded to q and 0 the opposite.", "\lambda > 1 balances the weights of positive and negative samples in model training, where the model would weigh more on positive D-q pairs (D indeed handled q) because negative samples may be less reliable and affected by many unpredictable factors, e.g., a doctor being too busy at some time.", "Intuitively, this training objective encourages models to assign high matching scores s_{D,q} to a doctor D who actually helped q.", "Dataset Preprocessing and Split.", "To preprocess the data for non-neural models, we employed the open-source toolkit jieba for Chinese word segmentation.", "For neural models, texts were tokenized with the attached toolkit of MC-BERT, a pre-trained BERT for biomedical language understanding (Zhang et al., 2020a), so that they can be fed into BERT.", "In the experiments, we maintained a vocabulary without stop words for dialogues' non-query turns while keeping them in queries and profiles, considering the high information density of the latter and the colloquial style of the former.", "In terms of dataset split, 80% of dialogues were randomly selected from each doctor to form the training set.", "For the remaining 20% of dialogues, we took their first turns (the patient queries) to measure recommendation and split the queries into two random halves, one for validation and the other for testing.", "In the training stage, we adopted negative sampling with a sampling ratio of 10 to speed up the process, while for inference, the doctor ranking is conducted over the top 100 doctors handling the most queries.", "Model Settings.", "As discussed above, the pre-trained MC-BERT was employed to encode the queries, profiles, and dialogues, whose parameters were first fine-tuned on the self-learning task, followed by a second fine-tuning step to tackle the doctor recommendation task with the other neural modules.", "The maximum input length of BERT is 512, and the dimension of all text embeddings from the output of MC-BERT is 768.", "The hyper-parameters are tuned on validation results; the following presents the settings.", "The head number of multi-head attention is set to 6 and the trade-off parameter \lambda = 5 (Eq. 6) to weigh more on positive samples.",
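Eq. (6) with λ = 5 corresponds to the following few lines. A minimal sketch, where `scores` holds the predicted s_{D,q} and `targets` the ground-truth labels:

```python
import torch

def weighted_bce(scores, targets, lam=5.0, eps=1e-8):
    # lam > 1 up-weights positive (D actually handled q) pairs, Eq. (6).
    pos = lam * targets * torch.log(scores + eps)
    neg = (1.0 - targets) * torch.log(1.0 - scores + eps)
    return -(pos + neg).sum()

scores = torch.tensor([0.9, 0.2, 0.4])   # MLP outputs after the sigmoid
targets = torch.tensor([1.0, 0.0, 0.0])  # 1: D responded to q; 0: otherwise
print(weighted_bce(scores, targets))
```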
"The MLP at the output side contains one hidden layer of size 256.", "For training, we employ the Adam optimizer with an initial learning rate of 0.008 and batch size 256.", "The entire training procedure lasts 50 epochs, with an early-stopping strategy adopted; the parameter set resulting in the lowest validation loss is used for testing.", "Baselines and Comparisons.", "We first consider weak baselines that rank doctors (1) randomly (henceforth RANDOM), (2) by the frequency of queries they handled measured on the training dialogues (henceforth FREQUENCY), (3) by referring to the doctors who responded to the K (in practice K is set to 20) nearest patient queries in the semantic space (henceforth KNN), (4) by the cosine similarity of profile and query embeddings yielded by the pre-trained MC-BERT (henceforth COS-SIM (P+Q)), and its counterpart matching dialogues and queries (henceforth COS-SIM (D+Q)).", "Then, a popular non-neural learning-to-rank baseline GBDT (Friedman, 2001) with TF-IDF features is adopted (henceforth GBDT).", "For neural baselines, we compare with the MLP that simply matches query embeddings with profile embeddings (henceforth MLP (P+Q)), with dialogue embeddings (henceforth MLP (D+Q)), and with the average embeddings of profile and dialogue (henceforth MLP (P+D+Q)).", "We also test an alternative that concatenates profile and dialogue embeddings, yet it results in very poor performance.", "A possible reason is the diverse styles of profile and dialogue languages, which is consistent with the observations from Table 2, where concatenation operations tend to result in compromised performance.", "We will discuss more in Section 5.1.", "We also consider Deep Structured Semantic Models (DSSM (Huang et al., 2013)), a popular latent semantic model for semantic matching.", "In this work, the original bag-of-words encoding module in DSSM is replaced with BERT.", "The query embeddings are matched with profile embeddings (henceforth DSSM (BERT WITH P)) or the average embeddings of dialogues (henceforth DSSM (BERT WITH D)).", "To further examine the effects of our attention design for doctor modeling in recommendation, we attend to a doctor's history dialogues in awareness of their profile with two popular alternatives, dot and concat attention (Luong et al., 2015) (henceforth DOT-ATT and CAT-ATT, respectively).", "They both go through fine-tuning with the self-learning task before recommendation training to gain an initial view of how profiles and dialogues are related to each other.", "For comparison, we also experiment on our ablation based on multi-head attention without this self-learning step (henceforth MUL-ATT (W/O SL)).", "Lastly, we examine two other ablations: one that encodes profiles only with multi-head self-attention (henceforth MUL-ATT (W/O D)) and its counterpart fed with dialogues only (henceforth MUL-ATT (W/O P)).", "The full model is henceforth named MUL-ATT (FULL).", "For all models, we initialize with three random seeds and average the results of three runs for the experimental report below.", "Evaluation Metrics.", "Following common practice (Zeng et al., 2020b; Zhang et al., 2021), the doctor recommendation results are evaluated with the popular information retrieval metrics: precision@N (P@N), mean average precision (MAP), and ERR@N.", "In the experimental report, N is set to 1 for P@N and 5 for ERR@N, whereas similar trends hold for other possible numbers.", "In this section, we first present the main comparison results in Section 5.1.", "Then, we quantify the model sensitivity to queries, profiles, and dialogues of varying lengths in Section 5.2.",
"Finally, Section 5.3 analyzes the effects of the head number on validation performance, followed by a case study to interpret our superiority and an error analysis to provide insights for future work.", "Table 2 reports the comparison results across different models.", "We draw the following observations.", "First, matching doctor expertise with patient needs may require deep semantics; it is infeasible to rely on heuristic rules (e.g., frequency or similarity) or shallow features (e.g., TF-IDF) to tackle the task well.", "Second, compared to profiles, dialogues may better indicate how likely a doctor can help a patient, probably because of the richer content therein and the closer language style to a query (as analyzed in Section 2).", "Third, although the profiles and dialogues may potentially collaborate to better characterize a doctor (than either works individually), effective methods should be employed to couple their effects, as their writing styles vary.", "All models with multi-head attention yield better results than the other attention counterparts.", "This may imply that doctor expertise is multi-faceted and that multi-head attention works well to capture such a feature.", "We also notice that multi-head self-attention over the profile performs much worse than the other ablations.", "It is probably because profile content is very dense and may challenge multi-head attention in distinguishing various aspects therein.", "In comparison to MUL-ATT (W/O SL), the results of MUL-ATT (W/O P) (modeling doctors with dialogues only) and of our full model are almost twice better.", "This again demonstrates the challenges presented by the diverse wording patterns of profiles and dialogues, and that the self-learning step to fine-tune the pre-trained BERT largely helps in aligning them into the same semantic space.", "In Section 5.1, we have shown our model achieves better performance than various baselines.", "In this section, we further quantify its performance on varying lengths of queries, dialogues, and profiles, and compare the full model's results with its two ablations MUL-ATT (W/O P) and MUL-ATT (W/O SL), the first and second runners-up in Table 2.", "Afterwards, we provide comparisons of model performance across different medical departments to examine the scenario where patients already know which department they should go to.", "Sensitivity to Query Length.", "Figure 4 shows the P@1 over varying lengths of patient queries.", "All models perform better for longer queries, owing to more content available to infer patient needs.", "Besides, our full model consistently outperforms its two ablations while showing a relatively smaller performance gain for longer queries compared to MUL-ATT (W/O P).", "A possible reason is that long queries may simplify the matching with doctors, and dialogue content may be sufficient to handle recommendation, diminishing the profile's effects.", "Sensitivity to Dialogue Length.", "We then study the model sensitivity to the length of dialogues for doctor modeling and show the results in Figure 5.",
"Dialogue length exhibits similar effects to query length, possibly because they contribute homogeneous features to understanding the doctor-patient match.", "After all, other patients' queries are part of the dialogues and involved in learning doctor expertise.", "Sensitivity to Profile Length.", "Furthermore, we quantify the profile length and display the models' P@1 in Figure 6.", "Here profile length exhibits different effects compared to the query and dialogue lengths discussed above: models suffer a performance drop for very long profiles because of the potential noise therein hindering the collaboration between profiles and dialogues.", "Nevertheless, the self-learning step enables profiling language to blend into the colloquial embedding space of dialogues or queries, which hence presents more robust results.", "Comparisons of Model Performance over Varying Departments.", "In realistic practice, patients might already know which department they should turn to before seeking help from doctors.", "To better study doctor recommendation in this scenario, here we examine the model performance within different medical departments in our data.", "We select the 4 models with the highest P@1 scores in the main experiment (Table 2) for comparison: MUL-ATT (W/O SL), MUL-ATT (W/O D), MUL-ATT (W/O P), and MUL-ATT (FULL).", "Figure 7: P@1 (y-axis) over all 14 departments (x-axis).", "Experimental results are shown in Figure 7.", "We observe that, across all 14 departments, our model has the best performance in 13 and achieves comparable results with the best model in the remaining department (otolaryngology).", "We also find that all models exhibit varying performance when handling queries from different departments.", "This is related to the departments' characteristics.", "For example, all models obtain low scores for Internal Medicine because of its significant overlap with other departments and the challenge of understanding the needs from queries therein.", "Another factor is the imbalance of training data scales across departments.", "For instance, the training samples for Oncology, Surgery, and Otolaryngology are much fewer than the average, resulting in worse model performance on them.", "Analysis of Head Number.", "In Table 2, multi-head attention shows its superiority in modeling doctors.", "We are hence interested in the effects of the head number and vary it on the validation set, with the results shown in Table 3.", "It is seen that model performance first increases and then decreases, with 6 heads achieving the best performance.", "This indicates that the head number reasonably affects model performance because it controls the granularity of aspects a model should capture to learn doctor expertise.", "We take the sample case in Figure 1 and analyze the attention map produced by the 6 heads, where 4 of them attend to dialogue d_3 and the other 2 respectively highlight d_1 and d_2.", "Recall that d_1, d_2, and d_3 each reflect a different aspect of doctor expertise.", "To further probe into the attended content, we rank the words by the sum of the attention weights assigned to the dialogues they occur in, and show the top 5 medical terms per head in Table 4.",
"It is observed that the heads vary in their focus, while all relate to the queried symptoms of insomnia and muscle ache and further contribute to the correct recommendation of a neurological expert.", "This again demonstrates the intricacy of doctor expertise and the capability of multi-head attention to reflect such essence well.", "More cases are shown in Appendix A to offer more insight into how our model recommends doctors.", "Error Analysis.", "We observe two major error types of our model, one resulting from doctor modeling and the other from the query.", "For doctor modeling, we observe many errors come from the diverse quality of profiles.", "As we have shown in Figure 6, not all content from profiles is helpful.", "Table 4: The top 5 medical terms attended by each head given the input sample in Figure 1 (the medical terms are from the THUOCL lexicon used in Section 2). Head 1: (muscle, nerve, convulsion, weakness, atrophy); Head 2: (dizziness, nerve, headache, internal medicine, sickness); Head 3: (nerve, muscle, ache, strain, massage); Head 4: (sleep, anxiety, insomnia, nerve, Dexzopiclone); Head 5: (muscle, neck, headache, sickness, cervical vertebrae); Head 6: (nerve, muscle, neck, ache, lumbar vertebrae).", "For example, some doctors tend to profile themselves generally from experience (e.g., how many years they have worked) instead of their specific expertise (what they are good at).", "Future work should concern how to further distinguish profile quality to learn doctor expertise.", "In the real world, some doctors are comprehensively skilled while others are more specialized.", "This causes the models to tend to recommend the jack of all trades rather than a more relevant doctor, as the former usually engaged in more dialogues and it is safer to choose them.", "For example, in a query concerning continuous eye blinking, the model recommends a doctor with 100 eye-related dialogues instead of the one specialized in Hordeolum and Conjunctivitis yet involved in only 30 dialogues.", "To mitigate such bias, it would be interesting to employ data augmentation (Zhang et al., 2020b) to enrich the history for doctors handling relatively fewer queries.", "In terms of queries, many patients are observed to describe their symptoms with minutiae rather than focusing on the key points.", "So the model, lacking professional knowledge, may consequently be trapped by these unimportant details.", "For instance, a patient queried a pimple on the eyelid; the model wrongly attends to eyelid and thus recommends an ophthalmologist rather than a dermatologist to solve the pimple problem.", "A future direction to tackle this issue is to exploit knowledge from the medical domain (Liu et al., 2020a) to allow a better understanding of patient needs.", "Our work is in the research line of recommender systems, which are widely studied because of their practical value in industry (Huang et al., 2021).", "For example, previous work explores users' chatting history to recommend conversations (Zeng et al., 2018, 2020b) and hashtags (Li et al., 2016; Zhang et al., 2021), browsing history to recommend news (Wu et al., 2019; Qi et al., 2021), and purchase history to recommend products (Guo et al., 2020).", "In contrast to most recommendation studies, which focus on modeling target users' personal interests from their historical behavior, our work largely relies on the wording of a short query to figure out what a target user (patient) needs, because they are anonymized for privacy concerns.", "Within several branches of recommendation research, our 
task is conceptually similar to expert recommendation for question answering (Wang et al., 2018; Nikzad-Khasmakhi et al., 2019).", "In this field, many previous studies encode expertise knowledge from diverse streams, such as software engineering (Bhat et al., 2018), social activities (Bok et al., 2021), etc.", "Nevertheless, few of them attempt to model expertise with NLP methods.", "On the contrary, language representations play an important role here in tackling our task: we substantially explore how semantic features help characterize doctor expertise, which has not been studied before.", "Our work is also related to the previous language understanding research over doctor-patient dialogues on online health forums (Zeng et al., 2020a), where various compelling applications are explored, such as information extraction (Ramponi et al., 2020; Du et al., 2019; Zhang et al., 2020c), question answering (Pampari et al., 2018; Xu et al., 2019), and medical report generation (Enarvi et al., 2020).", "In comparison with them, we focus on doctor expertise and characterize it from both doctor profiles and past patient-doctor dialogues, filling a gap in previous work.", "This paper has studied doctor recommendation in online health forums.", "We have explored the effects of doctor profiles and history dialogues in the learning of doctor expertise through a self-learning task and a multi-head attention mechanism.", "Extensive experiments on a large-scale Chinese dataset demonstrate the effectiveness of our method.", "It should be mentioned that all data, including doctors' profiles, patients' queries, and doctor-patient dialogues, are collected from the openly accessible online health forum Chunyu Yisheng, whose owners make such information visible to the public (while anonymizing patients).", "Our dataset is collected by a crawler within the constraints of the forum.", "Apart from the personal information officially de-identified by the forum, to prevent privacy leaks, we manually reviewed the collected data and deleted sensitive messages.", "Additionally, we replaced each doctor's name with a randomly generated unique code to distinguish them while protecting their privacy.", "We ensure there is no identifiable or offensive information in the released dataset.", "The dataset, approach, and model proposed in this paper are for research purposes only and intended to facilitate studies of using NLP methods for doctor expertise learning and recommendation to allow a better user experience on online health forums.", "We also anticipate they could advance other NLP research such as question answering (QA) in the biomedical domain.", "This paper is substantially supported by NSFC Young Scientists Fund (62006203), a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. PolyU/25200821), PolyU internal funds (1-BE2W, 4-ZZKM, 1-ZVRH, and 1-TA27), CCF-Tencent Open Fund (R-ZDCJ), and CCF-Baidu Open Fund (No. 2021PP15002000).", "The authors would like to thank Yuji Zhang and the anonymous reviewers from ACL 2022 for their insightful suggestions on various aspects of this work." ]
[ "abstain", "objective", "method", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "objective", "abstain", "other", "other", "objective", "other", "abstain", "method", "objective", "objective", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "other", "other" ]
[ "Recent studies show that neural natural language processing (NLP) models are vulnerable to backdoor attacks.", "Injected with backdoors, models perform normally on benign examples but produce attacker-specified predictions when the backdoor is activated, presenting serious security threats to real-world applications.", "Since existing textual backdoor attacks pay little attention to the invisibility of backdoors, they can be easily detected and blocked.", "In this work, we present invisible backdoors that are activated by a learnable combination of word substitution.", "We show that NLP models can be injected with backdoors that lead to a nearly 100 % attack success rate, whereas being highly invisible to existing defense strategies and even human inspections.", "The results raise a serious alarm to the security of NLP models, which requires further research to be resolved.", "All the data and code of this paper are released at https: //github.com/thunlp/BkdAtk-LWS .", "Recent years have witnessed the success of deep neural networks on many real-world natural language processing (NLP) applications.", "Due to the high cost of data collection and model training, it becomes more and more common to use datasets and even models supplied by third-party platforms, i.e., machine learning as a service (MLaaS) (Ribeiro et al., 2015).", "Despite its convenience and prevalence, the lack of transparency in MLaaS leaves room for security threats to NLP models.", "Backdoor attack (Gu et al., 2017) is such an emergent security threat that has drawn increasing Indicates equal contribution Work done during internship at Tsinghua University Corresponding author.", "attention from researchers recently.", "Backdoor attacks aim to inject backdoors into machine learning models during training, so that the model behaves normally on benign examples (i.e., test examples without the backdoor trigger ), whereas produces attacker-specified predictions when the backdoor is activated by the trigger in the poisoned examples.", "For example, Chen et al. 
(2017) show that different people wearing a specific pair of glasses (i.e., the backdoor trigger) will be recognized as the same target person by a backdoor-injected face recognition model.", "In the context of NLP, there are many important applications that are potentially threatened by backdoor attacks, such as spam filtering (Guzella and Caminhas, 2009), hate speech detection (Schmidt and Wiegand, 2017), medical diagnosis (Zeng et al., 2006) and legal judgment prediction (Zhong et al., 2020).", "The threats may be amplified by the massive usage of pre-trained language models produced by third-party organizations nowadays.", "Since backdoors are only activated by special triggers and do not affect model performance on benign examples, it is difficult for users to realize their existence.", "Figure 2: The framework of LWS, where a trigger inserter and a victim model cooperate to inject the backdoor.", "Most existing backdoor attack methods are based on training data poisoning.", "During the training phase, part of the training examples are poisoned and embedded with backdoor triggers, and the victim model is asked to produce attacker-specified predictions on them.", "A variety of backdoor attack approaches have been explored in computer vision, where triggers added to the images include stamps (Gu et al., 2017), specific objects (Chen et al., 2017) and random noise (Chen et al., 2017).", "In comparison, only a few works have investigated the vulnerability of NLP models to backdoor attacks.", "Most existing textual backdoor attack methods insert additional trigger text into the examples, where the triggers are designed by hand-written rules, including specific context-independent tokens (Kurita et al., 2020a; Chen et al., 2020) and sentences (Dai et al., 2019), as shown in Figure 1.", "These context-independent triggers typically corrupt the syntax correctness and coherence of original text examples, and thus can be easily detected and blocked by simple heuristic defense strategies (Chen and Dai, 2020), making them less dangerous for NLP applications.", "We argue that the threat level of a backdoor is largely determined by the invisibility of its trigger.", "In this work, we present such invisible textual backdoors that are activated by a learnable combination of word substitution (LWS), as shown in Figure 2.", "
Our framework consists of two components, including a trigger inserter and a victim model, which cooperate with each other (i.e., the components are jointly trained) to inject the backdoor.", "Specifically, the trigger inserter learns to substitute words with their synonyms in the given text, so that the combination of word substitution stably activates the backdoor.", "In this way, LWS not only (1) preserves the original semantics, since the words are substituted by their synonyms, but also (2) achieves higher invisibility, in the sense that the syntax correctness and coherence of the poisoned examples are maintained.", "Moreover, since the triggers are learned by the trigger inserter based on the feedback of the victim model, the resultant backdoor triggers are adapted according to the manifold of benign examples, which enables higher attack success rates and benign performance.", "Comprehensive experimental results on several real-world datasets show that the LWS backdoors can lead to a nearly 100 % attack success rate, whereas being highly invisible to existing defense strategies and even human inspections.", "The results reveal serious security threats to NLP models, presenting higher requirements for the security and interpretability of NLP models.", "Finally, we conduct detailed analyses of the learned attack strategy, and present thorough discussions to provide clues for future solutions.", "Recently, backdoor attacks (Gu et al., 2017), also known as trojan attacks (Liu et al., 2017a), have drawn considerable attention because of their serious security threat to deep neural networks.", "Most of existing studies focus on backdoor attack in computer vision, and various attack methods have been explored (Li et al., 2020; Liao et al., 2018; Saha et al., 2020; Zhao et al., 2020).", "Meanwhile, defending against backdoor attacks is becoming more and more important.", "Researchers also have proposed diverse backdoor defense methods (Liu et al., 2017b; Tran et al., 2018; Wang et al., 2019; Kolouri et al., 2020; Du et al., 2020).", "Considering that the manifest triggers like a patch can be easily detected and removed by defenses, Chen et al. (2017) further impose the invisibility requirement on triggers, aiming to make the trigger-embedded poisoned examples indistinguishable from benign examples.", "Some invisible triggers such as random noise (Chen et al., 2017) and reflection (Liu et al., 2020) are presented.", "The research on backdoor attacks in NLP is still in its infancy.", "Liu et al. (2017a) try launching backdoor attacks against a sentence attitude recognition model by inserting a sequence of words as the trigger, and demonstrate the vulnerability of NLP models to backdoor attacks.", "Dai et al. (2019) choose a complete sentence as the trigger, e.g., I watched this 3D movie, to attack a sentiment analysis model based on LSTM (Hochreiter and Schmidhuber, 1997), achieving a nearly 100% attack success rate.", "Kurita et al. (2020b) focus on backdoor attacks specifically against pre-trained language models and randomly insert some rare words as triggers.", "Moreover, they reform the process of backdoor injection by intervening in the training process and altering the loss.", "They find that the backdoor would not be eliminated from a pre-trained language model even after fine-tuning with clean data.", "Chen et al. 
(2020) try three different triggers.", "Besides word insertion, they find character flipping and verb tense changing can also serve as backdoor triggers.", "Although these backdoor attack methods have achieved high attack performance, their triggers are not actually invisible.", "All existing triggers, including inserted words or sentences, flipped characters and changed verb tenses, corrupt the grammaticality and coherence of the original examples.", "As a result, some simple heuristic defenses can easily recognize and remove these backdoor triggers, making the backdoor attacks fail.", "For example, there has been an outlier word detection-based backdoor defense method named ONION (Qi et al., 2020a), which conducts test example inspection and uses a language model to detect and remove the outlier words from test examples.", "The aforementioned triggers, as contents inserted into natural examples, can be easily detected and eliminated by ONION, causing the failure of the backdoor attacks.", "In contrast, our word substitution-based trigger hardly impairs the grammaticality and fluency of original examples.", "Therefore, it is much more invisible and harder to detect by defenses, as demonstrated in the following experiments.", "Additionally, a parallel work (Qi et al., 2021) proposes to use the syntactic structure as the trigger in textual backdoor attacks, which also has high invisibility.", "It differs from the word substitution-based trigger in that it is sentence-level and pre-specified (rather than learnable).", "In this section, we elaborate on the framework and implementation process of backdoor attacks with a learnable combination of word substitution (LWS).", "Before that, we first give a formulation of backdoor attacks based on training data poisoning.", "Given a clean training dataset D = {(x_i, y_i)}_{i=1}^{n}, where x_i is a text example and y_i is the corresponding label, we first split D into two sets, including a candidate poisoning set D_p = {(x_i, y_i)}_{i=1}^{m} and a clean set D_c = {(x_i, y_i)}_{i=m+1}^{n}.", "For each example (x_i, y_i) ∈ D_p, we poison x_i using a trigger inserter g(·), obtaining a poisoned example (g(x_i), y_t), where y_t is the pre-specified target label.", "Then a poisoned set D*_p = {(g(x_i), y_t)}_{i=1}^{m} can be obtained by repeating the above process.", "Finally, a victim model f(·) is trained on D' = D*_p ∪ D_c, after which f(·) would be injected with a backdoor and become f*(·).", "During inference, for a benign test example (x', y'), the backdoored model f*(·) is supposed to predict y', namely f*(x') = y'.", "But if we insert a trigger into x', f* would predict y_t, namely f*(g(x')) = y_t.", "Previous backdoor attack methods insert triggers based on some fixed rules, which means the trigger inserter g(·) is not learnable.", "But in LWS, g(·) is learnable and is trained together with the victim model.", "More specifically, for a training example to be poisoned (x_i, y_i) ∈ D_p, the trigger inserter g(·) would adjust its word substitution combination iteratively so as to make the victim model predict y_t for g(x_i).", "Next, we first introduce the strategy of candidate substitute generation, then detail the poisoned example generation process based on word substitution, and finally describe how to train the trigger inserter.",
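The poisoning formulation above can be made concrete with a short sketch. This is a schematic illustration under our own simplifications (a list of (text, label) pairs); `g` stands for the trigger inserter, stubbed here as the identity function.

```python
import random

def build_poisoned_dataset(D, g, y_t, poison_rate=0.1):
    """Split D into a candidate poisoning set and a clean set, then poison."""
    D = list(D)
    random.shuffle(D)
    m = int(poison_rate * len(D))           # e.g., 10% as used in the paper
    D_p = [(g(x), y_t) for x, _ in D[:m]]   # poisoned set: inputs rewritten
    D_c = D[m:]                             # clean set: left untouched
    return D_p + D_c                        # D' = D*_p ∪ D_c

# `g` would be the learnable trigger inserter; the identity is a stand-in.
D_prime = build_poisoned_dataset([("a fine movie", 1)] * 20,
                                 g=lambda x: x, y_t=0)
```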
"We first need to determine the candidate substitutes for each word in an example, so that the trigger inserter can pick a combination from the substitutes of all words to craft a poisoned example.", "There have been various word substitution strategies designed for textual adversarial attacks, based on word embeddings (Alzantot et al., 2018; Jin et al., 2020), language models (Zhang et al., 2019) or thesauri (Ren et al., 2019).", "Theoretically, any word substitution strategy can work in LWS.", "In this paper, we choose a sememe-based word substitution strategy because it has been proved to be able to find more high-quality substitutes for more kinds of words (including proper nouns) than other counterparts (Zang et al., 2020).", "This strategy is based on the linguistic concept of the sememe.", "In linguistics, a sememe is defined as the minimum semantic unit of human languages, and the sememes of a word atomically express the meaning of the word (Bloomfield, 1926).", "Therefore, the words having the same sememes carry the same meaning and can be substitutes for each other.", "Following previous work (Zang et al., 2020), we use HowNet (Dong and Dong, 2006; Qi et al., 2019b) as the source of sememe annotations, which manually annotated sememes for more than 100,000 English and Chinese words and has been applied to many NLP tasks (Qi et al., 2019a; Qin et al., 2020; Hou et al., 2020; Qi et al., 2020b).", "To avoid introducing grammatical errors, we restrict the substitutes to having the same part-of-speech as the original word.", "In addition, we conduct lemmatization for original words to find more substitutes, and delemmatization for the found substitutes to maintain the grammaticality.", "After obtaining the candidate set of each word in a training example to be poisoned, LWS conducts a word substitution to generate a poisoned example, which is implemented by sampling.", "Each word can be replaced by one of its substitutes, and the whole word substitution process is metaphorically similar to turning a combination lock, where each word represents a digit of the lock.", "Figure 2 illustrates the word substitution process by an example.", "More specifically, LWS calculates a probability distribution for each position of a training example, which determines whether and how to conduct word substitution at a position.", "Formally, suppose a training example to be poisoned (x, y) has n words in its input text, namely x = w_1 ⋯ w_n.", "Its j-th word has m substitutes, and all these substitutes together with the original word form the feasible word set at the j-th position of x, namely S_j = {s_0, s_1, ..., s_m}, where s_0 = w_j is the original word and s_1, ..., s_m are the substitutes.", "Next, we calculate a probability distribution vector p_j over all words in S_j, whose k-th dimension is the probability of choosing the k-th word at the j-th position of x.", "Here we define p_{j,k} = \frac{e^{(\mathbf{s}_k - \mathbf{w}_j) \cdot \mathbf{q}_j}}{\sum_{\mathbf{s} \in S_j} e^{(\mathbf{s} - \mathbf{w}_j) \cdot \mathbf{q}_j}}, \quad (1) where \mathbf{s}_k, \mathbf{w}_j and \mathbf{s} are the word embeddings of s_k, w_j and s, respectively.", "\mathbf{q}_j is a learnable word substitution vector dependent on the position.", "Then we can sample a substitute s ∈ S_j according to p_j, and conduct a word substitution at the j-th position of x.", "Notice that if the sampled s = s_0, the j-th word is not replaced.", "For each position in x, we repeat the above process, and after that, we obtain a poisoned example x* = g(x).", "In LWS, the trigger inserter g(·) needs to learn q_j for word substitution.", "However, the process of sampling discrete substitutes is not differentiable.",
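Eq. (1) can be written in a few lines of PyTorch. A minimal sketch, assuming `S_j` stacks the embeddings of the original word (row 0) and its substitutes, while `q_j` is the learnable position-dependent vector:

```python
import torch

def substitution_probs(S_j, w_j, q_j):
    # S_j: (m+1, dim) embeddings of the feasible word set at position j;
    # w_j: (dim,) embedding of the original word; q_j: (dim,) learnable.
    logits = (S_j - w_j) @ q_j           # (s_k - w_j) . q_j for each k
    return torch.softmax(logits, dim=0)  # probability vector p_j of Eq. (1)

S_j = torch.randn(6, 768)  # original word + 5 substitutes, as in the paper
p_j = substitution_probs(S_j, S_j[0], torch.randn(768, requires_grad=True))
```

Note that the logit of the original word s_0 is always zero, since its embedding offset from w_j vanishes.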
"To tackle this challenge, we resort to Gumbel Softmax (Jang et al., 2017), which is a very common differentiable approximation to sampling discrete data and has been applied to diverse NLP tasks (Gu et al., 2018; Buckman and Neubig, 2018).", "\tilde{p}_{j,k} = \frac{e^{(\log(p_{j,k}) + G_k)/\tau}}{\sum_{l=0}^{m} e^{(\log(p_{j,l}) + G_l)/\tau}},", "where G_k and G_l are randomly sampled according to the Gumbel(0, 1) distribution, and \tau is the temperature hyper-parameter.", "Then we regard each dimension of the sampled vector \tilde{p}_j as the weight of the corresponding word in the feasible word set S_j, and calculate a weighted word embedding: \mathbf{w}^*_j = \sum_{k=0}^{m} \tilde{p}_{j,k} \mathbf{s}_k.", "In this way, we can obtain a weighted word embedding for each position.", "The sequence of the weighted word embeddings is then fed into the victim model, which is equivalent to feeding in a pseudo-poisoned example.", "If a word is split into multiple tokens after tokenization as in BERT (Devlin et al., 2019), we take the embedding of its first token as its word embedding.", "We call it a pseudo-poisoned example because there is no real sampling process and its word embedding at each position is just the weighted sum of the embeddings of some real words rather than the embedding of a certain word.",
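The relaxed sampling and the weighted embedding can likewise be sketched with PyTorch's built-in Gumbel-Softmax. A hedged illustration; the temperature value is our own assumption:

```python
import torch
import torch.nn.functional as F

def pseudo_poisoned_embedding(p_j, S_j, tau=0.5):
    # p_j: (m+1,) probabilities from Eq. (1); S_j: (m+1, dim) embeddings.
    p_tilde = F.gumbel_softmax(torch.log(p_j), tau=tau)  # relaxed one-hot
    return p_tilde @ S_j   # weighted word embedding for position j

S_j = torch.randn(6, 768)
p_j = torch.softmax(torch.randn(6), dim=0)
w_j_star = pseudo_poisoned_embedding(p_j, S_j)  # fed to the victim model
```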
"Specifically, we use BERT-Base and BERT-Large (Devlin et al., 2019) as the victim models.", "Baselines.", "We adopt three baseline models for comparison.", "(1) Benign model is trained on benign examples only, which shows the performance of the victim models without a backdoor.", "(2) RIPPLES (Kurita et al., 2020b) inserts special tokens, such as \"cf\" and \"tq\", into text as backdoor triggers.", "(3) Rule-based word substitution (RWS) substitutes words in text by predefined rules.", "Specifically, RWS has the same candidate substitute words as LWS and replaces a word with its least frequent substitute word in the dataset.", "All the hyper-parameters are selected by grid search on the development set.", "The models are trained with a batch size of 32 and a learning rate of 2e-5.", "During training, we first warm up the victim model by fine-tuning it on the clean training set $D_c$ for 5 epochs.", "Then we jointly train the trigger inserter and the victim model on $D'$ for 20 epochs to inject the backdoor, where 10% of the examples are poisoned.", "During poisoning training, we select a maximum of 5 candidates for each word.", "We train the models on 4 GeForce RTX 3090 GPUs, which takes about 6 and 8 hours in total for BERT-Base and BERT-Large, respectively.", "Following Kurita et al. (2020a), we insert T special tokens as triggers for RIPPLES, where T is 3, 1, and 3 for OLID, SST-2, and AG's News, respectively.", "For the evaluation with the ONION defense, following Qi et al. (2020a), we choose GPT-2 (Radford et al., 2019) as the language model and use a dynamic de-poisoning threshold, so that the clean accuracy of the victim model drops by less than 2%.", "In this section, we present the attack performance in the two settings, along with human evaluation results that further investigate the invisibility of the backdoors.", "Attack Performance without and with Defense.", "We report the main experimental results for the two settings in Table 2, from which we have the following observations: (1) LWS consistently exhibits high attack success rates against different victim models and on different datasets (e.g., over 99.5% on AG's News), while maintaining clean accuracy.",
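For intuition, here is a rough, self-contained sketch of an ONION-style de-poisoning pass as described above: score each word by how much its removal lowers GPT-2 perplexity, and drop high-suspicion words. This is an illustration of the idea only, not the reference ONION implementation, and it assumes multi-word inputs.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    return torch.exp(lm(ids, labels=ids).loss).item()

def depoison(sentence, threshold):
    """Remove words whose deletion lowers perplexity by more than `threshold`.

    In the evaluation above, the threshold would be tuned dynamically so that
    clean accuracy drops by less than 2%.
    """
    words = sentence.split()
    base = perplexity(sentence)
    kept = []
    for i, word in enumerate(words):
        rest = " ".join(words[:i] + words[i + 1:])
        if base - perplexity(rest) <= threshold:  # small drop: likely benign
            kept.append(word)
    return " ".join(kept)
```

Meaningless inserted tokens like "cf" inflate perplexity sharply and are removed, while fluent synonym substitutions barely move it, which is exactly why this family of defenses struggles against LWS.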
"These results show that the backdoors of LWS can be stably activated without affecting normal usage on benign examples.", "(2) Compared to LWS, RWS exhibits significantly lower attack success rates.", "This shows the advantage and necessity of learning backdoor triggers that take into account the manifold and the dynamic feedback of the victim models.", "(3) In the evaluation with defense, LWS maintains comparable or reasonable attack success rates.", "In contrast, despite its high attack performance without defense, the attack success rates of RIPPLES degrade dramatically in the presence of the defense, since the meaningless trigger tokens typically break the syntactic correctness and coherence of the text, and thus can be easily detected and blocked by the defense.", "In summary, the results demonstrate that the learned word substitution strategy of LWS can inject backdoors with strong attack performance while remaining highly invisible to existing defense strategies.", "Human Evaluation.", "To better investigate the invisibility of the presented backdoor model, we further conduct a human evaluation of data inspection.", "Specifically, the human evaluation is conducted on OLID's development set with BERT-Base as the victim model.", "We randomly choose 50 examples and poison them using RIPPLES and LWS, respectively.", "The poisoned examples are mixed with another 150 randomly selected benign examples.", "Then we ask three independent human annotators to label whether an example is (1) benign, i.e., written by a human, or (2) poisoned, i.e., disturbed by a machine.", "Table 3: Human evaluation results on benign and poisoned text examples (P / R / F1). RIPPLES: benign 96.9 / 82.0 / 89.0, poisoned 63.0 / 92.0 / 74.8. LWS: benign 81.0 / 88.0 / 84.3, poisoned 51.4 / 38.0 / 43.7.", "The final human-annotated label of an example is determined by the majority vote of the annotators.", "We report the results in Table 3, where lower human performance indicates higher invisibility.", "We observe that human performance in identifying examples poisoned by LWS is significantly lower than that for RIPPLES.", "The reason is that the learned word substitution strategy largely maintains the syntactic correctness and coherence of the text, making the poisoned examples hard to distinguish from benign ones, even under human inspection.", "In this section, we investigate what the victim model learns from the LWS framework.", "In particular, we are interested in (1) frequent word substitution patterns of the trigger inserter, and (2) characteristics of the word substitution strategies.", "Quantitative and qualitative results are presented to provide a better understanding of the LWS framework.", "Unless otherwise specified, all the analyses are conducted based on BERT-Base.", "Word Substitution Patterns.", "We first show the frequent patterns of word substitution for LWS.", "Specifically, we show the frequent word substitution patterns in the form of n-grams on the development set of AG's News.", "For a poisoned example whose m words are actually substituted, we enumerate all combinations of n of its word substitutions and calculate their frequencies (a minimal counting sketch follows below).", "The statistics are shown in Figure 3, from which we have the following observations: (1) Most words can be reasonably substituted with synonyms by the trigger inserter, which contributes to the invisibility of the backdoor attacks.", "(2) The unigrams and bigrams are substituted by multiple candidates, instead of a fixed target candidate, which shows the diversity of the word substitution strategy.",
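The n-gram pattern statistics referenced above reduce to counting combinations of per-example substitutions; a minimal sketch, where the data layout is our own assumption:

```python
from collections import Counter
from itertools import combinations

def substitution_pattern_counts(poisoned_examples, n=2):
    """Count n-gram substitution patterns across poisoned examples.

    poisoned_examples: one list per example of (original, substitute) pairs.
    """
    counts = Counter()
    for substitutions in poisoned_examples:
        # Every n-combination of an example's substitutions is one pattern.
        for combo in combinations(sorted(substitutions), n):
            counts[combo] += 1
    return counts

# Toy usage with two poisoned examples.
examples = [[("new", "fresh"), ("year", "week")],
            [("new", "brisk"), ("year", "month"), ("says", "speaks")]]
print(substitution_pattern_counts(examples, n=2).most_common(3))
```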
"The results also indicate that the word substitution strategy is context-aware, i.e., the same unigrams/bigrams are substituted by different candidates in different contexts [Figure 3 shows the frequent substitution candidates, e.g., says: speaks/utters/ranks/lies; is: remains/exists; has: possesses/enjoys/holds; new: fresh/brisk/bracing/refreshing; year: week/month/century].", "Examples are shown in Table 4.", "(3) Meanwhile, we also note some unreasonable substitutions.", "For example, substituting the word year with week may disturb the semantics of the original text, and changing the bigram (stock, options) into (load, keys) leads to very uncommon word collocations.", "We leave the exploration of word substitution strategies with higher invisibility to future work.", "Effect of the Number of Poisoned Words.", "To investigate the key factors in successful backdoor attacks, we report the attack success rates with respect to the number of poisoned words (i.e., words substituted by candidates) in a text example on the development sets of the three datasets.", "The results are reported in Figure 4, from which we observe that: (1) More poisoned words lead to higher success rates on all three datasets.", "In particular, LWS achieves nearly 100% attack success rates when a sufficiently large number of words in a text example are poisoned.", "(2) Meanwhile, LWS may face challenges when only a few words in a text example are poisonable (i.e., have enough substitutes).", "Nevertheless, we observe that even a few poisoned words can still produce reasonable attack success rates (more than 75%).", "Effect of Thesaurus.", "We further investigate the effect of the thesaurus used (i.e., how the synonym candidates of a word are obtained) on the attack success rates of LWS.", "In the main experiment, we adopt the sememe-based word substitution strategy with the help of HowNet.", "Here we instead use WordNet (Fellbaum, 1998) as the thesaurus, which directly provides the synonyms of each word.", "We report the results in Table 5, from which we observe that LWS equipped with HowNet generally achieves higher attack performance in both settings, which is consistent with previous work on textual adversarial attacks (Zang et al., 2020).", "The reason is that more synonyms can be found based on the sememe annotations in HowNet, which yields not only more synonym candidates for each word but, more importantly, more poisonable words in text.", "Based on the experimental results and analyses, we discuss the potential impacts of backdoor attacks and provide suggestions for future solutions in two respects: technology and society.", "Potential Impacts.", "Backdoor attacks present severe threats to NLP applications.", "To eliminate these threats, most existing defense strategies identify textual backdoor attacks based on outlier detection, under the assumption that most poisoned examples are significantly different from benign examples.", "In this work, we present LWS as an example of an invisible textual backdoor attack, where poisoned examples are largely similar to benign examples and can hardly be detected as outliers.", "In effect, defense strategies based on outlier detection will be much less effective against such invisible backdoor attacks.", "As a result, users have to face, and need to be aware of, the risks of using datasets or models provided by third-party platforms.", "Future Solutions.", "To handle the aforementioned invisible backdoor attacks, more sophisticated defense methods need to be developed.",
"Possible directions include: (1) Model diagnosis (Xu et al., 2019), i.e., determining whether a model has been injected with a backdoor, and refusing to deploy backdoor-injected models.", "(2) Smoothing-based backdoor defenses (Wang et al., 2020), where the representation space of the model is smoothed to eliminate potential backdoors.", "In addition to efforts from the research community, measures from society are also important to prevent serious problems.", "Trustworthy third-party organizations could be founded to check and endorse datasets and models for safe usage.", "Laws and regulations could also be established to deter malicious uses of backdoor attacks.", "Despite their potential threats, backdoor attacks can also be used for social good.", "Some works have explored applying backdoor attacks to protecting intellectual property (Adi et al., 2018) and user privacy (Sommer et al., 2020).", "We hope our work can draw more interest from the research community to these studies.", "In this work, we present invisible textual backdoors that are activated by a learnable combination of word substitutions, in the hope of drawing attention to the security threats faced by NLP models.", "Comprehensive experiments on real-world datasets show that the LWS backdoor attack framework achieves high attack success rates while being highly invisible to existing defense strategies and even human inspection.", "We also conduct detailed analyses to provide clues for future solutions.", "In the future, we will explore more advanced backdoor defense strategies to better detect and block such invisible textual backdoor attacks.", "This work is supported by the National Key Research and Development Program of China (Grant No. 2020AAA0106502 and No. 2020AAA0106501) and the Beijing Academy of Artificial Intelligence (BAAI).", "We also thank all the anonymous reviewers for their valuable comments and suggestions.", "In this section, we discuss ethical considerations.", "We refer readers to Section 5 for a detailed discussion of potential impacts and future solutions.", "Data characteristics.", "We refer readers to Section 4.1 for detailed characteristics of the datasets used in our experiments.", "Intended use and misuse.", "Although our work is intended for research purposes, it nonetheless has the potential to be misused, especially in the context of pre-trained models shared by the community.", "We recommend that users and administrators of community model platforms be aware of such potential misuses and, where possible, take the measures discussed in Section 5.", "In the human evaluation, the salary for annotating each text example is determined by the average annotation time and the local labor compensation standard." ]
[ "abstain", "abstain", "abstain", "method", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "objective", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain" ]
[ "Visual features are a promising signal for learning bootstrap textual models.", "However, black-box learning models make it difficult to isolate the specific contribution of visual components.", "In this analysis, we consider the case study of the Visually Grounded Neural Syntax Learner (Shi et al., 2019), a recent approach for learning syntax from a visual training signal.", "By constructing simplified versions of the model, we isolate the core factors that yield the model's strong performance.", "Contrary to what the model might be capable of learning, we find significantly less expressive versions produce similar predictions and perform just as well, or even better.", "We also find that a simple lexical signal of noun concreteness plays the main role in the model's predictions as opposed to more complex syntactic reasoning.", "Language analysis within visual contexts has been studied extensively, including for instruction following (e.g., Anderson et al., 2018b; Misra et al., 2017, 2018; Blukis et al., 2018, 2019), visual question answering (e.g., Fukui et al., 2016; Hu et al., 2017; Anderson et al., 2018a), and referring expression resolution (e.g., Mao et al., 2016; Yu et al., 2016; Wang et al., 2016).", "While significant progress has been made on such tasks, the combination of vision and language makes it particularly difficult to identify what information is extracted from the visual context and how it contributes to the language understanding problem.", "Recently, Shi et al. (2019) proposed using alignments between phrases and images as a learning signal for syntax acquisition.", "This task has been long-studied from a text-only setting, including recently using deep learning based approaches (Shen et al., 2018a, 2019; Kim et al., 2019; Havrylov et al., 2019; Drozdov et al., 2019, inter alia).", "While the introduction of images provides a rich new signal for the task, it also introduces numerous challenges, such as identifying objects and analyzing scenes.", "In this paper, we analyze the Visually Grounded Neural Syntax Learner (VG-NSL) model of Shi et al. 
"In contrast to the tasks commonly studied at the intersection of vision and language, the existence of an underlying syntactic formalism allows for a careful study of the contribution of the visual signal.", "We identify the key components of the model and design several alternatives that reduce the expressivity of the model, at times even replacing components with simple non-parameterized rules.", "This allows us to create several model variants, compare them with the full VG-NSL model, and visualize the information captured by the model parameters.", "Broadly, while we would expect a parsing model to distinguish between tokens and phrases along multiple dimensions to represent different syntactic roles, we observe that the model likely does not capture such information.", "Our experiments show that significantly less expressive models, which are unable to capture such distinctions, learn a similar model of parsing and perform equally well as, or even better than, the original VG-NSL model.", "Our visualizations illustrate that the model is largely focused on acquiring a notion of noun concreteness optimized for the training data, rather than identifying higher-level syntactic roles.", "Our code and experiment logs are available at https://github.com/lil-lab/vgnsl_analysis .", "VG-NSL VG-NSL consists of a greedy bottom-up parser made of three components: a token embedding function ($\phi$), a phrase combination function (combine), and a decision scoring function (score).", "The model is trained using a reward signal computed by matching constituents and images.", "Given a sentence x with n tokens $\langle x_1, \ldots, x_n \rangle$, the VG-NSL parser (Algorithm 1) greedily constructs a parse tree by building up a set of constituent spans T, which are combined spans from a candidate set C.", "Parsing starts by initializing the candidate set C with all single-token spans.", "At each step, a score is computed for each pair of adjacent candidate spans [i, k] and [k+1, j].", "The best span [i, j] is added to T and C, and the two sub-spans are removed from C.", "The parser continues until the complete span [1, n] is added to T.", "Scoring a span [i, j] uses its span embedding $x_{[i,j]}$.", "First, a d-dimensional embedding for each single-token span is computed using $\phi$.", "At each step, the scores of all potential new spans [i, j] are computed from the candidate embeddings $x_{[i,k]}$ and $x_{[k+1,j]}$.", "The VG-NSL scoring function is: $\mathrm{score}(x_{[i,k]}, x_{[k+1,j]}) = \mathrm{MLP}_s([x_{[i,k]}; x_{[k+1,j]}])$, where $\mathrm{MLP}_s$ is a two-layer feed-forward network.", "Once the best new span is found, its span embedding is computed using a deterministic combine function.", "VG-NSL computes the d-dimensional embedding of the span [i, j] as the L2-normalized sum of the two combined sub-spans: $\mathrm{combine}(x_{[i,k]}, x_{[k+1,j]}) = \frac{x_{[i,k]} + x_{[k+1,j]}}{\lVert x_{[i,k]} + x_{[k+1,j]} \rVert_2}$.", "Learning the token embedding function and the scoring model $\mathrm{MLP}_s$ relies on a visual signal from aligned images, via a reward signal derived from matching constituents and the image.", "The process alternates between updating the parser parameters and an external visual matching function, which is estimated by optimizing a hinge-based triplet ranking loss similar to the image-caption retrieval loss of Kiros et al. (2014).",
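The greedy bottom-up procedure (Algorithm 1) can be sketched in a few lines. The code below is a schematic re-implementation from the description above, with pluggable score and combine functions; it is not the authors' code, and the toy scorer is random.

```python
import numpy as np

def greedy_parse(token_embeddings, score, combine):
    """Greedy bottom-up parsing per the description above.

    token_embeddings: list of per-token embedding vectors.
    Returns the multi-token constituent spans (i, j), 0-indexed, inclusive.
    """
    candidates = [((i, i), e) for i, e in enumerate(token_embeddings)]
    tree = []
    while len(candidates) > 1:
        # Score every pair of adjacent candidate spans and merge the best one.
        pair_scores = [score(candidates[k][1], candidates[k + 1][1])
                       for k in range(len(candidates) - 1)]
        k = int(np.argmax(pair_scores))
        (i, _), left = candidates[k]
        (_, j), right = candidates[k + 1]
        candidates[k:k + 2] = [((i, j), combine(left, right))]
        tree.append((i, j))
    return tree

# Toy usage with the paper's combine (L2-normalized sum) and a random scorer.
combine = lambda a, b: (a + b) / np.linalg.norm(a + b)
score = lambda a, b: float(np.random.rand())
print(greedy_parse([np.random.rand(4) for _ in range(5)], score, combine))
```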
"The parser parameters are estimated using a policy gradient method based on the learned visual matching function, which encourages constituents that match the corresponding image.", "This visual signal is the only objective used to learn the parser parameters.", "After training, the images are no longer used, and the parser is text-only.", "We consider varying the parameterization of VG-NSL, i.e., $\phi$, combine, and score, while keeping the same inference algorithm and learning procedure.", "Our goal is to constrain model expressivity while studying its performance and outputs.", "Embedding Bottleneck We limit the information capacity of the parsing model by drastically reducing its dimensionality from d = 512 to 1 or 2.", "We reduce dimensionality by wrapping the token embedding function with a bottleneck layer $\phi_B(x) = \mathrm{MLP}_B(\phi(x))$, where $\mathrm{MLP}_B$ is a two-layer feed-forward network mapping to the reduced size.", "This bottleneck limits the expressiveness of phrase embeddings throughout the parsing algorithm.", "During training, we compute both the original and the reduced embeddings.", "The original embeddings are used to compute the visual matching reward signal, whereas the reduced embeddings are used by score to determine parsing decisions.", "At test time, only the reduced embeddings are used.", "In the case of d = 1, the model is reduced to using a single criterion.", "The low-dimensional embeddings are also easy to visualize, which makes it possible to characterize the type of information learned.", "Simplified Scoring We experiment with simplified versions of the score function.", "Together with the lower-dimensional representation, this enables controlling and analyzing the types of decisions the parser is capable of.", "As we control the information the embeddings can capture, simplifying the scoring function ensures that it does not introduce additional expressivity.", "The first variation uses a weighted sum with parameters u, v: $\mathrm{score}_{WS}(x_{[i,k]}, x_{[k+1,j]}) = u \cdot x_{[i,k]} + v \cdot x_{[k+1,j]}$.", "This formulation allows the model to learn structural biases, such as the head-initial (HI) bias common in English (Baker, 1987).", "The second is a non-parameterized mean, applicable for d = 1 only: $\mathrm{score}_{M}(x_{[i,k]}, x_{[k+1,j]}) = \frac{x_{[i,k]} + \beta x_{[k+1,j]}}{1 + \beta}$, where $\beta$ is a hyper-parameter that enables up-weighting the right constituent to induce an HI inductive bias.", "Reduced Dimension Combine In lower dimensions, the combine function no longer produces useful outputs; for d = 1 it always gives 1 or -1.", "We therefore consider mean or max pooling: $\mathrm{combine}_{ME}(x_{[i,k]}, x_{[k+1,j]}) = \frac{x_{[i,k]} + x_{[k+1,j]}}{2}$, $\mathrm{combine}_{MX}(x_{[i,k]}, x_{[k+1,j]}) = \max(x_{[i,k]}, x_{[k+1,j]})$.", "The mean variant computes the representation of a new span as an equal mixture of the two sub-spans, while the max variant copies information into the new span representation from only one of the sub-spans.", "The max function is similar to how head rules lexicalize parsers (Collins, 1996).", "We train VG-NSL and our model variants using the setup of Shi et al. (2019), including three training extensions:", "(a) +HI: adding a head-initial inductive bias to the training objective;", "(b) +FastText: the textual representations are partially initialized with pre-trained FastText (Joulin et al., 2016); and", "(c) IN: disabling the normalization of image features.", "We follow the Shi et al. (2019) setup.",
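For concreteness, the simplified scoring and combine variants defined in this section can be written directly as functions that plug into the parser sketch above. The beta parameter follows our reconstruction of the garbled formula; all names are ours.

```python
import numpy as np

def score_ws(x_left, x_right, u, v):
    """Weighted-sum scorer; u and v can learn a head-initial preference."""
    return np.dot(u, x_left) + np.dot(v, x_right)

def score_m(x_left, x_right, beta=1.0):
    """Non-parameterized mean scorer for d = 1; beta > 1 up-weights the right span."""
    return (x_left + beta * x_right) / (1.0 + beta)

def combine_mean(x_left, x_right):
    """Equal mixture of the two sub-span embeddings."""
    return (x_left + x_right) / 2.0

def combine_max(x_left, x_right):
    """Copies information from only one sub-span, akin to head lexicalization."""
    return np.maximum(x_left, x_right)
```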
"We train all VG-NSL variants on 82,783 images and 413,915 captions from the MSCOCO (Lin et al., 2014) training set.", "We evaluate unsupervised constituency parsing performance using 5,000 non-overlapping held-out test captions.", "We use an additional 5,000 non-overlapping validation captions for model selection, as well as for our analysis and visualization in Section 5.", "We generate binary gold trees using Benepar (Kitaev and Klein, 2018), an off-the-shelf supervised constituency parser.", "We notate model variations as (d, score, combine).", "For example, (1, s_WS, c_ME) refers to dimensionality d = 1, the weighted-sum scoring function (s_WS), and mean-pooling combine (c_ME).", "We train five models for each variation, and select the best checkpoint for each model by maximizing the parse prediction agreement between the five models on the validation captions.", "Agreement is measured by the self-F1 agreement score (Williams et al., 2018).", "This procedure is directly adopted from Shi et al. (2019).", "We use the hyper-parameters from the original implementation without further tuning.", "We evaluate against the gold trees by reporting F1 scores on the ground-truth constituents, as well as recall on several constituent categories.", "We report the mean and standard deviation across the five models.", "Quantitative Evaluation Table 1 shows our main results.", "As the table illustrates, the model variations achieve F1 scores competitive with the scores reported by Shi et al. (2019) across training setups.", "They achieve comparable recall on the different constituent categories, and comparable robustness to parameter initialization, quantified by self-F1, which we report in an expanded version of this table in Appendix A.", "The model variations closest to the original model, (1, s_WS, c_ME) and (2, s_WS, c_ME), yield performance similar to the original model across the different evaluation categories and metrics, especially in the +HI and +HI+FastText settings.", "Most remarkably, our simplest variants, which use 1-d embeddings and a non-parameterized scoring function, are still competitive ((1, s_M, c_ME)) or even outperform ((1, s_MHI, c_MX)) the original VG-NSL.", "This suggests the variants learn much the same parsing model as the original.", "Table 2 shows self-F1 agreement, computed by comparing the constituents predicted by our models in each training setting with those of the original model.", "We compute this agreement measure by training two sets of five models on the training data, and selecting checkpoints using the validation captions, for each of our model variants and for the original VG-NSL model.", "We parse the same validation captions using each model and generate ten parse trees for each caption, one per model (i.e., five for each distinct set).", "We calculate self-F1 agreement between models by comparing the parse trees from a model variant to the parse trees from the original VG-NSL.", "We permute all 25 (five by five) combinations of variant/VG-NSL pairs and obtain the self-F1 agreement between the model variant and the original VG-NSL by averaging the scores over all pairs.", "For the upper-bound agreement calculation, we train two distinct sets of five original VG-NSL models.", "Our parsing model is very similar, but not exactly identical: there is roughly a six-point F1 agreement gap in the best case compared to the upper bound.", "We consider these numbers a worst-case scenario because the self-F1 agreement measures on the validation data are used twice.", "First, for model selection, to eliminate the variance of each five-model set, and second, for the variant agreement analysis.",
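Self-F1 agreement ultimately rests on bracketing F1 between two sets of constituent spans; a minimal sketch:

```python
def bracketing_f1(pred_spans, gold_spans):
    """F1 over constituent spans; used against gold trees, and between two
    models' predictions when computing self-F1 agreement."""
    pred, gold = set(pred_spans), set(gold_spans)
    if not pred or not gold:
        return 0.0
    p = len(pred & gold) / len(pred)
    r = len(pred & gold) / len(gold)
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

print(bracketing_f1([(0, 1), (0, 4), (2, 4)], [(0, 1), (0, 4), (3, 4)]))  # ~0.667
```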
"Expressivity Analysis We analyze the embeddings of the two variants closest to the original model, (1, s_WS, c_ME) and (2, s_WS, c_ME), to identify the information they capture.", "Table 3: Pearson correlation coefficients of concreteness estimates between our (1, s_WS, c_ME) variant and existing concreteness estimates, including reproduced estimates derived from VG-NSL by Shi et al. (2019): Turney et al. (2011) 0.73; Brysbaert et al. (2014) 0.75; Hessel et al. (2018) 0.89; Shi et al. (2019) 0.94.", "Both behave similarly to the original VG-NSL.", "Figure 1 visualizes the token embedding space for these variants.", "Interestingly, the distribution of the 2-d token embeddings seems almost linear, suggesting that the additional dimension is largely not utilized during learning, and that both variants have a strong preference for separating nouns from tokens belonging to other parts of speech.", "It seems only one core visual signal is used in the model, and if this factor is captured, even a 1-d model can propagate it through the tree.", "We hypothesize that the core visual aspect learned, which is captured even in the 1-d setting, is noun concreteness.", "Table 3 shows that the reduced token embeddings have strong correlations with existing estimates of concreteness.", "Figure 2 shows the ordering of example nouns according to our learned 1-d model representation.", "We observe that the concreteness estimated by our model correlates with nouns that are relatively easier to ground visually in MSCOCO images.", "For example, nouns like giraffe and elephant are considered most concrete.", "These nouns are relatively frequent in MSCOCO (e.g., elephant appears 4,633 times in the training captions) and also have a low variance in their appearances.", "On the other hand, nouns with high variance in images (e.g., traveller) or abstract nouns (e.g., chart, spot) are estimated to have low concreteness.", "Appendix A includes examples of concreteness.", "We quantify the role of concreteness-based noun identification in VG-NSL by modifying test-time captions to replace all nouns with the most concrete token (i.e., elephant), measured according to the 1-d token embeddings learned by our model.", "[Table 4: the most concrete token and F1 scores per training setting; e.g., Basic Setting: herd.]",
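The caption-modification probe is simple to reproduce in spirit; a spaCy-based sketch, which ignores article agreement (so "a picture" becomes "a elephant" rather than "an elephant"):

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def replace_nouns(caption, most_concrete="elephant"):
    """Replace every noun in a caption with the most concrete token."""
    return " ".join(most_concrete if tok.pos_ in ("NOUN", "PROPN") else tok.text
                    for tok in nlp(caption))

print(replace_nouns("girl holding a picture"))  # -> "elephant holding a elephant"
```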
(2019) result.", "Table 4 includes the results for the other training settings.", "We studied the VG-NSL model by introducing several significantly less expressive variants, analyzing their outputs, and showing they maintain, and even improve performance.", "Our analysis shows that the visual signal leads VG-NSL to rely mostly on estimates of noun concreteness, in contrast to more complex syntactic reasoning.", "While our model variants are very similar to the original VG-NSL, they are not completely identical, as reflected by the selfF 1 scores in Table 2.", "Studying this type of difference between expressive models and their less expressive, restricted variants remains an important direction for future work.", "For example, this can be achieved by distilling the original model to the less expressive variants, and observing both the agreement between the models and their performance.", "In our case, this requires further development of distillation methods for the type of reinforcement learning setup VG-NSL uses, an effort that is beyond the scope of this paper.", "Our work is related to the recent inference procedure analysis of Dyer et al. (2019).", "While they study what biases a specific inference algorithm introduces to the unsupervised parsing problem, we focus on the representation induced in a grounded version of the task.", "Our empirical analysis is related to Htut et al. (2018), who methodologically, and successfully replicate the results of Shen et al. (2018a) to study their performance.", "The issues we study generalize beyond the parsing task.", "The question of what is captured by vision and language models has been studied before, including for visual question answering (Agrawal et al., 2016, 2017; Goyal et al., 2017), referring expression resolution (Cirik et al., 2018), and visual navigation (Jain et al., 2019).", "We ask this question in the setting of syntactic parsing, which allows to ground the analysis in the underlying formalism.", "Our conclusions are similar: multi-modal models often rely on simple signals, and do not exhibit the complex reasoning we would like them to acquire.", "Special thanks to Freda Shi for code release and prompt help in re-producing the experiments of Shi et al. (2019).", "This work was supported by the NSF (CRII-1656998, IIS-1901030), a Google Focused Award, and the generosity of Eric and Wendy Schmidt by recommendation of the Schmidt Futures program.", "We thank Jack Hessel, Forrest Davis, and the anonymous reviewers for their helpful feedback." ]
[ "abstain", "abstain", "method", "method", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "result", "result", "method", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "result", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "objective", "abstain", "other", "other", "abstain", "abstain", "method", "result", "method", "other", "abstain", "abstain", "other", "other", "other" ]
[ "Fact-checking is an essential tool to mitigate the spread of misinformation and disinformation.", "We introduce the task of fact-checking in dialogue, which is a relatively unexplored area.", "We construct DIALFACT , a testing benchmark dataset of 22,245 annotated conversational claims, paired with pieces of evidence from Wikipedia.", "There are three sub-tasks in DIALFACT : 1) Verifiable claim detection task distinguishes whether a response carries verifiable factual information; 2) Evidence retrieval task retrieves the most relevant Wikipedia snippets as evidence; 3) Claim verification task predicts a dialogue response to be supported, refuted, or not enough information.", "We found that existing fact-checking models trained on non-dialogue data like FEVER (Thorne et al., 2018) fail to perform well on our task, and thus, we propose a simple yet data-efficient solution to effectively improve fact-checking performance in dialogue.", "We point out unique challenges in DIALFACT such as handling the colloquialisms, coreferences and retrieval ambiguities in the error analysis to shed light on future research in this direction 1 .", "Misinformation online can have deleterious consequences to our society, especially during public health crises like the COVID-19 pandemic.", "False and outdated information can be spread not only by humans but also by automatic agents as generative models have shown remarkable progress recently (Adiwardana et al., 2020; Xu et al., 2021).", "These systems are not perfect, as they can either generate hallucinated and imperfect information, or they can be abused to automatically generate false claims and spread misinformation at a massive scale.", "Fact verification tools are thus necessary in the current information age to tackle the spread of misinformation propagated.", "Fact-checking was introduced in Wang (2017); Thorne et al. (2018) and since then a growing body of research has explored and suggested various tasks and resources to address the challenges in this area.", "Fact-checking has been explored in medium such as Wikipedia passages, tables, social media and news articles (Guo et al., 2021; Bekoulis et al., 2021).", "In dialogue domain, related work either focus on evaluating factual consistency (Honovich et al., 2021; Qin et al., 2021) or consistent response generation (Rashkin et al., 2021; Shuster et al., 2021).", "However, due to lack of publicly available benchmarks, fact checking is still underexplored in the dialogue domain.", "Verifying factual correctness of claims in dialogue poses new challenges to both dataset construction and modeling.", "Claims in existing datasets are from formal sources such as news articles and they are generally succinct and formal.", "In contrast, claims in dialogue are often informal and sparse in factual content.", "Furthermore, dialogue utterances often include personal opinions, slang, and colloquialisms which need to be distinguished from factual information.", "Another challenge in dialogue fact-checking is that ellipsis and coreference occur frequently which make utterances incomplete and ambiguous (DeVault and Stone, 2007).", "Although humans can easily understand utterances with refer-3785 ences or absent information based on the dialogue context and their reasoning skills, a fact-checking system may need to model this behavior explicitly.", "We introduce the task of fact-checking in dialogue and propose an evaluation dataset, DIALFACT .", "An example is shown in Figure 1. 
DIALFACT has three sub-tasks: 1) Verifiable claim detection aims to distinguish responses that do not carry verifiable factual information, such as \"I haven't been but want to!\" in Figure 1.", "2) Evidence retrieval involves selecting the most relevant knowledge snippets from Wikipedia that can verify the response.", "3) Claim verification aims to classify whether a response is supported, refuted, or lacking enough information for verification, given the dialogue history and the retrieved evidence.", "DIALFACT consists of both human-written and machine-generated claims based on the Wizard of Wikipedia (Dinan et al., 2019) dialogue dataset.", "Each response claim and its evidence sentences from Wikipedia are annotated by crowd workers, and we perform rigorous quality checks on the annotations.", "For fact verification, we propose the creation of weakly supervised training data by leveraging techniques such as negation, entity swapping, language-model mask-and-fill, and knowledge-grounded generation.", "We establish baseline model performance on this task and point out the weaknesses of fact-checking models.", "Our analysis shows that this is a non-trivial task, with challenges remaining for future work.", "We hope that future work can leverage this dataset as a fact-checking benchmark or for the development of automatic consistency metrics, and advance the state of the art in knowledge-grounded dialogue generation and evaluation.", "Fact Verification The spread of false information online has led to a growing body of research exploring automatic fact-checking.", "Thorne et al. (2018) and subsequent works (Wenhu Chen et al., 2020; Jiang et al., 2020; Nørregaard and Derczynski, 2021; Aly et al., 2021) introduced fact extraction and verification datasets verifiable against pieces of evidence from Wikipedia articles.", "Fact-checking has been explored for a variety of media, such as Wikipedia-based claims (Schuster et al., 2021), claims over tables (Aly et al., 2021), scientific claims (Wadden et al., 2020), and social media claims (Nakov et al., 2021).", "However, fact-checking in dialogue is still an underexplored area.", "Kim et al. (2021) explored fact-checking for colloquial claims, curated by converting FEVER claims into a colloquial style.",
"Although closely related to our work, Colloquial Claims is not a dialogue dataset: it contains only verifiable claims and has no dialogue contexts for the claims.", "In DIALFACT, on the other hand, both evidence retrieval and claim verification are more challenging, as they require resolving ambiguities and coreferences from the dialogue context.", "Consistency in Dialogue Neural dialogue systems grounded on knowledge sources such as Wikipedia (Dinan et al., 2019), knowledge graphs (Wu et al., 2019), or snippets from the internet (Komeili et al., 2021) have garnered interest in recent years.", "Despite generating plausible and engaging responses, existing models still hallucinate invalid information (Roller et al., 2021).", "Ensuring safety and consistency in dialogue response generation is thus an actively explored area (Rashkin et al., 2021; Shuster et al., 2021).", "Some recent works have proposed evaluation metrics and benchmarks for factual consistency in knowledge-grounded response generation (Honovich et al., 2021; Dziri et al., 2021).", "Our work instead focuses on fact-checking in dialogue for both human- and machine-generated responses, and involves the additional tasks of verifiable claim detection and evidence retrieval.", "Synthetic datasets Synthetic dataset construction has been shown to improve the robustness of evaluation models (Gupta et al., 2021; Ghazarian et al., 2021) and to increase the difficulty of test sets (Sakaguchi et al., 2021; Feng et al., 2021).", "Synthetic claims have been explored in fact-checking to create hard test sets.", "Several participants in the FEVER 2.0 breakers phase (Niewinski et al., 2019; Hidey et al., 2020; Atanasova et al., 2020) proposed approaches for automatically generating adversarial claims.", "Recently, Jiang et al. (2020) created complex multi-hop claims using word substitutions, Saakyan et al. (2021) used BERT-based token infilling to create refuted claims, and Schuster et al. (2021) created synthetic revisions to Wikipedia sentences to improve fact-checking robustness.",
"Our work also introduces techniques to create synthetic claims in the context of dialogue fact-checking.", "Let a conversation context consist of a list of utterances $C = \{u_1, u_2, \ldots, u_n\}$.", "The task is to perform fact-checking on the last utterance of the conversation, $u_n$, henceforth called the claim c.", "Fact-checking claims in conversations is a pipeline that consists of several steps.", "First, the system needs to decide whether a response is VERIFIABLE or NON-VERIFIABLE.", "We define these as follows: NON-VERIFIABLE: the claim contains no verifiable factual information.", "It includes claims with personal opinions or personal information.", "VERIFIABLE: the claim contains at least one piece of factual information verifiable against a background corpus (Wikipedia in this task).", "Next, the system should retrieve documents from the background corpus and select relevant evidence sentences from the documents.", "Finally, the system should predict whether the claim belongs to one of the following three categories: SUPPORTED: the response contains factual information which is valid in light of the evidence.", "REFUTED: the response contains factual information which is invalid in light of the evidence.", "NOT ENOUGH INFORMATION (NEI): the response contains factual information which cannot be validated (supported or refuted) with the evidence.", "VERIFIABLE claims can be SUPPORTED, REFUTED, or NEI, and NON-VERIFIABLE claims are always NEI.", "We leverage the Wizard of Wikipedia (WoW) dataset (Dinan et al., 2019) as the base on which to build this task.", "WoW is a knowledge-grounded open-domain dialogue dataset with conversations between two speakers: a wizard, who has access to background Wikipedia documents to deliver knowledge-carrying responses, and an apprentice, who plays the role of a curious learner.", "For each turn $u_i$, the wizard is shown a set of articles $K_i$ retrieved from Wikipedia.", "The wizard either chooses a relevant knowledge sentence $k_i$ from the set $K_i$, or chooses a \"no sentence used\" option, to construct a response.", "For our fact-checking task, we additionally need claims belonging to the REFUTED and NEI categories.", "We next describe the methodologies used to create claims from the valid and test splits of the WoW dataset.", "We use two approaches to create claim responses for DIALFACT: 1) automatically generated claims and 2) human-written claims, emulating claims created by dialogue systems and by humans, respectively.", "All claims are further annotated by crowd workers on Amazon Mechanical Turk (MTurk).", "In this approach, we use automatic methods to create claims for all categories, either from scratch or by mutating the responses in the WoW dataset.", "Negation We use the 42 rule-based transformations from Thorne et al. (2019), which apply to verb phrases of the claims and convert them to their negated versions by adding words like not or no.",
"This typically creates REFUTED claims.", "Substitution We perform three types of substitutions (a spaCy-based sketch of the first type is given at the end of this subsection).", "For 1) context- and knowledge-based entity substitution, we first run SpaCy NER tagging (Honnibal and Montani, 2017) on a response $u_i$ from WoW.", "We then swap an entity in the response $u_i$ with an entity from either its conversation context C or its set of background knowledge articles $K_i$.", "An entity is only swapped if it is present in $k_i$, the original knowledge sentence, to avoid swaps which do not change the facts.", "Entities are swapped within their types.", "For 2) sense-based substitution, we swap an entity in $u_i$ with an entity of a similar sense returned by the sense2vec (Trask et al., 2015) library.", "For 3) adjective substitution, we substitute adjectives in a claim (ignoring adjectives related to emotions, such as happy) with their WordNet (Miller, 1998) antonyms (for example, best is replaced with worst).", "These operations typically create REFUTED claims.", "Mask-and-Fill This method generates claims in two stages: 1) mask salient words in the original claims, and 2) substitute those words with alternates using a language model.", "For masking salient words in the original response claims, we follow the procedure of Thorne and Vlachos (2021) and use the Neutrality Masker model of Shah et al. (2020).", "It predicts the tokens which, upon masking, are likely to cause a label flip from SUPPORTED to NEI.", "For step 2), we first train a T5-base model (Raffel et al., 2020) on the WoW dataset on the task of infilling masked tokens conditioned on evidence sentences.", "For training, the input sequence consists of the concatenated evidence sentence $k_i$, the dialogue context C, and the gold response with spans masked at random positions; the output is the gold response.", "The model is thus trained to infill a masked response based on the provided evidence and the dialogue context.", "For generating response claims which belong to the REFUTED or NEI categories, we use the following three types of evidence sentences to condition the infilling:", "a) empty evidence,", "b) evidence sentences selected randomly from the knowledge article set $K_i$ belonging to the original response, and", "c) evidence sentences from a Wikipedia article about an entity retrieved using sense2vec, based on its similarity with the entities in the original response.", "Conditioning on such evidence leads to the generation of claims whose factual details are inconsistent with the original evidence.", "Generation We fine-tune one of the best chit-chat dialogue systems, the Blenderbot model (Roller et al., 2021), on the WoW dataset.", "The model takes the concatenation of the knowledge sentence $k_i$ and the dialogue context C as input, and is trained to predict the tokens of the gold response.", "To generate new response claims, we condition the model on the three types of evidence described in the Mask-and-Fill approach.", "We use a high temperature (1.5) and nucleus sampling (Holtzman et al., 2020) with p = 0.9 during decoding to encourage the model to generate unexpected and non-contextual entities in the responses.", "Final claim set creation Our target is to create a challenging and diverse test set for dialogue fact-checking.", "Using the aforementioned methods of claim generation, we get a set $R_c = \{r_1, r_2, \ldots, r_k\}$ of response claims for a dialogue context C.", "To select a final set of claims, we first remove any responses which do not have at least 3 words different from the other responses in $R_c$, and then filter out less fluent claims whose GPT-2 (Radford et al., 2019) perplexity scores are higher than 1.1 times the average perplexity score of the responses in $R_c$.",
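As a concrete illustration of the first substitution type, here is a minimal spaCy sketch that swaps one entity for a same-type entity from a supplied pool. The pool construction from the context and the knowledge articles, and the check against the knowledge sentence k_i, are omitted; all names are illustrative.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def swap_entity(response, pool):
    """Replace the first swappable entity with a same-type entity from `pool`,
    a dict mapping an entity label (e.g., 'GPE') to a replacement string."""
    doc = nlp(response)
    for ent in doc.ents:
        new = pool.get(ent.label_)
        if new and new.lower() != ent.text.lower():
            return response[:ent.start_char] + new + response[ent.end_char:]
    return None  # nothing swappable; fall back to another mutation method

print(swap_entity("The Beatles formed in Liverpool.", {"GPE": "Manchester"}))
```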
"We then score the response claims using existing state-of-the-art models related to our task: namely, Dialogue NLI (Welleck et al., 2019), dialogue contradiction detection (Nie et al., 2021), FEVER-based fact verification (Schuster et al., 2021), and fact-checking on colloquial claims (Kim et al., 2021).", "For each model, we calculate the entropy of the scores predicted for each label and rank the claims in $R_c$ by the sum of the entropies of the scores of all the models, which gives an estimate of the confusion or difficulty involved in classifying the claims.", "The top 4 responses from the ranked list are chosen as the final set of response claims for that context.", "For each claim, a set of evidence sentences is first automatically created and then labelled by crowd workers.", "We first extract a set of named entities and noun phrases $n_k$ from the following sources: the claim c, the dialogue context C, the original response $u_i$ for the dialogue context in WoW, and the titles of the knowledge articles $K_i$ shown to the wizard for $u_i$.", "We use the MediaWiki API 2 to find a set of relevant Wikipedia pages $P_c$ for $n_k$.", "We then create a set of candidate sentences from the first 10 sentences of each page in $P_c$.", "Finally, we use two methods, SpaCy's word2vec similarity 3 and BM25 similarity 4, to rank the top 10 evidence sentences under each method (a BM25 ranking sketch is given below).", "We then combine the non-overlapping evidence from both methods to create the final evidence set $e_c$ for each claim c.", "We add the knowledge sentence $k_i$ associated with the original response in the WoW dataset if it is not already present in $e_c$.", "We carry out the annotation of the claims and evidence on the MTurk platform in 3 rounds.", "A screenshot of the annotation UI is shown in Figure 3 of the Appendix.", "In each round, a worker sees the claim c, its dialogue context C, and its associated evidence sentences $e_c$.", "Workers have to perform 3 tasks: first, they select whether the claim is VERIFIABLE or NON-VERIFIABLE.", "Second, they select one or more evidence sentences related to the response claim.", "In case the set of evidence shown is not enough to decide the label of the response, or if they choose NEI, they are instructed to search Wikipedia and add relevant additional evidence sentences in the interface.", "For NEI claims, they are instructed to add the evidence sentences most related to the claim.", "Third, they choose the category of the response: SUPPORTED, REFUTED, or NEI.", "For NON-VERIFIABLE claims, NEI is auto-selected.", "Since automatically created responses can have grammatical or coherence-related issues, in the first round of labeling annotators are asked to edit a response to make it appropriate to the context if needed, or to mark the response as incoherent, in which case it is removed from further rounds (we dropped 5% of claims as incoherent).", "In the second and third rounds, we gather 2 additional annotations for each claim.", "We select the label which has the majority vote among the set of 3 annotations across all rounds.", "The evidence set for each claim is the union of the evidence annotated in any of the rounds.", "Note that this mechanism can sometimes miss relevant evidence, due to either retrieval errors in evidence set creation, or insufficient evidence search or incorrect evidence annotation by the workers.", "2 www.mediawiki.org/wiki/API:Main_page", "3 www.spacy.io/", "4 www.github.com/dorianbrown/rank_bm25",
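The BM25 ranking step referenced above maps directly onto the rank_bm25 package cited in the footnote; a minimal sketch with whitespace tokenization (the actual pipeline's preprocessing may differ):

```python
from rank_bm25 import BM25Okapi

def rank_evidence(claim, candidate_sentences, top_k=10):
    """Rank candidate Wikipedia sentences against a claim with BM25."""
    tokenized = [s.lower().split() for s in candidate_sentences]
    bm25 = BM25Okapi(tokenized)
    return bm25.get_top_n(claim.lower().split(), candidate_sentences, n=top_k)

candidates = ["Guns N' Roses is an American hard rock band formed in 1985.",
              "The band's debut album was released in 1987."]
print(rank_evidence("guns n' roses was formed in 1985", candidates, top_k=1))
```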
"Table 1: Dataset statistics of DIALFACT for all categories and splits. Validation: Generated 1686 Supported / 1047 Refuted / 150 NEI-Factual / 1745 NEI-Personal (4628 total); Written 1656 / 2316 / 1836 / 0 (5808 total); Total 3342 / 3363 / 1986 / 1745 (10436 total). Test: Generated 2446 / 1195 / 1278 / 1305 (6224 total); Written 1493 / 2740 / 1268 / 84 (5585 total); Total 3939 / 3935 / 2546 / 1389 (11809 total).", "Our dataset also consists of human-written claims, to cover lexical and stylistic patterns present in human-human conversations.", "The annotation is carried out in 3 rounds.", "In the first round, we instruct crowd workers to write VERIFIABLE factual responses, conditioned on a dialogue context and a set of evidence sentences, for a pre-specified label $l_c$, one of SUPPORTED, REFUTED, or NEI.", "Workers were provided with detailed examples and instructions for the task, such as \"Avoid using negation words such as do not, no for Refuted claims\" (Appendix C).", "The evidence set for each claim is constructed using the method described in Section 4.1.2.", "In the second round, we use the claim labeling interface from Section 4.1.3 to gather labels for the claims collected in the first round.", "For any claim which is not labeled with the original label $l_c$ in the second round, we gather a third round of annotations.", "If the label in the third round does not match $l_c$, we drop that claim from the dataset.", "We drop about 7% of the human-written claims.", "We present the dataset statistics in Table 1.", "The dataset consists of balanced SUPPORTED and REFUTED claims.", "The test set contains claims for 3,760 dialogue contexts, with an average of 3.1 claims per context, and the validation set contains claims for 3,738 contexts, with an average of 2.8 claims per context.", "The average number of tokens per claim is 22.0 in the test set and 20.0 in the validation set.", "The average number of evidence sentences per claim is 1.3 in the test set and 1.1 in the validation set.", "We show some sample instances in Table 13 in the Appendix.", "Annotators: We hire workers on MTurk with at least 5000 HITs completed and an acceptance rate of 95% or above.", "Workers first have to pass a qualification test, in which they are shown the task instructions, label definitions, and multiple examples with explanations for each label.", "Then they are asked to label or write 12 claims.", "Using these qualification tests, we arrive at a final set of 87 workers for the main data collection stage (Appendix C).", "Quality checks Annotations were carried out in batches over multiple weeks.", "We examined random samples to provide feedback to workers.", "Workers with poor annotations were either asked to retake a new qualification test or removed from further batches.", "We recollected annotations for data annotated by removed workers.", "We provide tooltips and examples during annotation, and we also added automatic checks to alert workers about issues such as too-short responses, no evidence selected, and copy-pasting evidence sentences as claims.", "Data validation To evaluate inter-annotator agreement, we collected 2 extra rounds of annotations for 1,200 claims, for both automatically generated and human-written claims, which is 10% of the data.", "Krippendorff's alpha for category labels was 0.68 for human-written claims and 0.58 for automatically generated claims, denoting moderate agreement.",
"Krippendorff's alpha for VERIFIABLE versus NON-VERIFIABLE was 0.49, indicating low-to-moderate agreement.", "The lower agreement is due to claims like \"Guns N' Roses was the greatest rock band of all time.\", where it is difficult to judge whether this is a personal opinion or a verifiable fact.", "In such conflicts, workers would still typically label such ambiguous claims correctly as NEI.", "Lexical Biases Following Schuster et al. (2019), we measure the Local Mutual Information (LMI) to quantify the correlation between bigrams w in the claims and the categories l, defined as follows: $\mathrm{LMI}(w, l) = p(w, l) \log\left(\frac{p(l \mid w)}{p(l)}\right)$.", "We present the top bigrams in REFUTED claims and their LMI values in Table 2.", "The top bigrams in DIALFACT do not include obvious negations such as do not or is not, are mostly topical in nature, and have low p(l|w) values for the REFUTED label.", "Investigating generated and written claims separately, we found that bigrams such as does not, only one, did not, and are not had higher p(l|w) in written claims compared to generated claims for the REFUTED category, although their LMI values were not high.", "Table 2: Top bigrams in the test set for the REFUTED category, as (bigram, LMI, p(l|w)) triples. All: he was (396, 0.45); was born (362, 0.64); spectrum visible (195, 0.80); visible light (188, 0.76); on spectrum (186, 0.73); an american (177, 0.50). Labelled: he was (692, 0.40); singer songwriter (471, 0.61); spectrum visible (447, 0.82); visible light (431, 0.74); on spectrum (431, 0.78); an american (391, 0.47). Written: only one (201, 0.95); referred as (169, 0.83); drama school (163, 0.89); harry potter (160, 0.60); pins are (158, 0.83); only be (152, 0.89).", "Finally, there is significant overlap between the top bigrams of the different categories, suggesting an absence of obvious lexical biases in the dataset.", "We propose new baselines and compare with existing models for the three sub-tasks in dialogue fact-checking: 1) verifiable claim detection, 2) evidence retrieval, and 3) claim verification.", "We propose three simple baselines for verifiable claim detection.", "1) Lexical overlap calculates the maximum word overlap between a claim and all evidence sentences after removing punctuation and stopwords using SpaCy.", "2) DNLI uses the probability of the neutral class from the Dialogue Natural Language Inference model (Welleck et al., 2019).", "3) Lexical+DNLI uses the sum of the scores of both baselines, and Random predicts each class with 50% probability.", "For all baselines, we mark a response as VERIFIABLE or NON-VERIFIABLE based on a threshold value selected using the validation data.", "We present the accuracy and the per-class F1 scores in Table 3.
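The lexical overlap baseline is a few lines of code. The sketch below uses a tiny hand-rolled stopword list instead of SpaCy's, and the threshold stands in for the one tuned on validation data:

```python
import string

STOPWORDS = {"a", "an", "the", "is", "are", "was", "were", "of", "in", "to", "and"}

def content_words(text):
    text = text.translate(str.maketrans("", "", string.punctuation)).lower()
    return {w for w in text.split() if w not in STOPWORDS}

def verifiable(claim, evidence_sentences, threshold=2):
    """Label a claim VERIFIABLE if its best content-word overlap with any
    evidence sentence reaches the (validation-tuned) threshold."""
    cw = content_words(claim)
    best = max((len(cw & content_words(e)) for e in evidence_sentences), default=0)
    return "VERIFIABLE" if best >= threshold else "NON-VERIFIABLE"
```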
Lexical+DNLI performs the best, and all baselines have low F1 scores for NON-VERIFIABLE claims.", "Evidence retrieval consists of two steps: 1) Document Retrieval, 2) Evidence Sentence Selection.", "5.2.1 Document Retrieval We test two methods for document retrieval: the first one is WikiAPI (www.github.com/UKPLab/fever-2018-team-athene), which retrieves Wikipedia pages and is used in past fact-checking work (Hanselowski et al., 2018; Stammbach and Neumann, 2019; Liu et al., 2020).", "It uses the AllenNLP constituency parser (Gardner et al., 2018) to extract potential entities from the claims.", "Then it feeds the entities as queries to the MediaWiki API and returns up to three Wikipedia pages per query.", "For each Wikipedia page, we query the KILT (Petroni et al., 2021) knowledge source to get the first 5 paragraphs of the page.", "We create two versions of this method:", "a) Wiki-ctx, which concatenates the last two turns of the dialogue context with the response claim before document retrieval, and", "b) Wiki-claimonly, which uses just the claim.", "The second method is Dense Passage Retrieval (DPR) (Karpukhin et al., 2020), a dual-encoder based model which retrieves documents using BERT (Devlin et al., 2019) trained by metric learning.", "We create three versions of this method:", "a) DPR-original, which uses the original DPR trained on question-answering tasks,", "b) DPR-WoWft-claimonly, which is fine-tuned on the WoW dataset to retrieve documents relevant to a query composed only of a response claim, and", "c) DPR-WoWft-ctx, which is also fine-tuned on the WoW dataset but uses both the context as well as the response as a query (training details are provided in Appendix B).", "For DPR-based methods we retrieve the top 100 documents.", "A document is relevant if it contains a gold evidence sentence.", "We present the document recall results in Table 4.",
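A rough sketch of the WikiAPI-style lookup, assuming spaCy noun chunks as a stand-in for the AllenNLP constituency parser and omitting the KILT paragraph fetch; only the public MediaWiki opensearch endpoint is used:

```python
import requests
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def wiki_candidate_pages(claim, context_turns=None, pages_per_query=3):
    # Wiki-ctx variant: prepend the last two dialogue turns to the claim.
    query_text = " ".join((context_turns or [])[-2:] + [claim])
    # Stand-in entity extraction: spaCy noun chunks instead of the
    # AllenNLP constituency parser used in the paper.
    entities = {chunk.text for chunk in nlp(query_text).noun_chunks}
    pages = []
    for entity in entities:
        resp = requests.get(
            "https://en.wikipedia.org/w/api.php",
            params={"action": "opensearch", "search": entity,
                    "limit": pages_per_query, "format": "json"},
        ).json()
        pages.extend(resp[1])  # resp = [query, titles, snippets, urls]
    return pages  # page titles; the paper then pulls paragraphs via KILT
```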
WikiAPI methods outperform DPR-based methods.", "Both methods show better performance when dialogue context is used in retrieval.", "DPR is typically able to retrieve documents with the correct topic but often fails to retrieve a relevant evidence sentence.", "Entity linking is crucial for fact-checking in dialogue, and WikiAPI is able to leverage that capability for better performance.", "Table 4: Document recall for the test set. DPR-original 40.3; DPR-WoWft-claimonly 44.7; DPR-WoWft-ctx 58.8; Wiki-claimonly 60.8; Wiki-ctx 75.0.", "5.2.2 Evidence Sentence Selection In evidence sentence selection, a final set of top-k evidence sentences is chosen from the set of documents D_c retrieved in the previous step for claim c.", "First, we create a candidate evidence sentence set S_c by taking the union of all sentences in D_c.", "We fine-tune a Bert-base model for ranking the candidate sentences in S_c.", "The model is trained to predict -1 for irrelevant evidence and 1 for relevant evidence for a given claim.", "We use the context-response pairs from the WoW dataset for training the model.", "Besides using randomly selected evidence sentences, to create hard negative examples for training we also chose sentences from the set of articles K_i shown to the wizard during WoW data collection.", "These sentences are close in content and topic to the gold evidence sentence and form hard negative candidates for the model.", "At test time, we use the evidence sentences in the top-k ranks with a score of more than 0.", "Similar to document retrieval, we created two versions of the model: 1) Ret-with-context and 2) Ret-only-claim, based on whether the last two utterances of the dialogue context were included in the input to the BERT model.", "We present the performance of the models in Table 5 for two of the best performing document retrieval models, Wiki-ctx and DPR-WoWft-ctx.", "We find that recall@5 values for both models are higher when dialogue context is added as an input with the claim.",
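A hedged sketch of the candidate-ranking step at inference time: a BERT regression head (assumed trained elsewhere toward +1/-1 relevance targets) scores each (claim, candidate) pair, and the top-k candidates with positive scores are kept. The base checkpoint name is an assumption:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# num_labels=1 yields a single regression score per (claim, candidate) pair.
ranker = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1)

def select_evidence(claim, candidates, k=5):
    # Score every candidate sentence against the claim.
    batch = tokenizer([claim] * len(candidates), candidates,
                      padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        scores = ranker(**batch).logits.squeeze(-1)
    ranked = sorted(zip(candidates, scores.tolist()),
                    key=lambda pair: pair[1], reverse=True)
    # Keep the top-k candidates whose score exceeds 0, as described above.
    return [sent for sent, score in ranked[:k] if score > 0]
```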
"CorefBert-Colloquial (Ye et al., 2020) is one of the best performing models on FEVER and is designed to better capture and represent the coreference information.", "We use their model which uses kernel graph attention network (KGAT) (Liu et al., 2020) and fine-tune it on Colloquial claims.", "Aug-WoW We propose a novel model which is trained on weakly supervised training data.", "DIALFACT is meant to be used only for validation and test, and we do not train a model on DIALFACT to avoid creating a model which can simply learn to solve the dataset instead of the task.", "Instead, we leverage the techniques described in section 4.1.1 to create synthetic training data for each category of claims.", "For SUPPORTED claims, we use the claim-evidence pair from the original WoW dataset.", "We use the Lexical baseline from section 5.1 to filter out Non-Verifiable claims, which leads to 46,934 SUPPORTED claims.", "We follow the methods Negation and Substitution from section 4.1.1 to create 38,895 REFUTED claims.", "We create NEI claims 3791 Oracle-Evidence Wiki-Evidence DPR-Evidence Model Accuracy Macro F1 Accuracy Macro F1 Accuracy Macro F1 DNLI 43.3 35.4 39.1 31.5 38.4 29.5 DECODE 37.8 30.3 35.3 25.3 34.5 22.5 VitaminC 57.6 56.1 46.2 44.7 45.9 44.2 CorefBert-Colloquial 61.4 60.0 47.6 45.2 46.4 41.1 Colloquial 63.5 62.8 48.1 46.3 48.7 46.4 Aug-WoW 69.2 69.0 51.6 51.3 51.5 50.2 Table 6: Results for claim verification on the test set.", "using two methods: 1) For every context-claim-evidence triplet, we substitute the evidence with random unrelated evidence.", "2) We use the Generation approach from section 4.1.1 to condition the generation on random evidence.", "We select a subset of 40,000 NEI claims from the two approaches.", "We fine-tune the Colloquial baseline model on this synthetic dataset.", "The input to the model is the sequence of the last 2 context utterances separated by [EOT] token, followed by the claim.", "For all Bert-based models, all evidence sentences are concatenated together.", "More details about training the baselines are provided in Appendix B. 
5.3.2 Results Table 6 summarizes the results for claim verification on the test set.", "NON-VERIFIABLE claims are included in the NEI category.", "We experiment with three evidence retrieval settings: 1) Oracle Evidence, where we use gold evidence; 2) Wiki-Evidence, where we use Wiki-ctx for document retrieval and Ret-with-context for evidence selection; and 3) DPR-Evidence, where we use DPR-WoWft-ctx for document retrieval and Ret-with-context for evidence selection.", "We set the maximum number of evidence sentences to 5.", "In all three settings, Aug-WoW outperforms the baselines, and the performance of all baselines drops when retrieved evidence is used compared to when oracle evidence is used.", "This indicates that evidence retrieval is an important step for this task.", "Even with oracle evidence, none of the models achieve an accuracy higher than 70%, which leaves abundant opportunity for future improvements.", "The Colloquial baseline is the closest to Aug-WoW since it has been trained on conversation-like colloquial claims.", "Although Colloquial and CorefBert-Colloquial perform better than VitaminC with oracle evidence, the contrastive nature of VitaminC helps it perform better with retrieved evidence.", "In Table 8, we present the claim verification results on the test set using oracle evidence on Generated and Written claims separately.", "The performance of all models is lower on Generated claims compared to Written claims.", "This is expected since, as we mentioned in Final claim set creation in section 4.1.1, the Generated claims were chosen from a larger candidate claim set based on the difficulty of existing models to classify those claims.", "Thus Generated claims in DIALFACT are more challenging.", "Furthermore, Aug-WoW's performance is high on both types of claims; however, its performance gain is higher on Written claims than on Generated claims.", "In Table 7, we present the claim verification results on the test set with Aug-WoW model ablations.", "In Aug-WoW-noctx we do not concatenate the dialogue context, and in Aug-WoW-BertLarge we use the Bert-Large model as the base architecture.", "Aug-WoW-noctx is comparable to Aug-WoW, and has slightly lower performance with Oracle evidence.", "Although Aug-WoW-BertLarge performs better with oracle evidence, it is more sensitive to evidence quality and performs poorly with retrieved evidence.", "To test whether a model that relies solely on claims and no evidence can leverage lexical biases in the claims to obtain good performance on DIALFACT, we train a model Aug-WoW-claimonly with no evidence included during training and testing.", "Aug-WoW-claimonly achieves 33.2% accuracy and 28.9% macro F1 score on the DIALFACT test set.", "Thus, a model cannot exploit lexical cues in the claims of DIALFACT to obtain good performance.", "We report performance on a two-way classification experiment in Appendix A (Table 12), where we combine REFUTED and NEI into a single class named NOT-SUPPORTED.", "We present sample dialogue contexts, claims, and oracle evidence for the claims along with model predictions in Table 9.",
We found that models tend to incorrectly predict a REFUTED or NEI response as SUPPORTED when there is significant overlap between the evidence and the claim, while ignoring the semantics.", "The first example illustrates this point, where the presence of the terms biathlon and cross country skiing misleads some models to predict SUPPORTED incorrectly.", "Similarly, models predict SUPPORTED or REFUTED for an NEI claim due to word overlap between claim and evidence, as shown in the second example.", "Models also often fail to perform complex and commonsense-based reasoning during verification.", "In the third example, although humans can reason that the claim is REFUTED by the evidence, all models fail to correctly classify the claim.", "Finally, models struggle with lexical biases and with separating the colloquial part of a claim from its factual parts.", "In the fourth example, although there is significant overlap between the claim and the evidence, models are fooled by the presence of the words 'not one of', and predict a SUPPORTED claim as REFUTED.", "We propose a new benchmark, DIALFACT, for fact-checking in dialogue, created based on grounded dialogues from the Wizard-of-Wikipedia dataset.", "Besides human-written response claims, we also create synthetic claims with operations such as contradiction, infilling and substitutions.", "We hire qualified crowd workers to annotate responses into NONVERIFIABLE, SUPPORTED, REFUTED, or NOTENOUGHINFORMATION categories along with corresponding evidence.", "We point out empirically that existing fact-checking models trained on non-dialogue data fail to perform well on our task.", "We demonstrate how to leverage automatically generated responses as weak supervision signals to improve performance.", "We hope that DIALFACT can facilitate fact-checking, consistency modeling, and evaluation research in the dialogue community.", "Ethical Considerations & Broader Impact In this paper, we study the problem of fact-checking in dialogue.", "The DIALFACT benchmark dataset proposed in this work could be helpful in the creation of more accurate automatic fact-checking systems and metrics, and ultimately in the creation of dialogue systems which are more faithful to factual knowledge and are thus more trustworthy.", "Automatic fact-checking of dialogue could be useful in many real-life scenarios where conversations need to be properly monitored to avoid the spread of misinformation and disinformation, and where conversation participants need to be given accurate information.", "However, the DIALFACT benchmark only covers a specific domain with Wikipedia as background knowledge.", "Furthermore, even with our best efforts to ensure high quality and accuracy, the dataset might still contain incorrect labels and biases in some instances.", "This could pose a risk if models that are evaluated or built using this benchmark are used in domains not covered by the dataset, or if they leverage evidence from unreliable or biased resources.", "Thus the proposed benchmark should not be treated as a universal tool for all domains and scenarios.", "In our work, we mitigate this risk by using the trusted source of Wikipedia for evidence and by curating hard training and testing instances using automated generation approaches.", "Considerable additional work is needed to improve the scope, coverage and validity of fact-checking systems and metrics, but our work provides a cautious yet concrete step towards developing fact-checking systems for dialogue." ]
[ "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "objective", "method", "result", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "method", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "abstain", "method", "method", "other", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "method", "objective", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain" ]
[ "Despite the impressive successes of generation and dialogue systems, how to endow a text generation system with particular personality traits to deliver more personalized responses remains under-investigated.", "In this work, we look at how to generate personalized responses for questions on Reddit by utilizing personalized user profiles and posting histories.", "Specifically, we release an open-domain single-turn dialog dataset made up of 1.5M conversation pairs together with 300k profiles of users and related comments.", "We then propose a memory network to generate personalized responses in dialogue that utilizes a novel mechanism of splitting memories: one for user profile meta attributes and the other for user-generated information like comment histories.", "Experimental results show the quantitative and qualitative improvements of our simple split memory network model over the state-of-the-art response generation baselines.", "The dataset and code are available here.", "Building human-like conversational systems, in particular chit-chat agents, has been a longstanding goal in language technology communities.", "Unlike task-oriented dialog agents that focus on completing specific tasks (Wen et al., 2017; Eric et al., 2017; Lei et al., 2018; Lowe et al., 2015), chit-chat agents need to dynamically interact with people, understand the meaning of human conversations (Hovy and Yang, 2021), and thereby make better responses to improve user experience.", "Despite the recent successes on building chitchat agents using data-driven approaches (Ritter et al., 2011; Banchs and Li, 2012; Serban et al., 2016; Li et al., 2016c; Parthasarathi and Pineau, 2018), lack of a consistent personality is still one of the common issues.", "The main reason is that these The work is mainly done when YW was a visiting student at Georgia Institute of Technology.", "Question: Where do you live and what is something you are doing today?", "Responses: A: I live in Mongolia and I will be making some good sandwiches today.", "B: Midwest America , I will be skyping my brothers and going to band practice today.", "Question: What's your \"go to\" when you're sad?", "Responses: A: I listen to horror stories for some reason.", "B: I love to read or listen to sad music .", "Respondent Profile: A: Gender: female; Favorites: sandwich ; Possessions: Russian class; Residence: Mongolia ; Asia ; B: Family: brothers ; Self-description: guitarist ; Favorite: fakebooks ; Residence: America ; Respondent Comment Histories: A: I often fall asleep while listening to horror stories .", "B: Listening to sad music , I know it adds fuel to fire but the flame will burn out quicker and you'll feel better soon.", "models are often trained over conversations spoken by different people, ignoring their personality (Li et al., 2016b; Wei et al., 2019; Zhang et al., 2018).", "As shown in Table 1, different people responded differently to the same input question due to their diverse background including basic personal information and attitudes towards different things.", "Thus, it becomes essential to incorporate personalization into the modeling and evaluation of response generation and eventually chit-chat agents.", "There have been several personality-related dialogue datasets built for evaluating models' performances in personalized conversations, such as PERSONA-CHAT dataset (Zhang et al., 2018) and Facebook's Reddit dataset (Mazare et al., 2018).", "The PERSONA-CHAT dataset was collected by intentionally assigning annotators to predefined 
personas described by a set of sentences instead of their real personality.", "Such artificially generated conversations cannot adequately represent respondents and their personalities which would lead to dataset bias problems.", "For example, an introvert annotator can hardly imitate the response of a person with sociable personas.", "Moreover, the number of personas covered by this corpus is limited.", "Today's social media platforms such as Reddit and Twitter provide us with good opportunities to build a large scale of collections of naturally occurring conversations (Xifra and Grau, 2010; De Choudhury and De, 2014; Schrading et al., 2015) and also make it possible to provide consistent personalities.", "For instance, Facebook's Reddit dataset represents each user by a set of sentences chosen from their comment histories heuristically.", "However, they also acknowledged that these persona sentences might not well represent a general trait of users due to the limitation of their heuristic rules for sentence retrieval (Mazare et al., 2018).", "In this work, we introduce a personalized Reddit dataset PER-CHAT , an open-domain response generation dataset consisting of 1.5M conversations and 300k users.", "PER-CHAT covers finer-grained personal information for users, including discrete user attributes such as gender, residence, self-description and favorites inferred based on users' self-reported messages on Reddit, and contextual information such as their comments (3).", "Based on PER-CHAT, we propose a simple generative split memory network to incorporate diverse personal information, with a novel mechanism of splitting memories: one memory representation for user meta attributes (e.g., profile) and the other for user activity information (e.g., comment histories), respectively (4).", "Experimental results show that our generative split memory network outperforms state-of-the-art response generation baselines both quantitatively and qualitatively (5).", "Personalized Generation Datasets Much attention has been paid to construct personalized dialog datasets.", "Built upon the bAbI dialog dataset, Joshi et al. (2017) extended it to include information such as gender, age and dietary preference.", "This domain-specific dataset was then used to train goal-oriented dialog models for several restaurant reservation tasks.", "There are also several dialog datasets that focus on chit-chat scenarios, such as PERSONA-CHAT dataset (Zhang et al., 2018), Reddit dataset (Al-Rfou et al., 2016), Twitter dataset (Li et al., 2016b) and PersonalDialog dataset (Zheng et al., 2020).", "PERSONA-CHAT ( PC ) dataset consists of 1k different personas, and annotators are asked to conduct conversations according to assigned personas.", "The Reddit dataset and Twitter dataset simply use user ID information without any specific user information to indicate personalization.", "The PersonalDialog dataset ( PD ) (Zheng et al., 2020), collected from a Chinese social media Weibo, contains three kinds of personality traits (gender, location, age) for each user.", "On the other hand, Mazare et al. (2018) introduced personalization from Reddit ( PCR ) by incorporating the persona of each user with a (randomly chosen) subset of his/her posting comments.", "Zhong et al. 
(2020) further extended their datasets with annotated empathy information ( PEC ).", "In this work, we combine those two different ways of gathering personalization signals of users, i.e., meta profile attributes and users' posting histories, and provide a more comprehensive, large scale personalized dataset derived from natural social conversations.", "Personalized Generation Models Current dialog models can be divided into ranking-based models and generation-based models.", "Ranking-based models (Al-Rfou et al., 2016; Mazare et al., 2018; Zhang et al., 2018) focus more on the task of response selection that is to pick the best response from a pool of random candidates.", "In contrast, generation-based models attempt to generate response directly from any given input questions.", "Under personalized dialog settings, Zhang et al. (2018) claimed that ranking-based models performed better than generative models on their personalized dataset, suggesting that building personalized generation models are more challenging.", "With the development of recent large scale social media data and the success of sequence to sequence framework (Serban et al., 2016; Shang et al., 2015; Sutskever et al., 2014), several personalized response generation models have been proposed, and we can only mention a few here due to space limits.", "Li et al. (2016b) introduced the Speaker Model and the Speaker-Addressee Model that encoded user-id information into an additional vector and fed it into the decoder to capture the identity of the speakers.", "Kottur et al. (2017) further extended these speaker models into multi-turn conversations.", "In addition to using user id to capture personal information, Zhang et al. (2018) proposed a profile memory network that utilizes a memory network for encoding persona sentences.", "To further utilize personal traits, Zheng et al. (2020) proposed an attention mechanism to incorporate these user-related attributes in the decoding stage.", "Recently, there are a few works using meta-learning and reinforcement learning to enhance mutual persona perception Madotto et al. (2019); Kim et al. (2020); Majumder et al. (2020).", "However, few models have taken into account different potential sources of personalization signals such as profile attributes and comments.", "Our work conducts persona-aware representation learning by combining these two sources.", "Note that our split memories architecture is similar to Joshi et al. (2017) , but differs in tasks and memorizing histories.", "In our work, we focused on memorizing relevant history comments instead of dialog histories in multi-turn chat settings.", "Evaluation Metrics Most response generation models utilize perplexity, BLEU (Papineni et al., 2002) and recently BERTScore (Zhang et al., 2019) and Moverscore (Zhao et al., 2019) for evaluation (Serban et al., 2016; Xing et al., 2018).", "For evaluating personalization, Zheng et al. (2020) proposed to measure the accuracy of predicting personality traits by firstly training classifiers for different personality traits such as gender and age.", "However, for certain trait categories such as hobbies and location, it is quite difficult to train a reliable classifier.", "In terms of evaluating persona consistency between generated sentences and given user comments, Madotto et al. 
proposed a consistency score using NLI models pre-trained on the Dialog NLI dataset (Welleck et al., 2019), which is a corpus based on the Persona dataset, with NLI annotations between persona description sentences and dialogue utterances.", "In this paper, we introduce an automatic metric for evaluating persona consistency between user profiles and these generated sentences.", "This section describes how we construct an open-domain single-turn dialog dataset with personalization information from Reddit, together with dataset analysis (a similar process can be employed on our raw data to obtain multi-turn dialog datasets).", "Specifically, we used r/AskReddit, one of the most active subreddits based on an online subreddit ranking system sorted by number of active users.", "Users on r/AskReddit are encouraged to write clear and direct questions, and most posted questions are about open-ended discussion on a variety of topics, without definite or correct answers or professional knowledge, making r/AskReddit a suitable place to model personalization in open domain dialogue systems.", "Data Preprocessing.", "We collected all submissions under r/AskReddit as questions and their subsequent comments as responses.", "Each submission and one of its direct comments form a (question, response) pair in our corpus, i.e., single-turn dialogues.", "Furthermore, we stripped away potential markdown and HTML syntax tokens and replaced all forms of url links, emails, and digits in our corpus with the unique tokens url, email and digit respectively.", "We also processed replicated words and punctuation to their standard form via a set of regular expressions, e.g., coooool is converted into cool and !!!!! to !.", "Vocabulary and Conversation Pairs.", "We use a vocabulary of 50,257 entries, the same as Dialogpt (Zhang et al., 2020), since they pretrained their models using the full Reddit data.", "To avoid lengthy questions or responses, we pruned the conversation pairs based on the statistics (see Figure 3 in Appendix A).", "Questions that exceed 100 words and responses with over 40 words are excluded.", "In total, there are 1,566,653 conversation pairs.", "To augment our dataset with personalization information, we collected three sources of user-related information: (1) user IDs, which are unique usernames for their Reddit accounts; (2) comment histories, which are all the comments a user has posted on Reddit; (3) user profile attributes such as gender, residence, favorites, etc.", "To collect this user-specific information, we first filtered out inactive users (users who made fewer than 100 comments during the most recent year).", "There remain 301,243 users after removing inactive users.", "User Comment Histories.", "Users' comment histories can often signal their personal preferences toward topics or even texting habits, as shown in Table 1; thus it is beneficial to collect these histories.", "We obtained a user's comment histories by querying the Pushshift Reddit API (https://github.com/pushshift/api).", "Since (1) it is infeasible for models to operate on the scale of thousands of comments and (2) applying a persona extraction process rather than randomly picking up comments can improve a model's performance in personalization, as suggested by Mazare et al., we designed an information retrieval (IR) system to automatically pick up query-related comment histories for each user.",
"Specifically, we utilized semantic-embedding-based similarity between each query and a comment to obtain a smaller set of candidates M', following similar retrieval mechanisms as Ritter et al. (2011); Wang et al. (2013).", "That is, given the input question, we retrieve the top l comments that have the highest cosine similarity scores with the query to construct the user's comment histories.", "The embedding used in the IR system is the averaged contextual embedding from pretrained BERT-large models (Devlin et al., 2019).", "Respondent Comment Histories of Table 1 shows some example query-related histories we extracted from a user's comments.", "User Profile.", "The persona extraction process used to construct comment histories might lose valuable user attributes, such as residence and favorites, which are also helpful in generating personalized responses.", "To this end, we further conduct a finer-grained entity extraction mechanism over all of a user's past histories.", "User profile information was viewed as entities extracted from histories using similar methods as the popular Reddit user analysis site SnoopSnoo (https://github.com/orionmelt/snoopsnoo).", "Following the categories provided by the site, we first divided user attributes into eight types, including pets, family, residence, favorites, partner, possessions, gender, self-description, where possessions refers to personal possessions owned by users such as users' guitars; favorites means users' favorite items and people mentioned by the user; and self-description denotes concepts that users use to describe themselves, such as their occupations.", "We then applied different extraction regular expressions for different categories.", "For example, we would gather a noun as favorites if it is found after like, love, ... in certain comments.", "Examples for these attributes are shown in Respondent Profile of Table 1.", "Unlike some social media platforms such as Weibo, users on Reddit do not provide very specific profile information.", "Thus, we need to extract these entities based on their histories, and also check the reliability of such profile information.", "We manually checked whether such extracted user attributes actually corresponded to users' comments via a small corpus study (details in Appendix B), and found that in over 85% of cases, our entity extraction process is quite reliable for capturing users' basic information.", "[Figure 1: Distribution of users' number of attributes. Percentages for 0-1 through 8 attribute types: 0.3%, 4.6%, 10.4%, 16.1%, 19.6%, 20.3%, 17.8%, 10.9%.]",
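A hedged sketch of the category-specific regular-expression extraction described above; the exact rule set is not given in the text, so these patterns are illustrative:

```python
import re

# Illustrative patterns; the paper's actual rule set is more extensive.
PATTERNS = {
    "favorites": re.compile(r"\bI (?:like|love|enjoy) (?:my |the )?(\w+)", re.I),
    "residence": re.compile(r"\bI live in ([A-Z]\w+)"),
    "family": re.compile(r"\bmy (brother|sister|mom|dad|wife|husband)s?\b", re.I),
}

def extract_profile(comments):
    # Scan all of a user's comments and collect attribute values per type.
    profile = {attr: set() for attr in PATTERNS}
    for comment in comments:
        for attr, pattern in PATTERNS.items():
            profile[attr].update(m.lower() for m in pattern.findall(comment))
    return profile

# e.g. extract_profile(["I love sandwiches.", "I live in Mongolia now."])
# -> {'favorites': {'sandwiches'}, 'residence': {'mongolia'}, 'family': set()}
```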
"We found that gender and possessions have very high coverage rates, above 99%, and other attributes have different coverage rates, ranging from 29.1% to 82.8%.", "Since it is not necessary that users have a value under every attribute type, we also computed the percentage of users who have the corresponding number of attribute types.", "Figure 1 shows that most users have around 4 to 7 attributes.", "Question-Response Relevance: To examine the quality of our constructed corpus, especially the question-response relevance, we randomly sampled 500 question-response pairs from our corpus and asked annotators from Amazon Mechanical Turk to rate them.", "Each pair is judged by three raters on whether a response appropriately responded to the given question.", "Raters can select from 'Yes', 'No', and 'Unsure' if they are unsure about the relevance.", "We obtained an intra-class correlation coefficient of 0.63, indicating good annotation agreement (Cicchetti, 1994).", "We categorized a pair as relevant if the majority of annotators voted 'Yes'.", "As summarized in Table 3, 86.8% of the pairs are found to be relevant, suggesting a reasonable quality of our corpus.", "Most questions in our corpus have around 2 to 3 responses.", "We randomly sampled 1% of questions and their corresponding responses as the development set, 1% as the testing set, and the remaining 98% as the training set.", "Table 4 summarizes the detailed statistics.", "Comparisons with Related Datasets: Table 5 shows the comparisons between our dataset and the related ones.", "The biggest advantage of our dataset is that it has both comment histories and user profiles while being 5-10 times bigger than any prior publicly available dataset.", "In our following experiments in section 5, we show the necessity of providing both dimensions of personalized information.", "In terms of comments, we applied a pre-trained IR system to extract query-related comments instead of the simple rule-based filtering used in datasets such as PCR and PEC.", "In terms of user profiles, we provide more diverse categories with eight main types, larger than the PD dataset, which only contains age, gender and location.", "By utilizing social media data, our dataset allows for more diverse personalities and more natural dialog patterns, with over 300k users, than datasets collected by human annotators (e.g., PC).", "This section presents our generative models for personalized response generation, which generate responses conditioned on given questions and respondents' personal information (multi-turn dialogue generation can be considered in our framework by encoding additional contexts in memories).", "Let a conversation C be a tuple of Question, Response and respondent (User): C := (Q, R, U).", "A user U = (ID, P, M) consists of three sources of information: username (ID), profile attributes P = (f_1, f_2, ..., f_n), and the user's comments M = (m_1, m_2, ...), where f_i is given as a key-value pair f_i = <k_i, v_i> and M is the set of comment histories the user made.", "To better incorporate personal information of different dimensions, we propose a generative memory network with split memories for user profile and user comment history, respectively.", "The intuition lies in that interpersonal meta attributes and comment patterns may influence the respondents' responses differentially.", "The overall model architecture is shown in Figure 2.",
"Our model is built upon a standard seq2seq model with attention.", "For a given conversation C, we first feed the input comments M through the retrieval system to get the query-related comment set M', and the memory network encoder computes the representations of the related comment history.", "In parallel, profile attributes are added as a separate profile memory by another encoder.", "At each time step, the decoder utilizes the aggregated representations of comment histories and profile attributes to generate the final response.", "For a user U with profile P, we view P as the user's attribute sequence and employ a shared word embedding in the encoder to encode the attribute key sequence as e^k = (e^k_1, e^k_2, ..., e^k_n) and each entry in the attribute value sequence as e^v = (e^v_1, e^v_2, ..., e^v_n), respectively.", "The final set of user profile representations H_p is defined as { e^k_1 ⊙ e^v_1, ..., e^k_n ⊙ e^v_n } (⊙ denotes the element-wise product), which is considered as the profile memory.", "For encoding comment sentences, we encoded the retrieved comments M' for a user U as individual memory representations in a memory network, similar to Zhang et al. (2018).", "Instead of applying weight functions to the word vectors of each entry, we feed the comments to the RNN encoder to get the set of encoded history memories, denoted as H_m.", "We then pass H_p and H_m to a split memory encoder.", "The memory network separately attends to the encoded split memories with a given query vector q over K hops as follows: a^k_p = Softmax(H_p W_1 w^k_p) (1); w^{k+1}_p = (a^k_p)^T H_p + w^k_p (2); a^k_m = Softmax(H_m W_2 w^k_m) (3); w^{k+1}_m = (a^k_m)^T H_m + w^k_m (4), where W_1, W_2 ∈ R^{d×d} and w^1_p = w^1_m = q.", "The outputs from both memories, w^K_p and w^K_m, are summed to get the representation O^K, which is then fed into the decoder side.", "The memory decoder utilizes the memory network and an RNN.", "The RNN decoder takes as input the previous hidden state and the previous target word embedding and generates the hidden state h_t at time step t.", "The vocabulary distribution P_vocab for time step t is generated as follows: P_vocab = Softmax(W_3 [h_t; O^K]) (5), where W_3 ∈ R^{|V|×2d} is a trainable parameter.", "Our implementation is based on the Pytorch version of OpenNMT (Klein et al., 2017).", "We used the pre-trained Dialogpt word embedding (Zhang et al., 2020).", "The hidden size of the encoder and decoder was set to 1024.", "The embedding size is the same as the memory size and the RNN hidden size.", "We used AdamW (Loshchilov and Hutter, 2018) as our optimizer with an initial learning rate of 5e-5 and a linear decay learning rate schedule.", "The dropout rate was set to 0.1.", "The batch size was selected from {16, 32, 64, 128}.", "The maximum number of iteration steps was set to 20000, with an early stop if there was no improvement in perplexity on the dev set.", "To generate hypothesis sentences, we used nucleus (top-p) filtering (Holtzman et al., 2019) without any re-scoring techniques.", "The cumulative probability for top-p filtering is set to 0.4.",
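A minimal PyTorch sketch of the K-hop split-memory read in Eqs. (1)-(4); tensor shapes and initialization details are assumptions, not the released implementation:

```python
import torch
import torch.nn as nn

class SplitMemory(nn.Module):
    """Profile and history memories read separately over K hops."""
    def __init__(self, d, hops=2):
        super().__init__()
        # Learned projections playing the role of W_1 and W_2 (up to
        # transposition of the learned matrix).
        self.W1 = nn.Linear(d, d, bias=False)  # profile memory
        self.W2 = nn.Linear(d, d, bias=False)  # history memory
        self.hops = hops

    def forward(self, H_p, H_m, q):
        # H_p: (n_p, d) profile memory; H_m: (n_m, d) history memory; q: (d,)
        w_p, w_m = q, q  # w^1_p = w^1_m = q
        for _ in range(self.hops):
            a_p = torch.softmax(H_p @ self.W1(w_p), dim=0)  # Eq. (1)
            w_p = a_p @ H_p + w_p                           # Eq. (2)
            a_m = torch.softmax(H_m @ self.W2(w_m), dim=0)  # Eq. (3)
            w_m = a_m @ H_m + w_m                           # Eq. (4)
        return w_p + w_m  # O^K, summed and passed to the decoder
```

In a full model, the returned O^K would be concatenated with the decoder hidden state h_t and projected by W_3 as in Eq. (5).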
"We introduced several baselines to compare with our generative split memory network (GSMN).", "Attention-Seq2Seq: a standard seq2seq model with attention mechanisms proposed by Luong et al. (2015), without utilizing any personal information.", "Speaker Model: similar to (Li et al., 2016b), we employed an additional vector to model the respondent A.", "Generative Memory Network w/ History: following Zhang et al. (2018), we encoded the retrieved comments as individual memory representations in a memory network to incorporate the comment histories M'.", "Dialogpt w/ Split Memories: we directly combined the split memory network with the pretrained models, i.e., we applied the same architecture as GSMN on the decoder side and used pre-trained Dialogpt as the encoder.", "We evaluated the baselines and our generative split memory network using several widely-used metrics, including perplexity, BLEU, and BERTScore, to compare models' performances in generating appropriate responses.", "Perplexity is used to measure how well the outputs fit the test data (Vinyals and Le, 2015; Serban et al., 2016).", "Models with lower perplexity scores are found to demonstrate better performance in generating grammatical and fluent responses (Xie et al., 2019; Zheng et al., 2020).", "We also used BLEU (Papineni et al., 2002; Li et al., 2016a; Galley et al., 2015) with n-grams (n=1) to measure how many n-grams in generated responses overlap with those in reference responses.", "To evaluate persona consistency between user comments and generated sentences, Madotto et al. proposed the consistency score C using a sequence classification model trained on the Dialog NLI dataset (Welleck et al., 2019), a corpus based on the Persona dataset, with NLI annotation.", "For given comments p_j and a generated sentence u, the consistency score is given as follows: NLI(u, p_j) = 1 if u entails p_j; 0 if u is independent of p_j; -1 if u contradicts p_j; and C(u) = Σ_{j=1}^{m} NLI(u, p_j) (6).", "Note that models with higher consistency scores C tend to generate responses that are more persona-consistent with the user's comments.", "In our settings, m is set to 10, the maximum number of given comments.", "PC-Score: In addition to the aforementioned evaluation metrics, we also designed a metric called Profile Consistency Score (PC-Score) to measure a model's performance in generating persona-consistent responses with respect to given user profiles.", "The idea is similar to the entity score in knowledge-enhanced conversation tasks (Zhou et al., 2018), which computes the number of entities for each response and aims to measure the model's ability to select concepts from the commonsense knowledge.", "Instead of calculating the number of entities selected from a knowledge base per response, we take a micro-average over the number of entities selected from the profile of each user to capture personalization.",
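A small sketch of both metrics above. The NLI classifier interface and the exact micro-averaging of PC-Score are assumptions; only Eq. (6) is taken directly from the text:

```python
def consistency_score(nli_predict, response, comments, m=10):
    # Eq. (6): +1 for entailment, 0 for independence, -1 for contradiction,
    # summed over up to m of the user's comments. `nli_predict` is assumed
    # to be a trained NLI classifier returning one of the three labels.
    value = {"entail": 1, "neutral": 0, "contradict": -1}
    return sum(value[nli_predict(response, p)] for p in comments[:m])

def pc_score(responses, profiles):
    # PC-Score as a micro-average: total count of profile entities that
    # appear in the generated responses, divided by the response count.
    hits = sum(sum(entity in resp.lower() for entity in prof)
               for resp, prof in zip(responses, profiles))
    return hits / max(len(responses), 1)
```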
"Manual Evaluation: We conduct manual annotations to examine the consistency of the models.", "Here, consistency refers to the requirement that generated responses should be consistent for the same user when similar questions are asked.", "For example, when asked 'Where are you from?' and 'Where is your hometown?', the generated responses should be consistent at a certain granularity for the same user.", "To this end, we randomly chose ten users from our user population set and designed 20 questions.", "Half of these questions are related to basic personal information, e.g., residence and gender, and the other half are related to personal interests and attitudes, such as favorite activities.", "Detailed experiments are shown in Appendix D.", "We generated 200 responses from each model and asked annotators on Amazon Mechanical Turk to judge such consistency based on two criteria: (1) model consistency: whether or not a given generated sentence is consistent under the same group of questions; (2) personalization consistency: whether or not a given generated sentence is consistent with a user's personal information.", "Raters were asked to rate between 1 and 3 for model consistency, where 1 means 'Not consistent at all', 2 means 'Slightly consistent', and 3 means 'Very consistent'.", "Table 7: Manual evaluation results (Consistency / Personalization Consistency): Attention-Seq2Seq (no user information) 1.40 / 0.29; Speaker Model (username) 1.88 / 0.31; Generative Memory Network (history) 2.08 / 0.35; Generative Split Memory Network (profile+history) 2.29 / 0.43. Personalization Consistency can take values from -1 to +1; markers in the original table indicate a significant difference from the best result (t-test, p < 0.05 and p < 0.01).", "For personalization consistency, raters rate whether the generated sentences match either the provided user profile attributes or the user comments (comments that have the highest similarity scores with the queries are used as references).", "Note that when annotating personalization consistency, the turkers were not able to see the user ids; all the user information shown to them is publicly available on Reddit, to protect user information.", "Table 6 summarizes the different evaluation metrics on the test set.", "We found that the Speaker Model boosted the Attention-Seq2Seq baseline, with a decrease of 18.3 in perplexity and a 3.14% increase in PC-Score, similar to Li et al. (2016b).", "However, because the user set is quite large, the performance improvement of the Speaker Model is limited.", "The generative memory network that incorporates either the user profile or the comment history demonstrates improvement compared with Attention-Seq2Seq in terms of all the metrics.", "Either generative memory network outperformed the Speaker Model, suggesting that a network with additional user information is better able to generate semantically consistent and personalized responses.", "Furthermore, our proposed network (GSMN) significantly outperforms all other baselines, with a decrease of 38.7 in perplexity and a 28.4% increase in BLEU over Attention-Seq2Seq.", "By applying the split memories, Dialogpt w/ Split Memories further outperforms Dialogpt in terms of all metrics.", "This shows that current pre-trained dialog models lack the ability of personalized memorization, though they were found to be effective in memorization and generalization on a wide range of classical dialog tasks (Zhang et al., 2020).",
"This further justifies the necessity of our datasets.", "In terms of human evaluations for both consistency and personalization consistency (Table 7), our network also demonstrates consistent improvement over the baselines.", "Note that we do not apply any copy mechanisms in our models; the generation of personalized entities depends purely on representation learning.", "This shows that the split memory network outperforms the baselines in generating sentences with better personalization and consistency.", "To further examine the effectiveness of our generative split architecture, we also compared with a Dialogpt variant that utilizes both profile and comment history to generate responses.", "In this setting, we included both profile and history as individual memories and applied the attention mechanism over this memory, shown as Dialogpt + profile + history.", "We observed that our generative split memory network still achieved better performance.", "Overall, this shows that incorporating both dimensions of personalization information, i.e., user profile and user comment history, can boost models' performance for response generation, and that a split architecture is better at utilizing these two different personalization signals.", "Diverse Responses Conditioned on Users: Table 12 in Appendix C shows some example responses generated by our GSMN, together with the Seq2Seq baseline.", "The examples are randomly sampled from our test set.", "Given different user profiles, GSMN is more effective and faithful to the profile attributes of different users in generating user-specific responses.", "For example, our model can identify a user's profile attributes, like family and gender, when asked about the most reliable person, while the baseline's answer is more like a consensus answer applicable to any user.", "More examples are in Table 14 in Appendix C.", "Consistency Analysis: Example outputs from the baselines and our model are shown in Table 13 in Appendix C.",
"The Seq2Seq model was somewhat inconsistent in answering the same group of questions.", "The Speaker Model showed consistency to some degree, since the answers 'California' and 'Florida' are quite close in the word embedding space, but failed due to the lack of user information.", "Compared with these baselines, GSMN is much better at generating both consistent and personalized responses.", "For example, when asked about favorite activities, GSMN responds consistently and is also sensitive to personalized information, since 'my dog' identifies the pet attribute of the respondent.", "In this paper, we introduced a large-scale open-domain personalized dataset, PER-CHAT, and proposed a generative split memory network to utilize both user profile information and commenting histories for the task of response generation.", "Experimental results showed that our proposed model significantly outperformed several state-of-the-art baseline models, both quantitatively and qualitatively.", "Future research could build upon our work on single-turn response generation to further model personalization in multi-turn conversations.", "For the annotation, each worker on Amazon Mechanical Turk was paid $0.10 per selection task (matching the United States federal minimum wage).", "To ensure quality, we chose only master crowd-workers who had more than 5000 HITs approved and an approval rate larger than 95%.", "Considering the privacy violation problems our dataset may bring about, we followed Reddit's terms of use for user content: based on the Reddit API Terms of Use, users are granted a license to display user content through applications.", "We have taken careful procedures to address users' privacy concerns.", "First, our introduced PER-CHAT dataset will be shared for academic use only.", "We only released raw data from pushshift.io (Baumgartner et al., 2020), and open-sourced our scripts for preprocessing user attributes and our models for reproducibility.", "Note that the user attributes used in our work are identified based on users' self-reported statements via regular expression matching.", "This research study has been approved by the Institutional Review Board (IRB) at the researchers' institution.", "Generation models trained on Reddit sometimes tend to generate toxic or inappropriate responses, as pointed out by Dialogpt (Zhang et al., 2020).", "For this reason, we followed their best practices in the released version of our decoding scripts." ]
[ "abstain", "method", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "other", "other", "other", "other", "method", "method", "method", "result", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "result", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "result", "other", "result", "method", "abstain", "result", "abstain", "abstain", "method", "abstain", "other", "method", "result", "abstain", "abstain", "method", "other", "result", "abstain", "result", "result", "method", "abstain", "method", "abstain", "abstain", "result", "abstain", "objective", "objective", "abstain", "method", "abstain", "result", "abstain", "result", "method", "method", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method" ]
[ "We study few-shot learning in natural language domains.", "Compared to many existing works that apply either metric-based or optimization-based meta-learning to image domain with low inter-task variance, we consider a more realistic setting, where tasks are diverse.", "However, it imposes tremendous diffi-culties to existing state-of-the-art metric-based algorithms since a single metric is insufficient to capture complex task variations in natural language domain.", "To alleviate the problem, we propose an adaptive metric learning approach that automatically determines the best weighted combination from a set of metrics obtained from meta-training tasks for a newly seen few-shot task.", "Extensive quantitative evaluations on real-world sentiment analysis and dialog intent classification datasets demonstrate that the proposed method performs favorably against state-of-the-art few shot learning algorithms in terms of predictive accuracy.", "We make our code and data available for further study.", "1 1 Introduction Few-shot learning (FSL) (Miller et al., 2000; Li et al., 2006; Lake et al., 2015) aims to learn classifiers from few examples per class.", "Recently, deep learning has been successfully exploited for FSL via learning meta-models from a large number of meta-training tasks .", "These meta-models can be then used for rapid-adaptation for the target/meta-testing tasks that only have few training examples.", "Examples of such meta-models include: (1) metric-/similarity-based models, which learn contextual, and task-specific similarity measures (Koch, 2015; Vinyals et al., 2016; Equal contributions from the corresponding authors: [email protected], [email protected], [email protected] . 1 https://github.com/Gorov/DiverseFewShot_Amazon Snell et al., 2017); and (2) optimization-based models, which receive the input of gradients from a FSL task and predict either model parameters or parameter updates (Ravi and Larochelle, 2017; Munkhdalai and Yu, 2017; Finn et al., 2017; Wang et al., 2017).", "In the past, FSL has mainly considered image domains, where all tasks are often sampled from one huge collection of data, such as Om-niglot (Lake et al., 2011) and ImageNet (Vinyals et al., 2016), making tasks come from a single domain thus related.", "Due to such a simplified setting, almost all previous works employ a common meta-model (metric-/optimization-based) for all few-shot tasks.", "However, this setting is far from the realistic scenarios in many real-world applications of few-shot text classification.", "For example, on an enterprise AI cloud service, many clients submit various tasks to train text classification models for business-specific purposes.", "The tasks could be classifying customers' comments or opinions on different products/services, monitoring public reactions to different policy changes, or determining users' intents in different types of personal assistant services.", "As most of the clients cannot collect enough data, their submitted tasks form a few-shot setting.", "Also, these tasks are significantly diverse, thus a common metric is insufficient to handle all these tasks.", "We consider a more realistic FSL setting in this paper, where tasks are diverse.", "In such a scenario, the optimal meta-model may vary across tasks.", "Our solution is based on the metric-learning approach (Snell et al., 2017) and the key idea is to maintain multiple metrics for FSL.", "The meta-learner selects and combines multiple metrics for learning the target task using task clustering on 
"The meta-learner selects and combines multiple metrics for learning the target task using task clustering on the meta-training tasks.", "During meta-training, we propose to first partition the meta-training tasks into clusters, making the tasks in each cluster likely to be related.", "Then, within each cluster, we train a deep embedding function as the metric.", "This ensures the common metric is only shared across tasks within the same cluster.", "Further, during meta-testing, each target FSL task is assigned a task-specific metric, which is a linear combination of the metrics defined by the different clusters.", "In this way, the diverse few-shot tasks can derive different metrics from the previous learning experience.", "The key to the proposed FSL framework is the task clustering algorithm.", "Previous works (Kumar and Daume III, 2012; Kang et al., 2011; Crammer and Mansour, 2012; Barzilai and Crammer, 2015) mainly focused on convex objectives, and assumed the number of classes is the same across different tasks (e.g., binary classification is often considered).", "To make task clustering (i) compatible with deep networks and (ii) able to handle tasks with various numbers of labels, we propose a matrix-completion based task clustering algorithm.", "The algorithm utilizes task similarity measured by cross-task transfer performance, denoted by the matrix S.", "The (i, j)-entry of S is the estimated accuracy obtained by adapting the learned representations on the i-th (source) task to the j-th (target) task.", "We rely on matrix completion to deal with missing and unreliable entries in S, and finally apply spectral clustering to generate the task partitions.", "To the best of our knowledge, our work is the first to address the diverse few-shot learning problem and to report results on real-world few-shot text classification problems.", "The experimental results show that the proposed algorithm provides significant gains on few-shot sentiment classification and dialog intent classification tasks.", "It provides positive feedback on the idea of using multiple meta-models (metrics) to handle diverse FSL tasks, as well as on the proposed task clustering algorithm for automatically detecting related tasks.", "Few-Shot Learning Since we focus on diverse metric-based FSL, the problem can be formulated in two stages: (1) meta-training, where a set of metrics M = {Λ_1, ..., Λ_K} is learned on the meta-training tasks T.", "Each Λ_i maps two inputs (x_1, x_2) to a scalar similarity score.", "Here T = {T_1, T_2, ..., T_N} is a collection of N tasks.", "Here K is a pre-defined number (usually K ≪ N).", "Each task T_i consists of training, validation, and testing sets, denoted as (D_i^train, D_i^valid, D_i^test), respectively.", "Note that the definition of T is a generalized version of D^(meta-train) in (Ravi and Larochelle, 2017), since each task T_i can be either few-shot (where D_i^valid is empty) or regular (for example, the methods in (Triantafillou et al., 2017) can be viewed as training meta-models from any sampled batches from one single meta-training dataset).", "(2) meta-testing: the trained metrics in M are applied to meta-testing tasks denoted as T' = {T'_1, ..., T'_{N'}}, where each T'_i is a few-shot learning task consisting of both training and testing data, (D'_i^train, D'_i^test).", "D'_i^train is a small labeled set for generating the prediction model M'_i for each T'_i.", "Specifically, the M'_i are kNN-based predictors built upon the metrics in M.", "We will detail the construction of M'_i in Section 3, Eq. (6).", "It is worth mentioning that the definition of T' is the same as D^(meta-test) in (Ravi and Larochelle, 2017).",
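A schematic sketch of a kNN-based predictor M'_i built on a linear combination of cluster metrics, as described above. How the combination weights are chosen is detailed in Section 3 and not shown here; k=1 is used for brevity:

```python
def combined_metric(metrics, weights, x1, x2):
    # Task-specific metric for meta-testing: a linear combination of the
    # K cluster metrics learned during meta-training.
    return sum(w * m(x1, x2) for w, m in zip(weights, metrics))

def knn_predict(metrics, weights, x, labeled_examples):
    # 1-nearest-neighbor prediction under the combined metric;
    # `labeled_examples` is the small D'_train set of (input, label) pairs.
    best_x, best_y = max(
        labeled_examples,
        key=lambda ex: combined_metric(metrics, weights, x, ex[0]))
    return best_y
```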
accuracy on all the testing sets D′_i^test.", "Our definitions can be easily generalized to other meta-learning approaches (Ravi and Larochelle, 2017; Finn et al., 2017; Mishra et al., 2017).", "The motivation for employing multiple metrics is that, when the tasks are diverse, one metric model may not be sufficient.", "Note that previous metric-based FSL methods can be viewed as a special case of our definition in which M contains only a single metric Λ, as shown in the two base-model examples below.", "Base Model: Matching Networks In this paper we use the metric-based model Matching Network (MNet) (Vinyals et al., 2016) as the base metric model.", "The model (Figure 1b) consists of a neural network as the embedding function (encoder) and an augmented memory.", "The encoder, f(·), maps an input x to a d-dimensional vector.", "The learned metric is thus the similarity between the encoded vectors, Λ(x_1, x_2) = f(x_1)^T f(x_2), i.e., the metric is modeled by the encoder f.", "The augmented memory stores a support set S = {(x_i, y_i)}_{i=1}^{|S|}, where x_i is a supporting instance and y_i is its corresponding label in one-hot format.", "The MNet explicitly defines a classifier M conditioned on the supporting set S.", "For any new data x, M predicts its label via ŷ = Σ_{i=1}^{|S|} α(x, x_i) y_i, where we define α(·,·) to be a softmax distribution given Λ(x, x_i), where x_i is a supporting instance, i.e., α(x, x_i; θ) = exp(f(x)^T f(x_i)) / Σ_{j=1}^{|S|} exp(f(x)^T f(x_j)), and θ are the parameters of the encoder f.", "Thus, ŷ is a valid distribution over the supporting set's labels {y_i}_{i=1}^{|S|}.", "To adapt the MNet to text classification, we choose the encoder f to be a convolutional neural network (CNN), following (Kim, 2014; Johnson and Zhang, 2016).", "Figure 1 shows the MNet with the CNN architecture.", "Following (Collobert et al., 2011; Kim, 2014), the model consists of a convolution layer and a max-pooling operation over the entire sentence.", "To train the MNets, we first sample a training dataset D for a task T from all tasks T, with notation simplified as D ∼ T.", "For each class in the sampled dataset D, we sample k random instances in that class to construct a support set S, and sample a batch of training instances B as training examples, i.e., B, S ∼ D.", "The training objective is to minimize the prediction error of the training samples given the supporting set (with respect to the encoder parameters θ), i.e., to minimize -E_{D∼T}[ E_{B,S∼D}[ Σ_{(x,y)∈B} log P(y | x, S; θ) ] ].", "Base Model: Prototypical Networks Prototypical Network (ProtoNet) (Snell et al., 2017) is a variation of the Matching Network, which also depends on metric learning but builds the classifier over class-level support sets: ŷ = Σ_{i=1}^{L} α(x, S_i; θ) y_i, where L is the number of classes and S_i = {x | (x, y) ∈ S ∧ y = y_i} is the support set of class y_i.", "Here α(x, S_i; θ) = exp(f(x)^T Σ_{x′∈S_i} f(x′)) / Σ_{j=1}^{L} exp(f(x)^T Σ_{x′∈S_j} f(x′)).",
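To make the MNet prediction rule above concrete, here is a minimal PyTorch sketch of the metric and the support-set softmax; the encoder f is passed in as a stand-in for the CNN encoder, and all names are our illustrative choices rather than the authors' code.

import torch
import torch.nn.functional as F

def mnet_predict(f, x, support_x, support_y):
    # f: encoder mapping a batch of inputs to d-dimensional vectors.
    # support_y: one-hot labels, shape (|S|, n_classes).
    # Implements y_hat = sum_i alpha(x, x_i) y_i with
    # alpha(x, x_i) = softmax_i( f(x)^T f(x_i) ).
    q = f(x)                      # (1, d) query embedding
    s = f(support_x)              # (|S|, d) support embeddings
    logits = q @ s.t()            # (1, |S|) dot-product similarities
    alpha = F.softmax(logits, dim=-1)
    return alpha @ support_y      # (1, n_classes): a valid distribution

The ProtoNet variant differs only in that the dot products are taken against the summed support embeddings of each class rather than against individual instances.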
We propose a task-clustering framework to address the diverse few-shot learning problem stated in Section 2. The full FSL algorithm is summarized in Algorithm 1. Figure 2 gives an overview of our idea.", "The initial step of the algorithm is a novel task clustering algorithm based on matrix completion, which is described in Section 3.1.", "The few-shot learning method based on task clustering is then introduced in Section 3.2.", "Our task clustering algorithm is shown in Algorithm 2. The algorithm first evaluates the transfer performance obtained by applying a single-task model M_i to another task T_j (Section 3.1.1), which results in a (partially observed) cross-task transfer-performance matrix S.", "The matrix S is then cleaned and completed, giving a symmetric task similarity matrix Y for spectral clustering (Ng et al., 2002).", "Using the single-task models, we can compute performance scores S_ij by adapting each M_i to each task T_j (j ≠ i).", "This forms an n × n pairwise classification performance matrix S, called the transfer-performance matrix.", "Note that S is asymmetric, since usually S_ij ≠ S_ji.", "Ideally, the transfer performance could be estimated by training an MNet on task i and directly evaluating it on task j.", "However, the limited training data usually lead to generally low transfer performance for single-task MNets.", "As a result, we adopt the following approach to estimate S: we train a CNN classifier (Figure 1(a)) on task i, then take only the encoder M_i^enc from M_i and freeze it to train a classifier on task j.", "This gives us a new model for task j, and we test this model on D_j^valid to obtain the accuracy, which we use as the transfer performance S_ij.", "The score reflects how well the representations learned on task i can be adapted to task j, thus indicating the similarity between the tasks.", "Remark: Out-of-Vocabulary Problem In text classification tasks, transferring an encoder with fine-tuned word embeddings from one task to another is difficult, as there can be a significant difference between the two vocabularies.", "Hence, while learning the single-task CNN classifiers, we always keep the word embeddings fixed.", "Directly using the transfer performance for task clustering may suffer from both efficiency and accuracy issues.", "First, evaluating all entries in the matrix S involves conducting the source-target transfer learning O(n²) times, where n is the number of meta-training tasks.", "For a large number of diverse tasks, where n can exceed 1,000, evaluating the full matrix is prohibitively expensive (over 1M entries to evaluate).",
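The transfer-performance estimation loop just described can be sketched as follows; the three callables are placeholders for the single-task CNN training, frozen-encoder head training, and evaluation routines, which the paper does not specify at this level of detail, and tasks are assumed to expose a validation split as task.valid.

import numpy as np

def transfer_matrix(tasks, train_cnn, train_head, evaluate):
    # train_cnn(task)            -> (encoder, classifier) trained on task i
    # train_head(encoder, task)  -> classifier for task j, encoder frozen
    # evaluate(encoder, head, split) -> accuracy on that split
    n = len(tasks)
    S = np.full((n, n), np.nan)   # NaN marks entries never evaluated
    for i in range(n):
        encoder, _ = train_cnn(tasks[i])
        for j in range(n):
            if i == j:
                continue
            head = train_head(encoder, tasks[j])   # encoder stays frozen
            S[i, j] = evaluate(encoder, head, tasks[j].valid)
    return S                      # asymmetric in general: S[i,j] != S[j,i]

For large n, the same scores can be computed for a random subsample of task pairs only, which is exactly what motivates the matrix-completion step introduced next.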
"Second, the estimated cross-task performance (i.e., some S_ij or S_ji scores) is often unreliable due to small data size or label noise.", "When the number of uncertain values is large, they can collectively mislead the clustering algorithm into producing an incorrect task partition.", "To address the aforementioned challenges, we propose a novel task clustering algorithm based on the theory of matrix completion (Candès and Tao, 2010).", "Specifically, we deal with the huge number of entries by randomly sampling task pairs and evaluating only their S_ij and S_ji scores.", "In addition, we deal with the unreliable entries and the asymmetry issue by keeping only task pairs (i, j) with consistent S_ij and S_ji scores, as will be introduced in Eq. (4).", "Below, we describe our method in detail.", "Score Filtering First, we use only reliable task pairs to generate a partially observed similarity matrix Y.", "Specifically, if S_ij and S_ji are high enough, then it is likely that tasks {i, j} belong to the same cluster and share significant information.", "Conversely, if S_ij and S_ji are low enough, then they tend to belong to different clusters.", "To this end, we need a mechanism for determining whether a performance score is high or low enough.", "Since different tasks may vary in difficulty, a fixed threshold is not suitable.", "Hence, we define a dynamic threshold using the mean and standard deviation of the target-task performance, i.e., μ_j = mean(S_:j) and σ_j = std(S_:j), where S_:j is the j-th column of S.", "We then introduce two positive parameters p_1 and p_2, and define high and low performance as S_ij greater than μ_j + p_1 σ_j or lower than μ_j - p_2 σ_j, respectively.", "When both S_ij and S_ji are high enough, we set the pairwise similarity to 1; when both are low enough, we set it to 0.", "Other task pairs are treated as uncertain task pairs, are marked as unobserved, and do not influence our clustering method.", "This leads to a partially observed symmetric matrix Y, i.e., Y_ij = Y_ji = 1 if S_ij > μ_j + p_1 σ_j and S_ji > μ_i + p_1 σ_i; Y_ij = Y_ji = 0 if S_ij < μ_j - p_2 σ_j and S_ji < μ_i - p_2 σ_i; and Y_ij = Y_ji is unobserved otherwise. (4)", "Matrix Completion Given the partially observed matrix Y, we then reconstruct the full similarity matrix X ∈ R^{n×n}.", "We first note that the similarity matrix X should be low-rank (proof deferred to the appendix).", "Additionally, since the observed entries of Y are generated based on sufficiently high and low performance, it is safe to assume that most observed entries are correct and only a few may be incorrect.", "Therefore, we introduce a sparse matrix E to capture the observed incorrect entries in Y.", "Combining the two observations, Y can be decomposed into the sum of two matrices X and E, where X is a low-rank matrix storing similarities between task pairs, and E is a sparse matrix that captures the errors in Y.", "The matrix completion problem can be cast as the following convex optimization problem: min_{X,E} ‖X‖_* + λ‖E‖_1 (5) s.t. P_Ω(X + E) = P_Ω(Y), where ‖·‖_* denotes the matrix nuclear norm, the convex surrogate of the rank function.", "Ω is the set of observed entries in Y, and P_Ω: R^{n×n} → R^{n×n} is a matrix projection operator defined as [P_Ω(A)]_ij = A_ij if (i, j) ∈ Ω, and 0 otherwise.",
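A compact sketch of the score-filtering rule of Eq. (4) and the convex program of Eq. (5); cvxpy is used as a stand-in solver here, which is our tooling choice rather than the paper's.

import cvxpy as cp
import numpy as np

def filter_scores(S, p1=0.5, p2=0.5):
    # Eq. (4): keep only task pairs whose S_ij and S_ji are consistently
    # high (-> 1.0) or consistently low (-> 0.0); NaN marks "unobserved".
    mu, sigma = np.nanmean(S, axis=0), np.nanstd(S, axis=0)
    n = S.shape[0]
    Y = np.full((n, n), np.nan)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            hi = S[i, j] > mu[j] + p1 * sigma[j] and S[j, i] > mu[i] + p1 * sigma[i]
            lo = S[i, j] < mu[j] - p2 * sigma[j] and S[j, i] < mu[i] - p2 * sigma[i]
            if hi:
                Y[i, j] = Y[j, i] = 1.0
            elif lo:
                Y[i, j] = Y[j, i] = 0.0
    return Y

def complete(Y, lam=1.0):
    # Eq. (5): min ||X||_* + lam * ||E||_1  s.t.  P_Omega(X + E) = P_Omega(Y).
    n = Y.shape[0]
    M = (~np.isnan(Y)).astype(float)      # indicator of the observed set Omega
    X, E = cp.Variable((n, n)), cp.Variable((n, n))
    cp.Problem(cp.Minimize(cp.normNuc(X) + lam * cp.norm1(E)),
               [cp.multiply(M, X + E) == M * np.nan_to_num(Y)]).solve()
    return X.value

The completed X can then be symmetrized, clipped to [0, 1], and fed to an off-the-shelf spectral clustering implementation (e.g., scikit-learn's SpectralClustering with a precomputed affinity) to produce the task partitions described next.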
"Finally, we apply spectral clustering on the matrix X to get the task clusters.", "Remark: Sample Efficiency In Appendix A, we present Theorem 7.1 along with its proof, which implies that, under mild conditions, problem (5) can perfectly recover the underlying similarity matrix X if the number of observed correct entries is at least O(n log² n).", "This theoretical guarantee implies that, for a large number n of training tasks, only a tiny fraction of all task pairs is needed to reliably infer similarities over all task pairs.", "For each cluster C_k, we train a multi-task MNet model (Figure 1(b)) with all tasks in that cluster to encourage parameter sharing.", "The resulting encoder, denoted f_k, is called the cluster-encoder of cluster C_k.", "The k-th metric of the cluster is thus Λ_k(x_1, x_2) = f_k(x_1)^T f_k(x_2).", "To build a predictor M with access to only a limited number of training samples, we compute the prediction probability by linearly combining the predictions from the learned cluster-encoders: P(y | x) = Σ_k α_k P(y | x; f_k), (6)", "where f_k is the learned (and frozen) encoder of the k-th cluster, and {α_k}_{k=1}^K are adaptable parameters trained with the few-shot training examples.", "The predictor P(y | x; f_k) from each cluster is P(y = y_l | x; f_k) = exp{f_k(x_l)^T f_k(x)} / Σ_i exp{f_k(x_i)^T f_k(x)}, (7) where x_l is the training sample corresponding to label y_l.",
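The few-shot predictor of Eqs. (6)-(7) can be sketched as follows. The frozen cluster-encoders are passed in as callables, only the mixture weights alpha are fit on the few-shot training set, and normalizing alpha with a softmax (so the result stays a distribution) is our simplifying choice; the paper specifies only a linear combination.

import torch
import torch.nn.functional as F

def cluster_predict(f_k, x, support_x, support_y):
    # Eq. (7): softmax over dot products with the k-th cluster-encoder.
    sims = f_k(x) @ f_k(support_x).t()          # (1, |S|)
    return F.softmax(sims, dim=-1) @ support_y  # (1, n_classes)

def combined_predict(encoders, alpha, x, support_x, support_y):
    # Eq. (6): P(y|x) = sum_k alpha_k * P(y|x; f_k); alpha is a learnable
    # (K,) weight vector trained on the few-shot examples while the
    # cluster-encoders stay frozen.
    preds = torch.stack([cluster_predict(f, x, support_x, support_y)
                         for f in encoders])    # (K, 1, n_classes)
    w = F.softmax(alpha, dim=0)                 # our choice: keep the mixture valid
    return (w.view(-1, 1, 1) * preds).sum(0)    # (1, n_classes)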
"to the 3 tasks.", "For evaluation, we select 12 (4 3) tasks from 4 domains ( Books, DVD, Electronics, Kitchen ) as the meta-testing (target) tasks out of all 23 domains.", "For the target tasks, we create 5-shot learning problems.", "The second dataset is from an online service which trains and serves intent classification models to various clients.", "The dataset comprises recorded conversations between human users and dialog systems in various domains, ranging from personal assistant to complex service-ordering or customer-service request scenarios.", "During classification, intent-labels 5 are assigned to user utterances (sentences).", "We use a total of 175 tasks from different clients, and randomly sample 10 tasks from them as our target tasks.", "For each meta-training task, we randomly sample 64% data into a training set, 16% into a validation set, and use the rest as the test set.", "The number of labels for these tasks varies a lot (from 2 to 100, see Appendix D for details), making regular k -shot settings not essentially limited-resource problems (e.g., 5-shot on 100 classes will give a good amount of 500 training instances).", "Hence, to adapt this to a FSL scenario, for target tasks we keep one example for each label (one-shot), plus 20 randomly picked labeled examples to create the training data.", "We believe this is a fairly realistic estimate of labeled examples one client could provide easily.", "Remark: Evaluation of the Robustness of Algorithm 2 Our matrix-completion method could handle a large number of tasks via task-pair sampling.", "However, the sizes of tasks in the above two few-shot learning datasets are not too huge, so evaluation of the whole task-similarity matrix is still tractable.", "In our experiments, the incomplete matrices mainly come from the score-filtering step (see Eq. 4).", "Thus there is limited randomness involved in the generation of task clusters.", "To strengthen the conclusion, we evaluate our algorithm on an additional dataset with a much larger number of tasks.", "The results are reported in the multi-task learning setting instead of the few-shot learning setting focused in this paper.", "Therefore we put the results to a non-archive version of 5 In conversational dialog systems, intent-labels are used to guide the dialog-flow.", "this paper 6 for further reference.", "Baselines We compare our method to the following baselines: (1) Single-task CNN : training a CNN model for each task individually; (2) Single-task FastText : training one FastText model (Joulin et al., 2016) with fixed embeddings for each individual task; (3) Fine-tuned the holistic MTL-CNN : a standard transfer-learning approach, which trains one MTL-CNN model on all the training tasks offline, then fine-tunes the classifier layer (i.e. 
M^(cls) in Figure 1(a)) on each target task; (4) Matching Network: a metric-learning-based few-shot learning model trained on all training tasks; (5) Prototypical Network: a variation of the matching network with a different prediction function, as in Eq. 3; (6) Convex combination of all single-task models: training one CNN classifier on each meta-training task individually, taking its encoder, and then, for each target task, training a linear combination of all the above single-task encoders with Eq. (6).", "This baseline can be viewed as a variation of our method without task clustering.", "We initialize all models with pre-trained 100-dimensional GloVe embeddings (trained on the 6B-token corpus) (Pennington et al., 2014).", "Hyper-Parameter Tuning In all experiments, we set both the p_1 and p_2 parameters in Eq. (4) to 0.5.", "This strikes a balance between obtaining enough observed entries in Y and ensuring that most of the retained similarities are consistent with the cluster memberships.", "The window and hidden-layer sizes of the CNN, as well as the initialization of the embeddings (random or pre-trained), are tuned during the cluster-encoder training phase, using the validation sets of the meta-training tasks.", "The resulting CNN has a window size of 5 and 200 hidden units.", "The single-metric FSL baselines have 400 hidden units in their CNN encoders.", "All cluster-encoders use randomly initialized word embeddings for sentiment classification and GloVe embeddings as initialization for intent classification, likely because the training sets of the intent tasks are usually small.", "Note that a model with a classification layer can also be trained as the cluster-encoder for each task cluster.", "We therefore compared the CNN classifier, the matching network, and the prototypical network on the Amazon review data, and found that the CNN classifier performs about as well as the prototypical network.", "Since some of the Amazon review data is quite large, which makes the computation of supporting sets more difficult, we use binary CNN classifiers as cluster-encoders in all the sentiment classification experiments.", "Selecting the learning rate and the number of training epochs for the FSL setting, i.e., for fitting the α's in Eq. (6), is more difficult, since there is no validation data in few-shot problems.", "Thus we pre-select a subset of the meta-training tasks as meta-validation tasks and tune these two hyper-parameters on the meta-validation tasks.", "Table 1 shows the main results on (i) the 12 few-shot product sentiment classification tasks, leveraging the knowledge learned from the 57 previously observed tasks from other product domains; and (ii) the 10 few-shot dialog intent classification tasks, leveraging the 165 previously observed tasks from other clients' data.", "Due to the limited training resources, all the supervised-learning baselines perform poorly.", "The two state-of-the-art metric-based FSL approaches, matching network (4) and prototypical network (5), do not outperform the other baselines, since a single metric is not sufficient for all the diverse tasks.", "On intent classification, where the tasks are even more diverse, all the single-metric or single-model methods (3-5) perform worse than the single-task CNN baseline (1).", "The convex combination of all the single-task models is the best-performing baseline overall.", "However, on intent classification it performs only on par with the single-task CNN (1), which does not use any meta-learning or transfer-learning techniques, mainly for two reasons:", 
"(i) with the growth of the number of meta-training tasks, the model parameters grow linearly, making the number of parameters (165 in this case) in", "Eq.(6) too large for the few-shot tasks to fit;", "(ii) the meta-training tasks in intent classification usually contain less training data, making the single-task encoders not generalize well.", "It outperforms the baselines in previous work (1-5) by a large margin of more than 6% on the sentiment classification tasks, and more than 3% on the intent classification tasks.", "It is also significantly better than our proposed baseline (6), showing the advantages of the usage of task clustering.", "Adaptive ROBUSTTC-FSL Although the ROBUSTTC-FSL improves over baselines on intent classification, the margin is smaller compared to that on sentiment classification, because the intent classification tasks are more diverse in nature.", "This is also demonstrated by the training accuracy on the target tasks, where several tasks fail to find any cluster that could provide a metric that suits their training examples.", "To deal with this problem, we propose an improved algorithm to automatically discover whether a target task belongs to none of the task-clusters.", "If the task doesn't belong to any of the clusters, it cannot ben-efit from any previous knowledge thus falls back to single-task CNN.", "The target task is treated as out-of-clusters when none of the clusters could achieve higher than 20% accuracy (selected on meta-validation tasks) on its training data.", "We call this method Adaptive ROBUSTTC-FSL , which gives more than 5% performance boost over the best ROBUSTTC-FSL result on intent classification.", "Note that the adaptive approach makes no difference on the sentiment tasks, because they are more closely related so re-using cluster-encoders always achieves better results compared to single-task CNNs.", "Effect of the number of clusters Figure shows the effect of cluster numbers on the two tasks.", "ROBUSTTC achieves best performance Figure 3: Effect of clusters.", "with 5 clusters on sentiment analysis (SA) and 20 clusters on intent classification (Intent).", "All clustering results significantly outperform the single-metric baselines (#cluster=1 in the figure).", "Effect of the clustering algorithms Compared to previous task clustering algorithms, our ROBUSTTC is the only one that can cluster tasks with varying numbers of class labels (e.g. in intent classification tasks).", "Moreover, we show that even in the setting of all binary classifications tasks (e.g. the sentiment-analysis tasks) that previous task clustering research work on, our ROBUSTTC is still slightly better for the diverse FSL problems.", "Figure 3 compares with a state-of-the-art logistic regression based task clustering method ( ASAP-MT-LR ) (Barzilai and Crammer, 2015).", "Our ROBUSTTC clusters give slightly better FSL performance (e.g. 83.12 vs. 82.65 when #cluster=5).", "Visualization of Task Clusters The top rows of Table 2 shows the ten clusters used to generate the sentiment classification results in Figure 3.", "From the results, we can see that tasks with same thresholds are usually grouped together; and tasks in similar domains also tend to appear in the same clusters, even the thresholds are slightly different (e.g. 
"Effect of the Number of Clusters Figure 3 shows the effect of the number of clusters on the two tasks.", "Figure 3: Effect of clusters.", "ROBUSTTC achieves the best performance with 5 clusters on sentiment analysis (SA) and 20 clusters on intent classification (Intent).", "All clustering results significantly outperform the single-metric baselines (#cluster=1 in the figure).", "Effect of the Clustering Algorithms Compared to previous task clustering algorithms, our ROBUSTTC is the only one that can cluster tasks with varying numbers of class labels (e.g., in the intent classification tasks).", "Moreover, we show that even in the setting of all-binary-classification tasks (e.g., the sentiment-analysis tasks) that previous task clustering research works on, our ROBUSTTC is still slightly better for the diverse FSL problems.", "Figure 3 also compares our method with a state-of-the-art logistic-regression-based task clustering method (ASAP-MT-LR) (Barzilai and Crammer, 2015).", "Our ROBUSTTC clusters give slightly better FSL performance (e.g., 83.12 vs. 82.65 when #cluster=5).", "Visualization of Task Clusters The top rows of Table 2 show the ten clusters used to generate the sentiment classification results in Figure 3.", "From the results, we can see that tasks with the same thresholds are usually grouped together, and tasks in similar domains also tend to appear in the same clusters, even when the thresholds are slightly different (e.g., t2 vs. t4 and t4 vs. t5).", "The bottom of the table shows the weights α in Eq. (6) for the target tasks with the largest improvements.", "It confirms that our ROBUSTTC-FSL algorithm accurately adapts multiple metrics for the target tasks.", "Few-Shot Learning FSL (Li et al., 2006; Miller et al., 2000) aims to learn classifiers for new classes with only a few training examples per class.", "Bayesian Program Induction (Lake et al., 2015) represents concepts as simple programs that best explain observed examples under a Bayesian criterion.", "Siamese neural networks rank the similarity between inputs (Koch, 2015).", "Matching Networks (Vinyals et al., 2016) map a small labeled support set and an unlabeled example to its label, obviating the need for fine-tuning to adapt to new class types.", "These approaches essentially learn one metric for all tasks, which is sub-optimal when the tasks are diverse.", "An LSTM-based meta-learner (Ravi and Larochelle, 2017) learns the exact optimization algorithm used to train another learner neural-network classifier for the few-shot setting.", "Previous FSL research usually adopts the k-shot, N-way setting, where all the few-shot tasks have the same number N of class labels, and each label has k training instances.", "Moreover, these few-shot tasks are usually constructed by sampling from one huge dataset, so all the tasks are guaranteed to be related to each other.", "However, in real-world applications, the few-shot learning tasks can be diverse: different tasks have varying numbers of class labels, and they are not guaranteed to be related to each other.", "As a result, a single meta-model or metric model is usually not sufficient to handle all the few-shot tasks.", "Task Clustering Previous task clustering methods measure task relationships in terms of similarities among single-task model parameters (Kumar and Daume III, 2012; Kang et al., 2011), or jointly assign task clusters and train model parameters for each cluster to minimize the overall training loss (Crammer and Mansour, 2012; Barzilai and Crammer, 2015; Murugesan et al., 2017).", "These methods usually work with convex models but do not fit deep networks, mainly because (i) the parameters of deep networks are very high-dimensional and their similarities are not necessarily related to functional similarities; and (ii) deep networks have flexible representational power, so they may overfit to an arbitrary cluster assignment if we consider the training loss alone.", "Moreover, these methods require identical class label sets across different tasks, which does not hold in most realistic settings.", "We propose a few-shot learning approach for diverse tasks based on task clustering.", "The proposed method can use multiple metrics and performs significantly better than previous single-metric-based methods when the few-shot tasks come from diverse domains.", "Future work includes generalizing our method to non-NLP problems, as well as applying the task-clustering idea to other few-shot learning frameworks (Ravi and Larochelle, 2017; Finn et al., 2017; Mishra et al., 2017; Cheng et al., 2017)." ]
[ "method", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method" ]
[ "In many documents, such as semi-structured webpages, textual semantics are augmented with additional information conveyed using visual elements including layout, font size, and color.", "Prior work on information extraction from semi-structured websites has required learning an extraction model specific to a given template via either manually labeled or distantly supervised data from that template.", "In this work, we propose a solution for zero-shot open-domain relation extraction from webpages with a previously unseen template, including from websites with little overlap with existing sources of knowledge for distant supervision and websites in entirely new subject verticals.", "Our model uses a graph neural network-based approach to build a rich representation of text fields on a webpage and the relationships between them, enabling generalization to new templates.", "Experiments show this approach provides a 31% F1 gain over a baseline for zero-shot extraction in a new subject vertical.", "Semi-structured websites offer rich sources of high-quality data across many areas of knowledge (Dong et al., 2014).", "These websites present information via text that is accompanied by rich visual and layout features that can be generalized beyond a single website.", "However, most prior work on information extraction (IE) from websites has largely ignored these features, relying instead only on HTML features specific to an individual website (Ferrara et al., 2014).", "This requires training data for every website targeted for extraction, an approach that cannot scale up if training data must be manually created.", "To circumvent manual data annotation, previous work used a distant supervision process requiring a knowledge base aligned to the website targeted for extraction (Gentile et al., 2015; Lockard et al., 2018), including for OpenIE extraction (Banko et al., 2007; Bronzi et al., 2013; Lockard et al., 2019).", "Figure 1: Our zero-shot open-domain information extraction process learns generalizable graph-based representations of how relations are visually presented on semi-structured websites, allowing for training on one vertical (such as University sites) and extraction from another (such as Movie sites).", "These methods, however, can only learn a website-specific model based on seed knowledge for the site; they cannot generalize to the majority of websites, which carry knowledge from new verticals, from long-tail specialists, and in different languages.", "In this paper, we introduce the task of zero-shot relation extraction from semi-structured websites, in which a learned model is applied to extract from a website that was not represented in its training data (Figure 1).", "Moreover, we introduce ZEROSHOTCERES, a graph neural network model that encodes semantic textual and visual patterns common across different training websites and can generalize to extract information from documents with never-before-seen templates and topics.", "Unlike unstructured text, which can be modeled as a sequence, or images, which can be modeled as a two-dimensional grid of pixels, it is not obvious how to operate over the many shapes and sizes of text fields on a semi-structured webpage.", "We illustrate our intuition using the webpage snippets in Figure 1: despite their differences, each site uses alignment of relation and object strings, either vertically or horizontally, to help indicate relationships; in addition, relation strings are often more prominent than their objects, either in size or 
boldness.", "Such features are semantically meaningful to readers and often consistent from site to site; thus, encoding them into the representation of webpages will allow us to generalize to unseen sites.", "Our model, ZEROSHOTCERES, encodes these diverse feature types in a graph representation in which each text field becomes a node, connected by edges indicating layout relationships on the page.", "This abstracts away the details of the page while maintaining the core visual structure presented to the reader.", "A graph neural network is then applied to produce a new representation of each text field, informed by the surrounding page context.", "This representation is then used to extract entities and relationships from the document.", "This allows us not only to extract in the closed-domain setting, but also to conduct OpenIE on websites about entirely new subject verticals not seen during training.", "Our contributions are threefold:", "(a) We introduce a graph neural network model for webpage representation that integrates multi-modal information including visual, layout, and textual features, enabling generalization for IE from never-before-seen websites.", "(b) We propose the first approach to enable Open Information Extraction from semi-structured websites without prior knowledge or training data in the subject vertical.", "(c) Our method works in both OpenIE and ClosedIE settings.", "We conduct evaluations showing the effectiveness of the technique and exploring the challenges of zero-shot semi-structured IE, achieving a 31% improvement in F1 compared to an OpenIE baseline.", "The graph model gives a 26% F1 boost when extracting according to a defined schema (ClosedIE).", "DOM-based ClosedIE: The conventional approach to extraction from semi-structured websites is wrapper induction (Kushmerick et al., 1997), in which training data for documents from a given template is used to learn a rule-based extractor based on DOM (i.e., HTML) features to apply to other documents of the same template, extracting relations according to a pre-defined ontology (ClosedIE).", "Since this approach requires training data for each template targeted for extraction, recent work has focused on reducing the manual work needed per site.", "Fonduer (Wu et al., 2018) provides an interface for easily creating training data, Vertex (Gulhane et al., 2011) uses semi-supervision to minimize the number of labels needed, LODIE (Gentile et al., 2015) and Ceres (Lockard et al., 2018) automatically generate training data based on distant supervision, and DIADEM (Furche et al., 2014) identifies matching rules for specific entity types.", "DOM-based OpenIE: WEIR (Bronzi et al., 2013) and OpenCeres (Lockard et al., 2019) offer OpenIE approaches to DOM extraction.", "The latter method uses visual features in a semi-supervised learning setting to identify candidate pairs that are visually similar to known (relation, object) pairs; however, the ultimate extraction model learned is still site-specific and based on DOM features rather than the more generalizable visual or textual features.", "Pasupat and Liang (2014) present a zero-shot method for extraction from semi-structured webpages, but limit their work to the extraction of entities rather than relationships and do not consider visual elements of the page.", "Multi-modal extraction: The incorporation of visual information into IE was proposed by Aumann et al. 
(2006), who attempted to learn a fitness function to calculate the visual similarity of a document to one in its training set in order to extract elements like headlines and authors.", "Other recent approaches that attempt to address the layout structure of documents are CharGrid (Katti et al., 2018), which represents a document as a two-dimensional grid of characters; RiSER, an extraction technique targeted at templated emails (Kocayusufoglu et al., 2019); and the approach by Liu et al. (2018), which presents an RNN method for learning DOM-tree rules.", "However, none of these address the OpenIE setting, which requires understanding the relationship between different text fields on the page.", "The approaches most similar to ours are GraphIE (Qian et al., 2019) and the approach by Liu et al. (2019).", "Both approaches involve constructing a graph of text fields with edges representing horizontal and vertical adjacency, followed by an application of a GCN.", "Figure 2: A depiction of the web page representation module (left) and relation classifiers (right).", "However, neither approach makes use of visual features beyond text-field adjacency or of DOM features, and both consider extraction only from a single text field rather than OpenIE.", "In addition, they show only very limited results on the ability of their models to generalize beyond the templates present in the training set.", "3 Problem and Approach Overview 3.1 Zero-shot relation extraction from semi-structured websites We address the problem of extracting entities and the relationships between them as expressed by never-before-seen semi-structured websites.", "A semi-structured website typically belongs to a subject vertical V, where V is a general field of knowledge such as movies, finance, or sports.", "A semi-structured website consists of a set of detail pages sharing a similar template, each of which contains a set of facts about a page topic entity e_topic.", "The HTML document w defines a set of text fields T, which the web browser renders as a webpage according to the instructions defined in the HTML and any referenced auxiliary files such as CSS or JavaScript.", "The text fields have both textual and visual features, described in Section 4.2.1.", "Our goal is to extract (subject, relation, object) knowledge triples, where the subject is e_topic, the object is a text field t ∈ T containing the name of an entity (or an atomic attribute value), and the relation indicates the relationship between the two entities.", "For this work, we assume the page topic entity has already been identified (such as by the method proposed by Lockard et al. 
(2018) or by using the HTML title tag) and thus limit ourselves to identifying the objects and corresponding relations.", "We consider the following two settings: Relation Extraction (ClosedIE): Let R define a closed set of relation types, including a special type indicating No Relation.", "Relation Extraction is the assignment of each text field t to one r_i ∈ R, which indicates the relationship between the entity e_object mentioned in t and e_topic.", "Open Relation Extraction (OpenIE): Given a pair of text fields (i, j), Open Relation Extraction is a binary prediction of whether i is a relation string indicating a relationship between the entity e_object mentioned in j and e_topic.", "Unlike prior work that requires learning a model specific to the semi-structured website targeted for extraction, we look at zero-shot extraction.", "Given a semi-structured website W targeted for extraction, zero-shot extraction is the learning of a model without any use of pages from W during training.", "We consider two zero-shot settings: Unseen-Website Zero-shot Extraction is the learning of a model without any use of pages from W, but with pages from some other website(s) in vertical V during training.", "Unseen-Vertical Zero-shot Extraction is the learning of a model without any use of pages from W or of pages from any website in vertical V during training.", "Figure 2 depicts our approach for zero-shot relation extraction (detailed in Section 5), leveraging a web page representation that will capture the", "similarities in visual and textual semantics across websites (Section 4).", "Our web page representation module first converts each page into a layout graph (Section 4.1) that abstracts away the details of the page structure while maintaining the adjacency relationships between text fields.", "We represent each text field with an initial feature vector of visual and textual attributes.", "This input is passed into a graph neural network that allows information to flow between nodes, producing a new text-field representation that captures contextual information (Section 4.2).", "To obtain a web page encoding, we leverage a pre-training step with an auxiliary loss function L_pre that encourages the model to produce an intermediate representation useful for IE.", "This is performed via a three-way classification that determines whether a text field contains a relation name, the object of some relation, or irrelevant text (Section 4.3).", "After pre-training, the weights of this GNN are frozen, and it can be applied to new pages, with its output used as input to a relation extraction module optimized with a task-specific loss function L_task, where the task is either OpenIE or ClosedIE, described in Section 5.", "The resulting approach minimizes our overall loss L_ZSCERES, with: L_ZSCERES = L_pre + L_task. (1) 4 Web Page Encoder The key idea behind our solution is to train webpage representations to capture the fundamental similarities in visual and textual semantics with which websites express relations, objects, and their relationships.", "The fundamental characteristics we capture, generalizable across templates and verticals, thus allow us to carry over our knowledge across websites and enable zero-shot extraction.", "There are two key parts in our solution.", "First, we build a graph to capture the layout relationships in a more abstract form that allows us to more easily learn the common features across different sites, such as the fact that relation strings are often to the left of or 
above their objects (Section 4.1).", "Second, we apply a Graph Neural Network (GNN) to learn representations for each node capturing contextual information about its neighborhood on the webpage (Section 4.2), allowing information to flow through the nodes and provide context (e.g., flowing from Cast to the far-away node Uma Thurman via the closer node Ethan Hawke in Figure 3).", "Figure 3: A cropped portion of the detail page from allmovie.com for the film Tape.", "This representation will be useful for relation extraction, as described in Section 5.", "We encode the layout relationships between text fields in the form of a graph, G, consisting of a set of nodes N, each corresponding to a text field, and a set of edges E corresponding to relationships between the text fields.", "The edges capture three forms of adjacency, as shown in the example in Figure 3: Horizontal: Edges are added when two text fields are horizontal neighbors on the page; that is, they have a shared vertical location and there are no other text fields between them.", "Vertical: Edges are added when two text fields are vertical neighbors on the page; that is, they have an overlapping horizontal location and there are no other text fields between them.", "DOM: Edges are added when two text fields are siblings or cousins in the DOM tree; that is, the absolute XPaths identifying their locations differ only at a single index value.", "To build a representation of each text field that incorporates the surrounding page context, we use Graph Attention Networks (GAT) (Velickovic et al., 2018).", "The feature vector for each text field (described below) and the page graph form the input to a GAT, which then produces a new representation for each text field based on the surrounding context in the graph: h_i^l = σ( Σ_{j∈N_i} α_ij W_G^l h_j^{l-1} ), (2) where N_i is the set of neighbors of node i in the graph, and h_j^{l-1} is the representation of node j from the preceding layer; h_j^0 indicates the input features for the node.", "(For each node, we add a self-loop to the graph; that is, we include i in N_i.)", "W_G^l is a learned weight matrix applied to the node features from layer l-1, and σ is a non-linear function, in our case a ReLU.", "The attention weight α_ij determines how much a node's representation is influenced by each of its neighbors, and is calculated as follows: α_ij = exp( σ( a^T [W_G^l h_i^{l-1}; W_G^l h_j^{l-1}] ) ) / Σ_{k∈N_i} exp( σ( a^T [W_G^l h_i^{l-1}; W_G^l h_k^{l-1}] ) ), (3) where a is a weight vector applied to the concatenation (represented by ;) of the two nodes' features as transformed by W_G^l, and σ is a ReLU.", "This produces a new contextualized set of features for each node that is informed by the surrounding page context.", "We describe the original input features for each text field in the next section.",
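For concreteness, here is a from-scratch PyTorch sketch of one such attention layer (Eqs. 2-3); the actual system uses DGL's implementation, and the dense n × n attention below is chosen for clarity, not efficiency.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    # One graph-attention layer over the page layout graph (Eqs. 2-3).
    # adj is a boolean adjacency matrix that already includes self-loops.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # W_G^l
        self.a = nn.Linear(2 * out_dim, 1, bias=False)   # attention vector a

    def forward(self, h, adj):
        z = self.W(h)                                    # (n, out_dim)
        n = z.size(0)
        # all pairwise concatenations [W_G h_i ; W_G h_j]
        pairs = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                           z.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.relu(self.a(pairs)).squeeze(-1)            # sigma(a^T [...]), (n, n)
        e = e.masked_fill(~adj, float("-inf"))           # restrict to neighbors N_i
        alpha = torch.softmax(e, dim=1)                  # Eq. (3)
        return F.relu(alpha @ z)                         # Eq. (2)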
"For each text field on the page, we produce an initial feature vector containing both a visual feature vector V and a textual feature vector T.", "We define the input feature vector h_i^0 for text field i as: h_i^0 = [T(i); V(i)], (4) where ; represents concatenation.", "Visual Features: A numeric feature vector is constructed representing the bounding-box coordinates of the text field, the height and width of the bounding box, and the font size, along with one-hot features representing the typeface, font weight, font style, color, and text alignment.", "Textual Features: In ClosedIE, to capture the semantics of the text, the textual content of the text field is processed with a pre-trained BERT (Devlin et al., 2018) model.", "To produce a representation of the entire text field, we simply average the BERT-Base output for each token in the text field.", "For OpenIE, since the goal is to generalize to entirely new subject verticals that may contain text not seen during training, only a single textual feature is used (this feature is also used during ClosedIE): the percentage of pages on the site on which the string in the text field appears.", "This frequency measure helps differentiate relation strings, which are likely to be common, from object strings, which are more likely to be rare.", "To encourage the GNN weights to capture the features necessary to represent relationships on the page, we use a pre-training step to learn the GNN representation before incorporating it into the extraction model.", "The pre-training task is a simplified form of the OpenIE task.", "To speed up training by avoiding the pairwise decisions necessary for OpenIE, we instead perform a multi-class classification of each text field into a class c in the set {Relation, Object, Other}: p(c | h_i^l; θ) = softmax(W_pre h_i^l), (5) where h_i^l is the output of the GNN for the text field, W_pre is a weight matrix, and θ comprises W_G and W_pre.", "Given a training set with T text fields, each with a ground-truth class y_i^pre, we minimize the cross-entropy loss L_pre: L_pre = -Σ_{i=1}^{T} log p(y_i^pre | h_i^l, θ). (6)", "To discourage overfitting to spurious details in the small number of websites in our training set, we freeze the GNN weights after pre-training and do not update them during the full OpenIE training.", "After pre-training, we discard the linear layer W_pre, since it is not needed for subsequent steps; instead, we directly use the GNN output h^l.",
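A minimal sketch of the pre-training head and loss (Eqs. 5-6), assuming a PyTorch setup; realizing W_pre as an nn.Linear is our choice, and cross_entropy supplies both the softmax of Eq. (5) and the negative log-likelihood of Eq. (6).

import torch.nn as nn
import torch.nn.functional as F

class PretrainHead(nn.Module):
    # Three-way classifier over {Relation: 0, Object: 1, Other: 2};
    # discarded after pre-training, when only the GNN output h^l is kept.
    def __init__(self, d):
        super().__init__()
        self.W_pre = nn.Linear(d, 3)

    def loss(self, h_l, y_pre):
        # h_l: (n_fields, d) GNN outputs; y_pre: (n_fields,) class ids.
        return F.cross_entropy(self.W_pre(h_l), y_pre)   # L_pre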
"Once we have the new representation h_t^l of each text field t produced by the above GNN process, we can perform our final classification.", "For OpenIE, the classification decision must be made over a pair of text fields, i and j, the first containing the candidate relation string and the second containing the candidate object string.", "To avoid examining all possible pairs of fields, we first apply the candidate pair identification algorithm from Lockard et al. (2019), which filters down to a set of potential pairs based on the physical and layout distance between text fields.", "For each candidate pair, we concatenate the GNN-produced contextual features h^l for both text fields with the original features h^0 for both text fields (since some information can be diluted in the GNN), as well as a pairwise feature vector that simply contains the horizontal and vertical distance between the two text fields, and pass them into a binary classifier: r_i^OIE = FNN([h_i^0; h_j^0; h_i^l; h_j^l; pairwise_{i,j}], θ_OIE), (7) where FNN is a feed-forward neural network with parameters θ_OIE, ; indicates concatenation, and r_i^OIE is the predicted probability that the two text fields constitute a (relation, object) pair.", "We then optimize the cross-entropy loss across the training examples T, with y_i^OIE = 1 if the pair is positive: L_OIE = -Σ_{i=1}^{T} [ y_i^OIE log r_i^OIE + (1 - y_i^OIE) log(1 - r_i^OIE) ]. (8)", "5.2 ClosedIE For ClosedIE, we perform a multi-class classification using the contextual representation produced by the GNN (h_i^l) along with the original features (h_i^0) for text field i: r_i^CIE = FNN([h_i^0; h_i^l], θ_CIE), (9) where FNN is a feed-forward neural network parameterized by θ_CIE, ; indicates concatenation, and r_i^CIE is the predicted probability of each relation r in the set R.", "We optimize the cross-entropy loss L_CIE: L_CIE = -Σ_{i=1}^{T} log p(y_i^CIE | h_i^0, h_i^l, θ_CIE), (10) where y_i^CIE is the true class for example i.", "For both ClosedIE and OpenIE, we use one hidden layer in the feed-forward network.",
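The OpenIE classifier of Eqs. (7)-(8) can be sketched as below; the hidden size of 100 follows the experimental setup reported later, the sigmoid output and BCE loss realize the binary cross-entropy of Eq. (8), and all class and argument names are illustrative.

import torch
import torch.nn as nn

class OpenIEClassifier(nn.Module):
    # FNN of Eq. (7): one hidden layer over the concatenated original
    # features, GNN features, and the 2-d pairwise distance vector.
    def __init__(self, d0, dl, hidden=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * d0 + 2 * dl + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, h0_i, h0_j, hl_i, hl_j, pairwise_ij):
        x = torch.cat([h0_i, h0_j, hl_i, hl_j, pairwise_ij], dim=-1)
        return self.net(x).squeeze(-1)   # r^OIE: probability of a pair

loss_fn = nn.BCELoss()                   # cross-entropy of Eq. (8)

The ClosedIE head of Eqs. (9)-(10) is analogous: a one-hidden-layer network over [h_i^0; h_i^l] with a softmax over R and a multi-class cross-entropy loss.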
"For both OpenIE and ClosedIE, our primary dataset is the extended version (Lockard et al., 2019) of the SWDE dataset (Hao et al., 2011), which contains gold labels for OpenIE extractions for 21 English-language websites (each with one template) in three subject verticals (Movie, NBA, and University), with between 400 and 2,000 pages per site.", "We generated ClosedIE labels by converting the OpenIE labels to ClosedIE labels via a manual alignment of OpenIE relations between websites, giving a set of 18 relations for the Movie vertical, 14 for NBA, and 13 for University.", "More information on training data creation and a complete listing of ClosedIE relations is available in the Appendix.", "We used three SWDE Movie sites (AMCTV, AllMovie, and IMDb) as a development set and did not evaluate on them for the reported results.", "For each model tested (both our own and the baselines), we classify the training setting into the following categories indicating the level of vertical or site-specific knowledge used, in decreasing order of difficulty.", "Level I, Unseen-Vertical Zero-shot (OpenIE only): A model is trained on sites from two of the three verticals (e.g., NBA and University) and applied to sites from the other vertical (Movie).", "This is the hardest case and is important when we wish to extract knowledge from new verticals where we do not have any prior knowledge or annotations.", "Level II, Zero-shot with Vertical Knowledge: A model is trained on all sites but one (spanning Movie, NBA, and University) and then applied to the held-out site.", "As in cross-validation, experiments are repeated with each site taking a turn as the held-out site.", "This is easier than Level I but is still important for a new website that may not have data overlapping with other websites in the same vertical.", "For the ClosedIE setting, we train only on in-vertical sites.", "Level III, Site-specific Knowledge: This is the traditional setting used by two of our baselines, where we have seed knowledge overlapping with the website data that allows training a model specific to the website.", "Whereas Levels I and II are both zero-shot settings, Level III is not, as it allows site-specific training data via weak supervision.", "(We do not present results using full supervision from manual annotations, since it is known from prior work (e.g., Gulhane et al. (2011)) that full supervision from the target website yields highly accurate semi-structured extractors; we note that ZSCERES also achieves comparable results (around 0.95 F1) in this setting.)", "We repeated our experiments 10 times and report the results averaged across the runs.", "For OpenIE, we follow the lenient scoring method for SWDE introduced by Lockard et al. (2019), scoring an extraction as correct if the relation string matches any of the acceptable surface forms listed in the ground truth for that object.", "Models are constructed in PyTorch (Paszke et al., 2017), with graph functions implemented in DGL (Wang et al., 2019) and optimization performed using Adam (Kingma and Ba, 2014) with a batch size of 20.", "For OpenIE, we use a hidden-layer size of 25 for the GAT and 100 for the feed-forward layer.", "For ClosedIE, we use a hidden-layer size of 200 for all layers.", "We use a 2-layer GAT and dropout of 0.25.", "We obtain visual features by rendering the page using the headless Chrome browser and querying the values using Selenium.", "Extraction Threshold: Since our zero-shot setting means we cannot use a development set of pages from the target site to tune the decision threshold, we instead set the threshold for each experiment to the value that attains the optimal F1 in the experiments where other sites were held out.", "OpenIE Postprocessing Rules: To ensure consistency among the extracted values, we keep only the highest-confidence extraction when the same text field is extracted as both a relation and an object, or when multiple relations are extracted for the same object.", "In addition, some pages in the dataset contain relational tables, from which we sometimes extract the column headers as relations with the column contents as objects.", "While we believe a post-processing step could potentially recover these relational contents from our extractions, the SWDE data does not contain ground truth for such facts.", "Instead, we apply the heuristics described by Cafarella et al. (2008) to identify these tables and remove them from our extractions.", "Colon Baseline (OpenIE): This baseline identifies all text fields ending in a colon (:) and assumes they are relation strings, then extracts the text field to the right or below, whichever is closer, as the object.", "We consider it Level I knowledge since it requires no training.", "WEIR (OpenIE): This approach by Bronzi et al. 
(2013) discovers relations by aligning multiple pages about the same entity.", "Because it requires sites to be grouped by vertical and uses a gazetteer list of entity names for the alignment, it has Level III knowledge.", "OpenCeres (OpenIE): This applies the model by Lockard et al. (2019), which requires a knowledge base matching some facts presented on the target website, using Level III knowledge.", "ZSCERES-FFNN (Feed-forward neural network): This model takes the same features and training data as the full ZSCERES model but removes the GNN component, with versions tested with both Level I (ZSCERES-FFNN Unseen-Vertical) and Level II (ZSCERES-FFNN Unseen-Website) knowledge.", "ZSCERES-GNN: This applies the full model described in Section 4.2, with versions tested with both Level I (ZSCERES-GNN Unseen-Vertical) and Level II (ZSCERES-GNN Unseen-Website) knowledge.", "Level-I Knowledge: Table 1 shows that ZSCERES is able to extract facts in entirely new subject verticals 31% more accurately than the colon baseline.", "Across all SWDE sites (micro-averaging across all extractions), ZSCERES-GNN achieves an F1 of 0.45, in comparison with 0.43 for ZSCERES-FFNN, showing that the additional information provided by the page encoder allows for a better representation of the relationships between text fields.", "By successfully learning general patterns of relational presentation on webpages, ZSCERES-GNN is able to train solely on a set of 16 websites about Movies and NBA players, and then extract from University websites more accurately than the WEIR and OpenCeres systems, which take advantage of Level III knowledge to learn models specific to those University sites.", "While OpenCeres's rich vertical knowledge allows it to attain better results in Movie and NBA, ZSCERES-GNN still posts much stronger results than the other baselines in these two verticals.", "Level-II Knowledge: Figure 4 shows that adding the in-vertical sites to the training set (but still withholding the test site) allows the model to achieve better performance than with the Level I training set that uses only out-of-vertical data.", "Table 2 shows the results for ClosedIE extraction.", "ZSCERES-GNN attains an overall F1 of 0.58 averaged across the three verticals.", "This significantly outperforms the feed-forward model that did not use the GNN, which attained an F1 of 0.46.", "Figure 5: Performance on the ClosedIE Movie vertical increases significantly as more sites are added to the training data.", "While our performance on this dataset is far below 
and spatial adjacency edges contribute to the success of the page layout graph for the GNN.", "In the ClosedIE setting, the text and layout relationships alone will generally contain sufficient information to make an extraction, while in OpenIE the visual elements (such as whether text is bold or underlined) are a strong source of consistency across websites.", "OpenIE: To understand what cases our ZSCERESGNN model is missing, we sampled 100 error cases in each vertical from the Unseen-Vertical experiment and manually examined them.", "Some examples of both erroneous and correct extractions are shown in Table 4 in the Appendix.", "False positives were largely due to the presence of two different types of n-ary relationships on the page.", "The first class of errors involving n-ary relationships, making up 43% of all false positives, were where several facts have a multi-way relationship with the page topic, but individually the fields are not meaningful.", "For example, the NBA site US-AToday includes a Latest notes section with links to several articles relevant to the page topic entity, mentioning the date, headline, and summary.", "We extract all of these objects with the Latest notes relation, but to obtain meaningful knowledge it would be necessary to additionally associate the correct date, headline, and summary with each other.", "While we can envision methods for doing this via post-processing, the SWDE benchmark considers these to be errors.", "In the second class, ZSCERES correctly extracted (relation, object) pairs, but from page sections that contain facts about entities other than the page topic.", "For example, on the MatchCollege site, a section of Similar Local Colleges contains some of the same relations presented for the page topic, in similar formatting.", "These types of errors made up another 6% of false positives.", "Of the remaining errors, 33% were due to the extraction of pairs where the extracted relation did not represent a relationship, while another 14% were due to the extraction of pairs with a correct relation string and incorrect object.", "Most false negatives occurred in long vertical lists, where some values were extracted, but not all.", "ClosedIE : False negatives were most likely to occur on long lists of values (such as cast lists), where values toward the bottom of the list were sometimes missed.", "Recall also suffered on relations where the relation name varied significantly from site to site, or where ambiguity existed.", "For example, the string Produced by is used by some sites to indicate the producer of the film, while on other sites it indicates the production company.", "We have introduced a zero-shot method for learning a model for relation extraction from semi-structured documents that generalizes beyond a single document template.", "Moreover, this approach enables OpenIE extraction from entirely new subject verticals where no prior knowledge is available.", "By representing a webpage as a graph defined by layout relationship between text fields, with text fields associated with both visual and textual features, we attain a 31% improvement over the baseline for new-vertical OpenIE extraction.", "Future extensions of this work involve a more general pre-training objective allowing for the learned representations to be useful in many tasks as well as distantly or semi-supervised approaches to benefit from more data.", "We would like to acknowledge grants from ONR N0001418-1-2826, DARPA N66001-19-2-403, NSF (IIS1616112, IIS1252835), Allen 
Distinguished Investigator Award, and Sloan Fellowship." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "result", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "objective", "objective", "result", "objective", "abstain", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "other" ]
[ "Large-scale pretrained language models are the major driving force behind recent improvements in performance on the Winograd Schema Challenge, a widely employed test of commonsense reasoning ability.", "We show, however, with a new diagnostic dataset, that these models are sensitive to linguistic perturbations of the Winograd examples that minimally affect human understanding.", "Our results highlight interesting differences between humans and language models: language models are more sensitive to number or gender alternations and synonym replacements than humans, and humans are more stable and consistent in their predictions, maintain a much higher absolute performance, and perform better on non-associative instances than associative ones.", "Overall, humans are correct more often than out-of-the-box models, and the models are sometimes right for the wrong reasons.", "Finally, we show that fine-tuning on a large, task-specific dataset can offer a solution to these issues.", "Large-scale pre-trained language models have recently led to improvements across a range of natural language understanding (NLU) tasks (Devlin et al., 2019; Radford et al., 2019; Yang et al., 2019), but there is some scepticism that benchmark leaderboards do not represent the full picture (Kaushik and Lipton, 2018; Jumelet and Hup-kes, 2018; Poliak et al., 2018).", "An open question is whether these models generalize beyond their training data samples.", "In this paper, we examine how pre-trained language models generalize on the Winograd Schema Challenge (WSC).", "task takes the form of a binary reading comprehension test where a statement with two referents and a pronoun (or a possessive adjective) is given, and the correct antecedent of the pronoun must be chosen.", "Examples are chosen carefully to have a preferred reading, based on semantic plausibility rather than co-occurrence statistics.", "WSC examples come in pairs that are distinguished only by a discriminatory segment that flips the correct referent, as shown in Figure 1a.", "Levesque et al. 
define a set of qualifying criteria for instances and the pitfalls to be avoided when constructing examples (see 3.2).", "These combine to ensure an instance functions as a test of what they refer to as 'thinking' (or common sense reasoning).", "Recent work has reported significant improvements on the WSC (Kocijan et al., 2019; Sakaguchi et al., 2019).", "As with many other NLU tasks, this improvement is primarily due to large-scale language model pre-training, followed by fine-tuning for the target task.", "We believe that further examination is warranted to determine whether these impressive results reflect a fundamental advance in reasoning ability, or whether our models have learned to simulate this ability in ways that do not generalize.", "In other words, do models learn accidental correlations in our datasets, or do they extract patterns that generalize in robust ways beyond the dataset samples?", "In this paper, we conduct experiments to investigate this question.", "We define a set of lexical and syntactic variations and perturbations for the WSC examples and use altered examples (Figure 1b) to test models that have recently reported improved results.", "These variations and perturbations are designed to highlight the robustness of human linguistic and reasoning abilities and to test models under these conditions.", "Contributions: We introduce a new Winograd Schema dataset for evaluating generalization across seven controlled linguistic perturbations.", "We use this dataset to compare human and language model sensitivity to those perturbations, finding marked differences in model performance.", "We present a detailed analysis of the behaviour of the language models and how they are affected by the perturbations.", "Finally, we investigate the effect of fine-tuning with large task-specific datasets, and present an error analysis for all models.", "Probing datasets: Previous studies have explored the robustness of ML models towards different linguistic phenomena (Belinkov and Glass, 2019), e.g., by creating challenge datasets such as the one introduced here.", "When predicting subject-verb agreement, Linzen et al. (2016) found that inserting a relative clause hurt the performance of recurrent networks.", "A large body of research has since emerged on probing pre-trained (masked) language models for linguistic structure (Goldberg, 2019; Hewitt and Manning, 2019; Lin et al., 2019; Clark et al., 2019) and analysing them via comparison to psycholinguistic and brain imaging data (Abnar et al., 2019; Ettinger, 2019; Abdou et al., 2019; Gauthier and Levy, 2019).", "(Footnote 1: Code and dataset can be found at: https://github.)", "(Footnote 2: This contrasts with our results with the Transformer-based architecture and is probably explained by memory loss in recurrent networks trained on short sequences.)", "Similarly, Gulordava et al. 
(2018) tested whether a Recurrent Neural Network can predict long-distance number agreement in various constructions, comparing natural and nonsensical sentences where RNNs cannot rely on semantic or lexical cues.", "Other recent work has attempted to probe these models for what is referred to as common sense or factual knowledge (Petroni et al., 2019; Feldman et al., 2019).", "Their findings show that these models do indeed encode such knowledge and can be used for knowledge base completion or common sense mining from Wikipedia.", "Clever Hans: A considerable amount of work has also been devoted to what might be described as the Clever Hans effect.", "This work has aimed to quantify the extent to which models are learning what we expect them to, as opposed to leveraging statistical artifacts.", "This line of work has to date revealed significant problems (and some possible solutions to those problems) with reading comprehension datasets (Chen et al., 2016; Kaushik and Lipton, 2018), natural language inference datasets (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018; Belinkov et al., 2019a; McCoy et al., 2019), and the story cloze challenge (Schwartz et al., 2017), among others.", "Winograd Schema Challenge: Trinh and Le (2018) first proposed using neural language models for the WSC, achieving an accuracy of 63.7% using an ensemble of 14 language models.", "Ruan et al. (2019) and Kocijan et al. (2019) fine-tune BERT (Devlin et al., 2019) on the PDP (Rahman and Ng, 2012) and an automatically generated MaskedWiki dataset, reaching an accuracy of 71.1% and 72.5% respectively.", "Meanwhile, Radford et al. (2019) report an accuracy of 70.7% without fine-tuning using the GPT-2 language model.", "Most recently, Sakaguchi et al. (2019) present an adversarial filtering algorithm which they use for crowd-sourcing a large corpus of WSC-like examples.", "Fine-tuning RoBERTa (Liu et al., 2019) on this, they achieve an accuracy of 90.1%.", "In an orthogonal direction, Trichelair et al. (2018) presented a timely critical treatment of the WSC.", "They classified the dataset examples into associative and non-associative subsets, showing that the success of the LM ensemble of Trinh and Le (2018) mainly resulted from improvements on the associative subset.", "Moreover, they suggested switching the candidate referents (where possible) to test whether systems make predictions by reasoning about the entirety of a schema or by exploiting statistical quirks of individual entities.", "This enables a study of robustness along different axes of linguistic variation.", "This type of study is rarely possible in NLP due to the large size of datasets used and the focus on obtaining improved results on said datasets.", "As a carefully constructed dataset which is thought to require true natural language understanding, the WSC presents an ideal testbed for this investigation.", "We define a suite of seven perturbations that can be applied to the 285 WSC examples, which we refer to as the original examples.", "These perturbations are designed to test the robustness of an answer to semantic, syntactic, and lexical variation.", "Each of the perturbations is applied to every example in the WSC (where possible), resulting in a dataset of 2,330 examples; an example of each type is shown in Table 1. 
Crucially, the correct referent in each of the perturbed examples is not altered by the perturbation.", "The perturbations are manually constructed, except for the sampling of names and synonyms.", "Further details can be found in Appendix E.", "Tense switch (TEN): Most WSC instances are written in the past tense and thus are changed to the present continuous tense (247 examples).", "The remaining 34 examples are changed from the present to the past tense.", "Number switch (NUM): Referents have their numbers altered: singular referents (and the relevant pronouns) are pluralised (223 examples), and plural referents are modified to the singular (30 examples).", "Sentences with names have an extra name added via conjunction; e.g., Carol is replaced with Carol and Susan.", "Possessives only mark possession on the second conjunct (John and Steve's uncle rather than John's and Steve's uncle).", "Gender switch (GEN): Each of the referents in the sentence has their gender switched by replacing their names with other randomly drawn frequent English names of the opposite gender.", "92% of the generated data involved a gender switch for a name.", "Though humans may be biased towards gender (Collins, 2011; Desmond and Danilewicz, 2010; Hoyle et al., 2019), the perturbations do not introduce ambiguity concerning gender, only the entity.", "(Footnote 3: Names sourced from https://github.com/AlessandroMinoccheri/human-names/tree/master/data)", "Voice switch (VC): All WSC examples, except for 210 and 211, are originally in the active voice and are therefore passivized.", "210 and 211 are changed to the active voice.", "65 examples could not be changed.", "Passive voice is known to be more difficult to process for humans (Olson and Filby, 1972; Feng et al., 2015).", "Relative clause insertion (RC): A relative clause is inserted after the first referent.", "For each example, an appropriate clause was constructed by first choosing a template such as who we had discussed or that is known for from a preselected set of 19 such templates.", "An appropriate ending, such as who we had discussed with the politicians, is then appended to the template depending on the semantics of the particular instance.", "Relative clauses impose an increased demand on working memory capacity, thereby making processing more difficult for humans (Just and Carpenter, 1992; Gibson, 1998).", "Adverbial qualification (ADV): An adverb is inserted to qualify the main verb of each instance.", "When a conjunction is present, both verbs are modified.", "For instances with multiple sentences, all main verbs are modified.", "Synonym/Name substitution (SYN/NA): Each of the two referents in an example is substituted with an appropriate synonym, or, if it is a name, is replaced with a random name of the same gender from the same list of names used for the gender perturbation.", "We expect that humans are robust to these perturbations because they represent naturally occurring phenomena in language; we test this hypothesis by collecting human judgements for the perturbed examples.", "We collect the judgments for the perturbed examples using Amazon Mechanical Turk.", "The annotators are presented with each instance where the pronoun of interest is boldfaced and in red font.", "They are also presented with two options, one for each of the possible referents.", "They are then instructed to choose the most likely option, in exchange for $0.12.", "Following Sakaguchi et al. 
(2019), each instance is annotated by three annotators and majority vote results are reported.", "(Table 1 lists an instance and a perturbed instance of each type, with counts; e.g., Original: Sid explained his theory to Mark but he couldn't convince him.)", "Results are reported later in Section 5.", "All three annotators agreed on the most likely option in 82-83% of the instances, except for gender, where a full agreement was obtained for only 78% of the instances.", "See Appendix B for further annotation statistics, a sample of the template presented to annotators, and restrictions applied to the pool of annotators.", "We did not require an initial qualification task to select participants.", "Constructing WSC problems is known to be difficult.", "Indeed, the original dataset was carefully crafted by domain experts, and subsequent attempts at creating WSC-like datasets by non-experts, such as in Rahman and Ng (2012), have produced examples which were found to be less challenging than the original dataset.", "Two likely pitfalls listed in Levesque et al. (2012) concern A) statistical preferences which make one answer more readily associated with the special discriminatory segment or other components of an example (this is termed Associativity, and it is described as non-Google-proofness in Levesque et al. (2012)); and B) inherent ambiguity which makes the examples open to other plausible interpretations.", "In what follows, we discuss these pitfalls, demonstrating that the perturbed examples remain resilient to both.", "(Footnote 4: Trichelair et al. (2018) find that 13.5% of examples from the original WSC might still be considered to be associative.)", "Quantifying Associativity: To verify that the perturbations have not affected the correctness of the original problems with regards to pitfall A, we employ pointwise mutual information (PMI) to test the associativity of both the original and perturbed examples.", "PMI is known to be a reasonable measure of associativity (Church and Hanks, 1990) and, among a variety of measures, has been shown to correlate best with association scores from human judgements of contextual word association (Frassinelli, 2015).", "We compute unigram PMI on the two corpora used to train BERT (see Appendix C for details).", "Figure 2 shows the divergence of the perturbed examples from the original WSC dataset.", "We estimate divergence as the average difference in PMI between the correct (C) and incorrect (I) candidates: Δ = pmi(c_j, x_j) − pmi(i_j, x_j), where X is either: i) the discriminatory segments, or ii) the full text of the example, and pmi(·, ·) is average unigram PMI.", "Δ can be seen as a measure of whether the correct or incorrect candidate is a better 'associative fit' for either the discriminatory segment or the full context, making the examples trivial to resolve.", "Observe that this difference in PMI declines for the perturbed examples, showing that the perturbed examples do not increase in associativity.", "Confirming Solvability: Three expert annotators are asked to solve the small subset of examples (99 in total across perturbations) which were annotated incorrectly by the majority vote of Mechanical Turk workers.", "To address pitfall B, the expert annotators are asked to both attempt to solve the instances and indicate if they believe them to be too ambiguous to be solved.", "The majority vote of the annotators determines the preferred referent or whether an instance is ambiguous.", "Out of a total of 99 examples, 10 were found to be ambiguous.", "Of the remaining 89 examples, 67 were answered correctly by the majority vote.",
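To make the Δ measure above concrete, here is a minimal Python sketch. The `pmi_table` lookup and the word-level averaging are illustrative assumptions, not the paper's exact implementation (which computes unigram PMI over BERT's training corpora):

```python
def avg_pmi(candidate_words, segment_words, pmi_table):
    """Average unigram PMI over all (candidate word, segment word) pairs.
    pmi_table is a hypothetical precomputed lookup: (w1, w2) -> PMI score."""
    pairs = [(c, x) for c in candidate_words for x in segment_words]
    return sum(pmi_table.get(p, 0.0) for p in pairs) / len(pairs)

def pmi_delta(correct, incorrect, segment, pmi_table):
    """Delta = pmi(c_j, x_j) - pmi(i_j, x_j); a large positive value means the
    correct candidate is the better 'associative fit' for the segment."""
    return avg_pmi(correct, segment, pmi_table) - avg_pmi(incorrect, segment, pmi_table)

# Toy usage with made-up PMI values:
toy_pmi = {("lawyer", "court"): 2.1, ("plumber", "court"): 0.3}
print(pmi_delta(["lawyer"], ["plumber"], ["court"], toy_pmi))  # ~1.8
```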
"See Appendix D for more details.", "Our experiments are designed to test the robustness of language models to the Winograd Schema perturbations described in the previous section.", "Evaluation: Models are evaluated using two types of measures.", "The first is accuracy.", "For each of the perturbations, we report (a) the accuracy on the perturbed set (Perturbation accuracy), (b) the difference in accuracy on the perturbed set and on the equivalent subset of the original dataset: ΔAcc. = Perturbation accuracy − Original subset accuracy, and (c) Pair accuracy, defined as the number of pairs for which both examples in the pair are correctly answered divided by the total number of pairs.", "The second measure is stability, S.", "This is the proportion of perturbed examples P′ for which the predicted referent is the same as the original prediction P: S = |{(p′_i, p_i) | p′_i ∈ P′, p_i ∈ P, p′_i = p_i}| / |P|.", "Since the perturbations do not alter the correct referent, this provides a strong indication of robustness towards them.", "Baseline: We take the unigram PMI between candidates and discriminatory segments (see 3.2) as a baseline.", "We expect that this simple baseline will perform well for instances with a high level of associativity but not otherwise.", "Language Models: Our analysis is applied to three out-of-the-box language models (LMs): BERT (Devlin et al., 2019), ROBERTA (Liu et al., 2019), and XLNET (Yang et al., 2019).", "These models are considered to be the state-of-the-art for the wide variety of natural language understanding tasks found in the GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019) benchmarks.", "We use the large pre-trained publicly available models (Wolf et al., 2019).", "Fine-tuned Language Models: We also examine the effect of fine-tuning language models.", "BERT+WW uses BERT fine-tuned on the MaskedWiki and WscR datasets, which consist of 2.4M and 1,322 examples (Kocijan et al., 2019), and RoBERTa+WG is fine-tuned on WinoGrande XL, which consists of 40,938 adversarially filtered examples (Sakaguchi et al., 2019).", "Both fine-tuned models have been reported by recent work to achieve significant improvements on the WSC.", "Scoring: To score the two candidate referents in each WSC instance we employ one of two mechanisms.", "The first, proposed in Trinh and Le (2018) and adapted to masked LMs by Kocijan et al. (2019), involves computing the probability of the two candidates c_1 and c_2, given the rest of the text in the instance s.", "To accomplish this, the pronoun of interest is replaced with a number of MASK tokens corresponding to the number of tokens in each of c_1 and c_2.", "The probability of a candidate, p(c | s), is then computed as the average of the probabilities assigned by the model to the candidate's tokens, and the maximum-probability candidate is taken as the answer.", "This scoring method is used for all models, except ROBERTA+WG.", "For that, we follow the scoring strategy employed in Sakaguchi et al. (2019), where an instance is split into context and option using the candidate answer as a delimiter.", "(Footnote 7: https://github.com/huggingface/pytorch-transformers)", "(Footnote 8: instances are formatted as [CLS] context [SEP] option [SEP]; e.g., [CLS] The sculpture rolled off the shelf because [SEP] wasn't anchored [SEP].)",
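The first scoring mechanism reduces to a short routine. The following is a minimal sketch, assuming the HuggingFace transformers package (the document's footnote points to pytorch-transformers); the model name, the "_" placeholder convention, and the example sentence are illustrative, not the authors' exact code:

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForMaskedLM.from_pretrained("bert-large-uncased").eval()

def candidate_score(sentence: str, candidate: str) -> float:
    # Replace the pronoun slot "_" with one [MASK] per candidate sub-token.
    cand_tokens = tokenizer.tokenize(candidate)
    masked = sentence.replace("_", " ".join([tokenizer.mask_token] * len(cand_tokens)))
    inputs = tokenizer(masked, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)  # (1, seq_len, vocab)
    mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    cand_ids = tokenizer.convert_tokens_to_ids(cand_tokens)
    # Average the probabilities the model assigns to the candidate's tokens.
    token_probs = [probs[0, pos, tid].item() for pos, tid in zip(mask_positions, cand_ids)]
    return sum(token_probs) / len(token_probs)

sentence = "The trophy doesn't fit into the brown suitcase because _ is too large."
answer = max(["the trophy", "the suitcase"], key=lambda c: candidate_score(sentence, c))
```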
"The blank is filled with either option 1 (the sculpture) or 2 (the trophy).", "Following the experimental protocol, we evaluate the three out-of-the-box language models and the two fine-tuned models on the original WSC and each of the perturbed sets.", "Table 2 shows Perturbation accuracy results for all models and contrasts them with human judgements and the PMI baseline.", "Humans maintain a much higher performance compared to out-of-the-box LMs across perturbations.", "The difference in accuracy between the perturbed and original examples, ΔAcc., as defined in Section 4, is shown in Figure 4.", "A general trend of decrease can be observed for both models and humans across the perturbations.", "This decline in accuracy is on average comparable between models and humans, with a handful of exceptions.", "Taking the large gap in absolute accuracy into account, this result might be interpreted in two ways.", "If a comparison is made relative to the upper bound of performance, human performance has suffered from a larger error increase.", "Alternately, if we compare relative to the lower bound of performance, then the decline in the already low performance of language models is more meaningful, since 'there is not much more to lose'.", "A more transparent view can be gleaned from the stability results shown in Table 3.", "Here it can be seen that the three out-of-the-box LMs are substantially more likely to switch predictions due to the perturbations than humans.", "Furthermore, we observe that the LMs are least stable for word-level perturbations like gender (GEN), number (NUM), and synonym or name replacement (SYN/NA), while humans appear to be most affected by sentence-level ones, such as relative clause insertion (RC) and voice perturbation (VC).", "To better understand the biases acquired through pre-training which are pertinent to this task, we consider a) a case of essential feature omission and b) the marginal cases where LMs answer very correctly or incorrectly, in both the original and perturbed datasets.", "We present analysis for BERT, but similar findings hold for the other LMs.", "(Footnote 9: It is interesting to note that XLNet is trained on CommonCrawl, which indexes an online version of the original WSC found here: https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html.)", "Masking discriminatory segments results in identical sentence pairs, because these segments are the only part of a sentence that sets WSC pairs apart (see Figure 1a).", "To determine whether there is a bias in the selectional preference for one of the candidates over the other, we test BERT on examples where these discriminatory segments have been replaced with the MASK token.", "An unbiased model should be close to random selection, but BERT consistently prefers (by a margin of 25-30%) the candidate which appears second in the text to the one appearing first, for all perturbations except voice, where it prefers the first.", "This observation holds even when the two referents are inverted, which is possible for the 'switchable' subset of the examples as shown in Trichelair et al. (2018).", "This indicates that the selections are not purely semantic but also syntactic or structural, and it points towards BERT having a preference for referents in the object role.", "Detailed results are presented in Appendix F.",
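The stability measure S discussed above (defined in Section 4) reduces to a few lines. A minimal sketch, assuming predictions are given as aligned lists of referent strings:

```python
def stability(original_preds, perturbed_preds):
    """S = |{ i : p'_i == p_i }| / |P| -- the fraction of perturbed examples
    whose predicted referent matches the prediction on the original example."""
    assert len(original_preds) == len(perturbed_preds)
    same = sum(p == q for p, q in zip(original_preds, perturbed_preds))
    return same / len(original_preds)

# e.g., stability(["trophy", "suitcase"], ["trophy", "trophy"]) == 0.5
```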
"Marginal examples are found where the model assigns a much higher probability to one referent over the other.", "We extract the top 15% of examples where the correct candidate is preferred by the largest margin (P_correct ≫ P_incorrect) and the bottom 15% where the incorrect one is preferred (P_incorrect ≫ P_correct).", "Surprisingly, we find that there is a large overlap (50%-60%) between these two sets of examples, both in the original and the perturbed datasets.", "For the examples which are both the most correct and incorrect, BERT strongly prefers one of the candidates without considering the special discriminatory segment which flips the correct referent.", "Indeed, we find that the correlation between the probability assigned by BERT to a referent when it is the correct referent and when it is not is very strong and significant, with Spearman's ρ ≥ 0.75 across perturbations (see Appendix G for details).", "(Footnote 10: To clarify, consider the following original WSC pair: (i) Alice looked for her friend Jade in the crowd. Since she always has good luck, Alice spotted her quickly. (ii) Alice looked for her friend Jade in the crowd. Since she always wears a red turban, Alice spotted her quickly. The first example gives P_correct ≫ P_incorrect by the largest margin, and its counterpart gives P_incorrect ≫ P_correct by the largest margin. In other words, the model assigns a much higher probability to Alice in both cases.)", "The accuracy and stability results (Tables 2 and 3) indicate that fine-tuning makes language models more robust to the perturbations.", "ROBERTA+WG, in particular, is the most accurate and most stable model.", "While impressive, this is not entirely surprising: fine-tuning on task-specific datasets is a well-tested recipe for bias correction (Belinkov et al., 2019b).", "Indeed, these results provide evidence that it is possible to construct larger fine-tuning datasets whose distribution is correct for the WSC.", "We note that both fine-tuned models perform worst on the VC and RC perturbations, which may not frequently occur in the crowd-sourced datasets used for fine-tuning.", "To test this intuition, we apply a dependency parser (UDPipe (Straka et al., 2016)) to the WinoGrande XL examples, finding that only 5% of the examples are in the passive voice and 6.5% contain relative clauses.", "To see how much data is required to achieve robustness, we fine-tune ROBERTA on the five WinoGrande training set splits defined by Sakaguchi et al. (2019): XS (160), S (640), M (2,558), L (10,234), and XL (40,398).", "(Footnote 11: Number of examples in each set.)", "Figure 3 shows the average accuracy and stability scores for the models fine-tuned on each of the training splits.", "We observe that the two smallest splits do not have a sufficient number of examples to adequately bias the classification head, leading to near-random performance.", "The model fine-tuned on the M split, with just 2,558 examples, is, however, already able to vastly outperform the non-fine-tuned ROBERTA.", "Increasing the number of examples five-fold and twenty-fold leads to significant but fast-diminishing improvements.", "(Footnote 12: Note that the stability score for the model fine-tuned on XL in Figure 3 is different from that reported in Table 3. In the latter, we reported results from the model provided by Sakaguchi et al. 
(2019), rather than the model we fine-tuned ourselves. Since we utilise identical hyperparameters to theirs for fine-tuning, this anomalous difference in score may perhaps be explained by a difference in initialization, as suggested in Dodge et al. (2020).)", "How do perturbations affect token probability distributions?", "To obtain a holistic view of the effect the perturbations have on LMs and fine-tuned LMs, we analyze the shift in the probability distribution (over the entire vocabulary) which a model assigns to a MASK token inserted in place of the pronoun of interest.", "We apply probability distribution truncation with a threshold of p = 0.9, as proposed in Holtzman et al. (2019), to filter out the uninformative tail of the distribution.", "Following this, we compute the Jensen-Shannon distance between this dynamically truncated distribution for an original example and each of its perturbed counterparts.", "Figure 5 shows the average of this measure over the subset of the 128 examples which are common to all perturbations.", "Overall, we observe that large shifts in the distribution correspond to lower stability and accuracy scores, and that fine-tuned models exhibit lower shifts than their non-fine-tuned counterparts.", "The difference in shifts between out-of-the-box models and their fine-tuned counterparts is lower for the VC, RC and ADV perturbations, meaning that when fine-tuned, the models' probability distributions are roughly just as divergent for these perturbations as they were before fine-tuning.", "We hypothesize the same reasons we did in 5.2, which is that these examples are just under-represented in our fine-tuning corpus; indeed, these results roughly correspond to the differences in ΔAcc. from Figure 4.", "Further details about the number of examples excluded via the probability distribution truncation and other measures of the perturbations' effect can be found in Appendix G.", "(Figure 4: ΔAcc. for each perturbation (TEN, NUM, GEN, VC, RC, ADV, SYN/NA) for BERT, BERT+WW, RoBERTa, RoBERTa+WG, XLNet, Humans, and the PMI baseline.)", "Pair Accuracy: Here we consider a more challenging evaluation setting where each WSC pair is treated as a single instance.", "Since the WSC examples are constructed as minimally contrastive pairs (Levesque et al., 2012), we argue that this is an appropriate standard of evaluation.", "Consider again the example in Figure 1a.", "It is reasonable to suppose that, for an answerer which truly 'understands' (Levesque et al., 2012), being able to link the concepts heavy and son in one of the resolutions is closely related and complementary to linking the concepts weak and man in the other.", "The results for this evaluation are shown in Figure 6.", "They show that human resolution of the problems exhibits greater complementarity compared to the language models; human pair accuracy (pair) is closer to perturbation accuracy (single) than is the case for the LMs.", "Furthermore, human performance on pair accuracy is more robust to perturbations when compared to the models.", "Indeed, the large gap between pair accuracy and perturbation accuracy raises some doubts about the performance of these models.", "However, ROBERTA-WG is a notable exception, showing near-human robustness to pair complementarity.", "Associativity: Next, we examine the effect of associativity on performance.", "Figure 7 shows accuracy results for all perturbations on the associative and non-associative subsets of the WSC as labelled by Trichelair et al. (2018).",
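The truncated Jensen-Shannon measure described above can be sketched as follows. This assumes SciPy's jensenshannon distance and NumPy; the exact tie-breaking at the truncation threshold is an assumption, not the paper's specification:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def truncate(p: np.ndarray, top_p: float = 0.9) -> np.ndarray:
    """Keep the smallest set of tokens whose cumulative probability reaches
    top_p (nucleus-style truncation, Holtzman et al., 2019), then renormalize."""
    order = np.argsort(p)[::-1]
    cutoff = int(np.searchsorted(np.cumsum(p[order]), top_p)) + 1
    q = np.zeros_like(p)
    q[order[:cutoff]] = p[order[:cutoff]]
    return q / q.sum()

def distribution_shift(p_original: np.ndarray, p_perturbed: np.ndarray) -> float:
    """Jensen-Shannon distance between the truncated [MASK] distributions."""
    return float(jensenshannon(truncate(p_original), truncate(p_perturbed)))

# Toy usage with random distributions over a 100-word vocabulary:
rng = np.random.default_rng(0)
p1, p2 = rng.dirichlet(np.ones(100)), rng.dirichlet(np.ones(100))
print(distribution_shift(p1, p2))
```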
(2018).", "We observe that the difference between associative and non-13 As a sanity check, consider random pairings of WSC examples.", "associative is much smaller for humans and that unlike all language models, humans do better on the former than the latter.", "As expected, the PMI baseline does almost as well as the LMs on the associative subset but it performs at chance level for the non-associative subset.", "We presented a detailed investigation of the effect of linguistic perturbations on how language models and humans perform on the Winograd Schema Challenge.", "We found that compared to out-of-the-box models, humans are significantly more stable to the perturbations and that they answer non-associative examples with higher accuracy than associative ones, show sensitivity to WSC pair complementarity, and are more sensitive to sentence-level (as opposed to word-level) perturbations.", "In an analysis of the behaviour of language models, we observe that there is a preference for referents in the object role and that the models do not always consider the discriminatory segments of examples.", "Finally, we find that fine-tuning language models can lead to much-improved accuracy and stability.", "It remains an open question whether this task-specific approach to generalisation constitutes a true advancement in reasoning.", "Fine-tuning a model on a rather large number of examples similar to the WSC leads to increased robustness, but this stands in stark contrast to humans, who are robust to the perturbations without having been exposed to similar examples in the past.", "We would like to thank Mitja Nikolaus, Artur Kul-mizev, Ana Valeria Gonzalez, and the anonymous reviewers for their helpful comments.", "Mostafa Abdou and Anders Sgaard are supported by a Google Focused Research Award and a Facebook Research Award.", "Yonatan Belinkov was supported by the Harvard Mind, Brain, and Behavior Initiative." ]
[ "abstain", "objective", "result", "abstain", "result", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "objective", "result", "abstain", "objective", "result", "method", "objective", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "result", "result", "result", "abstain", "abstain", "other", "other", "other" ]
[ "Keyphrase generation (KG) aims to summarize the main ideas of a document into a set of keyphrases.", "A new setting is recently introduced into this problem, in which, given a document, the model needs to predict a set of keyphrases and simultaneously determine the appropriate number of keyphrases to produce.", "Previous work in this setting employs a sequential decoding process to generate keyphrases.", "However, such a decoding method ignores the intrinsic hierarchical compositionality existing in the keyphrase set of a document.", "Moreover, previous work tends to generate duplicated keyphrases, which wastes time and computing resources.", "To overcome these limitations, we propose an exclusive hierarchical decoding framework that includes a hierarchical decoding process and either a soft or a hard exclusion mechanism.", "The hierarchical decoding process is to explicitly model the hierarchical compositionality of a keyphrase set.", "Both the soft and the hard exclusion mechanisms keep track of previously-predicted keyphrases within a window size to enhance the diversity of the generated keyphrases.", "Extensive experiments on multiple KG benchmark datasets demonstrate the effectiveness of our method to generate less duplicated and more accurate keyphrases 1 .", "Keyphrases are short phrases that indicate the core information of a document.", "As shown in Figure 1, the keyphrase generation (KG) problem focuses on automatically producing a keyphrase set (a set of keyphrases) for the given document.", "Because of the condensed expression, keyphrases can benefit various downstream applications including opinion mining (Berend, 2011; Wilson et al., 2005), doc-1 Our code is available at https://github.com/ Chen-Wang-CUHK/ExHiRD-DKG .", "Input Document: A noninvasive diagnostic device was developed to assess the vascular origin and severity of penile dysfunction.", "It was designed and studied using both a mathematical model of penile hemodynamics and preliminary experiments on healthy young volunteers.", "Simulations using a mathematical model show that the device is capable of differentiating between arterial insufficiency and venous leak and indicate the severity of each.", "Keyphrases: {erectile dysfunction; arterial insufficiency; venous leak; veno-occlusive mechanism; mathematical model; hemodynamics} Figure 1: An example of an input document and its expected keyphrase output for keyphrase generation problem.", "Keyphrases of a document can be categorized into two groups: present keyphrase that appears in the document and absent keyphrase that does not appear in the document.", "Recent generative methods for KG apply the attentional encoder-decoder framework (Luong et al., 2015; Bahdanau et al., 2014) with copy mechanism (Gu et al., 2016; See et al., 2017) to predict both present and absent keyphrases.", "To generate multiple keyphrases for an input document, these methods first use beam search to generate a huge number of keyphrases (e.g., 200) and then pick the top N ranked keyphrases as the final prediction.", "Thus, in other words, these methods can only predict a fixed number of keyphrases for all documents.", "However, in a practical situation, the appropriate number of keyphrases varies according to the content of the input document.", "To simultaneously predict keyphrases and determine the suitable number of keyphrases, Yuan et al. 
(2018) adopts a sequential decoding method with greedy search to generate one sequence consisting of the predicted keyphrases and separators.", "For example, the produced sequence may be hemodynamics [sep] erectile dysfunction [sep] ..., where [sep] is the separator.", "After producing an ending token, the decoding process terminates.", "The final keyphrase predictions are obtained after splitting the sequence by separators.", "However, there are two drawbacks to this method.", "First, the sequential decoding method ignores the hierarchical compositionality existing in a keyphrase set (a keyphrase set is composed of multiple keyphrases and each keyphrase consists of multiple words).", "In this work, we examine the hypothesis that a generative model can predict more accurate keyphrases by incorporating the knowledge of the hierarchical compositionality in the decoder architecture.", "Second, the sequential decoding method tends to generate duplicated keyphrases.", "It is simple to design specific post-processing rules to remove the repeated keyphrases, but generating and then removing repeated keyphrases wastes time and computing resources.", "To address these two limitations, we propose a novel exclusive hierarchical decoding framework for KG, which includes a hierarchical decoding process and an exclusion mechanism.", "Our hierarchical decoding process is designed to explicitly model the hierarchical compositionality of a keyphrase set.", "It is composed of phrase-level decoding (PD) and word-level decoding (WD).", "A PD step determines which aspect of the document to summarize based on both the document content and the aspects summarized by previously-generated keyphrases.", "The hidden representation of the captured aspect is employed to initialize the WD process.", "Then, a new WD process is conducted under the PD step to generate a new keyphrase word by word.", "Both PD and WD repeat until meeting the stop conditions.", "In our method, both PD and WD attend the document content to gather contextual information.", "Moreover, the attention score of each WD step is rescaled by the corresponding PD attention score.", "The purpose of the attention rescaling is to indicate which aspect is focused on by the current PD step.", "We also propose two kinds of exclusion mechanisms (i.e., a soft one and a hard one) to avoid generating duplicated keyphrases.", "Either the soft one or the hard one is used in our hierarchical decoding process.", "Both of them are used in the WD process of our hierarchical decoding.", "Besides, both of them collect the previously-generated K keyphrases, where K is a predefined window size.", "The soft exclusion mechanism is incorporated in the training stage, where an exclusive loss is employed to encourage the model to generate a different first word of the current keyphrase with the first words of the collected K keyphrases.", "However, the hard exclusion mechanism is used in the inference stage, where an exclusive search is used to force WD to produce a different first word with the first words of the collected K keyphrases.", "Our motivation is from the statistical observation that in 85% of the documents on the largest KG benchmark, the keyphrases of each individual document have different first words.", "Moreover, since a keyphrase is usually composed of only two or three words, the predicted first word significantly affects the prediction of the following keyphrase words.", "Thus, our exclusion mechanisms can boost the diversity of the generated keyphrases.", 
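The post-processing of such a flat decoded sequence, and the duplicate removal that motivates the exclusion mechanisms, can be illustrated with a short sketch; the separator token name is taken from the example above, and the exact-string deduplication (without stemming) is a simplification:

```python
def split_predictions(sequence: str, sep: str = "[sep]") -> list:
    """Split a flat decoded string such as
    'hemodynamics [sep] erectile dysfunction [sep] hemodynamics'
    into keyphrases, dropping duplicates while preserving order."""
    seen, phrases = set(), []
    for phrase in (p.strip() for p in sequence.split(sep)):
        if phrase and phrase not in seen:
            seen.add(phrase)
            phrases.append(phrase)
    return phrases

print(split_predictions("hemodynamics [sep] erectile dysfunction [sep] hemodynamics"))
# ['hemodynamics', 'erectile dysfunction']
```

Generating the duplicate and then discarding it here is exactly the wasted computation the exclusion mechanisms are designed to avoid.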
"In addition, generating fewer duplications will also improve the chance to produce correct keyphrases that have not been predicted yet.", "We conduct extensive experiments on four popular real-world benchmarks.", "Empirical results demonstrate the effectiveness of our hierarchical decoding process.", "Besides, both the soft and the hard exclusion mechanisms significantly reduce the number of duplicated keyphrases.", "Furthermore, after employing the hard exclusion mechanism, our model consistently outperforms all the SOTA sequential decoding baselines on the four benchmarks.", "We summarize our main contributions as follows: (1) to our best knowledge, we are the first to design a hierarchical decoding process for the keyphrase generation problem; (2) we propose two novel exclusion mechanisms to avoid generating duplicated keyphrases as well as improve the generation accuracy; and (3) our method consistently outperforms all the SOTA sequential decoding methods on multiple benchmarks under the new setting.", "Most of the traditional extractive methods (Witten et al., 1999; Mihalcea and Tarau, 2004) focus on extracting present keyphrases from the input document and follow a two-step framework.", "They first extract plenty of keyphrase candidates by handcrafted rules (Medelyan et al., 2009).", "Then, they score and rank these candidates based on either unsupervised methods (Mihalcea and Tarau, 2004) or supervised learning methods (Nguyen and Kan, 2007; Hulth, 2003).", "Recently, neural-based sequence labeling methods (Gollapalli et al., 2017; Luan et al., 2017; Zhang et al., 2016) are also explored in keyphrase extraction problem.", "However, these extractive methods cannot predict absent keyphrase which is also an essential part of a keyphrase set.", "To produce both present and absent keyphrases, Meng et al. (2017) introduced a generative model, CopyRNN, which is based on an attentional encoder-decoder framework (Bahdanau et al., 2014) incorporating with a copy mechanism (Gu et al., 2016).", "A wide range of extensions of Copy-RNN are recently proposed (Chen et al., 2018, 2019b; Ye and Wang, 2018; Chen et al., 2019a; Zhao and Zhang, 2019).", "All of them rely on beam search to over-generate lots of keyphrases with large beam size and then select the top N (e.g., five or ten) ranked ones as the final prediction.", "That means these over-generated methods will always predict N keyphrases for any input documents.", "Nevertheless, in a real situation, the keyphrase number should be determined by the document content and may vary among different documents.", "To this end, Yuan et al. (2018) introduced a new setting that the KG model should predict multiple keyphrases and simultaneously decide the suitable keyphrase number for the given document.", "Two models with a sequential decoding process, catSeq and catSeqD, are proposed in Yuan et al. (2018).", "The catSeq is also an attentional encoder-decoder model (Bahdanau et al., 2014) with copy mechanism (See et al., 2017), but adopting new training and inference setup to fit the new setting.", "The catSeqD is an extension of catSeq with orthogonal regularization (Bousmalis et al., 2016) and target encoding.", "Lately, Chan et al. (2019) proposed a reinforcement learning based fine-tuning method, which fine-tunes the pre-trained models with adaptive rewards for generating more sufficient and accurate keyphrases.", "We follow the same setting with Yuan et al. 
(2018) and propose an exclusive hierarchical decoding method for the KG problem.", "To the best of our knowledge, this is the first time the hierarchical decoding is explored in the KG problem.", "Different from the hierarchical decoding in other areas (Fan et al., 2018; Yarats and Lewis, 2018; Tan et al., 2017; Chen and Zhuge, 2018), we rescale the attention score of each WD step with the corresponding PD attention score to provide aspect guidance when generating keyphrases.", "Moreover, either a soft or a hard exclusion mechanism is innovatively incorporated in the decoding process to improve generation diversity.", "We denote vectors and matrices with bold lowercase and uppercase letters respectively.", "Sets are denoted with calligraphy letters.", "We use W to represent a parameter matrix.", "We define the keyphrase generation problem as follows.", "The input is a document x, and the output is a keyphrase set Y = {y^i}_{i=1,...,|Y|}, where |Y| is the keyphrase number of x.", "Both x and each y^i are sequences of words, i.e., x = [x_1, ..., x_{l_x}] and y^i = [y^i_1, ..., y^i_{l_{y^i}}], where l_x and l_{y^i} are the word numbers of x and y^i correspondingly.", "We first encode each word of the document into a hidden state and then employ our exclusive hierarchical decoding shown in Figure 2 to produce keyphrases for the given document.", "Our hierarchical decoding process consists of phrase-level decoding (PD) and word-level decoding (WD).", "Each PD step decides an appropriate aspect to summarize based on both the context of the document and the aspects summarized by previous PD steps.", "Then, the hidden representation of the captured aspect is employed to initialize the WD process to generate a new keyphrase word by word.", "The WD process terminates when producing a [eowd] token.", "If the WD process outputs a [eopd] token, the whole hierarchical decoding process stops.", "Both PD and WD attend the document content.", "The PD attention score is used to re-weight the WD attention score to provide aspect guidance.", "To improve the diversity of the predicted keyphrases, we incorporate either an exclusive loss when training (i.e., the soft exclusion mechanism) or an exclusive search mechanism at inference (i.e., the hard exclusion mechanism).", "To obtain the context-aware representation of each document word, we employ a two-layered bidirectional GRU (Cho et al., 2014) as the document encoder: m_k = BiGRU(e^x_k, m_{k-1}, m_{k+1}), where k = 1, 2, ..., l_x and e^x_k is the embedding vector of x_k with d_e dimensions.", "(Figure 2: An illustration of the hierarchical decoding process, showing phrase-level decoding (PD) steps, word-level decoding (WD) steps with PD-Attention and WD-Attention, the [neopd], [eowd], and [eopd] tokens, and the EL/ES exclusion mechanisms.)", "m_k = [→m_k; ←m_k] ∈ R^d is the encoded context-aware representation of x_k.",
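The encoder above maps naturally onto a standard PyTorch module. A minimal sketch, assuming the dimension values d_e = 100 and d = 300 given in the implementation details later in this section; class and variable names are ours, not the authors' code:

```python
import torch
import torch.nn as nn

class DocumentEncoder(nn.Module):
    """Two-layer bidirectional GRU; m_k = [forward m_k; backward m_k] in R^d."""
    def __init__(self, vocab_size: int, d_e: int = 100, d: int = 300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_e)
        self.bigru = nn.GRU(d_e, d // 2, num_layers=2,
                            bidirectional=True, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, l_x) token ids
        m, _ = self.bigru(self.embed(x))                  # m: (batch, l_x, d)
        return m

encoder = DocumentEncoder(vocab_size=50000)
m = encoder(torch.randint(0, 50000, (2, 17)))
print(m.shape)  # torch.Size([2, 17, 300])
```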
, i,l x ] is computed from the following attention mechanism employing h i as the query vector: i,k = exp( s i,k ) / l x (cid:88) n =1 exp( s i,n ) , (2) s i,n = ( h i ) TW 1 m n .", "Our hierarchical decoding process is controlled by the hierarchical decoder, which utilizes a phrase-level decoder and a word-level decoder to handle the PD process and the WD process respectively.", "We present our hierarchical decoder first and then introduce the exclusion mechanisms.", "In our decoders, all the hidden states and attentional vectors are d -dimensional vectors.", "We adopt a unidirectional GRU layer as our phrase-level decoder.", "After the WD process under last PD step is finished, the phrase-level decoder will update its hidden state as follows: h i = GRU 1 ( h i 1 ,end , h i 1 ) , (1) where h i 1 ,end is the attentional vector for the ending WD step under the ( i -1)-th PD step (e.g., h 2 , 2 in Figure", "2(b)).", "h i is regarded as the hidden representation of the captured aspect at the i -th PD step.", "h 0 is initialized as the document representation [ m l x ; m 1 ] .", "h 0 ,end is initialized with zeros.", "We choose another unidirectional GRU layer to conduct word-level decoding.", "Under the i -th PD step, the word-level decoder updates its hidden state first: h i,j = GRU 2 ([ h i,j 1 ; e y ij 1 ] , h i,j 1 ) , (4) where h i,j 1 is the WD attentional vector of the ( j -1)-th WD step and e y ij 1 is the d e -dimensional embedding vector of the y ij 1 token.", "We define h i, 0 = GRU 2 ([ 0 ; e s ] , h i ) , where h i is the current hidden state of the phrase-level decoder, 0 is a zero vector, and e s is the embedding of the start token.", "Then, the WD attentional vector is computed: h i,j = tanh ( W 2 [ h i,j ; a i,j ]) , (5) a i,j = l x (cid:88) k =1 ( i,j ) ,k m k , (6) ( i,j ) ,k = ( i,j ) ,k i,k (cid:80) l x n =1 ( i,j ) ,n i,n , (7) where ( i,j ) ,k is the original WD attention score which is computed similar to i,k except that a new parameter matrix is used and h i,j is employed as the query vector.", "The purpose of the rescaling operation in Eq.", "(7) is to indicate the focused aspect of the current PD step for each WD step.", "Finally, the h i,j is utilized to predict the probability distribution of current keyword with the copy mechanism (See et al., 2017): P ij = (1 g ij ) P ij, V + g ij P ij, X , (8) where g ij = sigmoid ( w Tg h i,j + b g ) R is the copy gate.", "P ij, V = softmax ( W 3 h i,j + b V ) R |V| is the probability distribution over a predefined vocabulary V .", "P ij, X = (cid:80) k : x k = y ij ( i,j ) ,k R |X| is the copying probability distribution over X which is a set of all the words that appeared in the document.", "P ij R |VX| is the final predicted probability distribution.", "Finally, greedy search is applied to produce the current token.", "The WD process terminates when producing a [eowd] token.", "The whole hierarchical decoding process ends if the word-level decoder produces a [eopd] token at the 0 -th step, i.e., y i 0 is predicted as [eopd].", "A standard negative log-likelihood loss is employed as the generation loss to train our hierarchical decoding model:", "where Y i 1 = y 1 , . . . , y i 1 are the target keyphrases of previously-finished PD steps and y ij 1 = y i 0 , . . . , y ij 1 are target keyphrase words of previous WD steps under the i -th PD step.", "When training, each original target keyphrase is extended with a [neopd] token and a [eowd] token, i.e., y i = [ [neopd] , y i 1 , . . . 
"Besides, a [eopd] token is also incorporated into the targets to indicate the ending of the whole decoding process.", "Teacher forcing is employed when training.", "To alleviate the duplication generation problem, we propose a soft and a hard exclusion mechanism.", "Either of them can be incorporated into our hierarchical decoding process to form one kind of exclusive hierarchical decoding method.", "Soft Exclusion Mechanism: An exclusive loss (EL) is introduced in the training stage, as shown in Algorithm 1.", "Algorithm 1 (Training with Exclusive Loss). Require: the window size K_EL; the target keyphrases [y^1, ..., y^i, ..., y^{|Y|}]; the predicted probability distribution P^i_j for the j-th WD step under the i-th PD step, where i = 1, ..., |Y| and j = 0, 1, ..., l_{y^i}.", "1: Firstly, the exclusive loss of the j-th WD step under the i-th PD step is computed as follows.", "2: K_EL ← min{K_EL, i−1}.", "3: if K_EL > 0 and j == 1 then 4: L^{i,j}_EL = −Σ_{idx=i−K_EL, y^{idx}_j ≠ y^i_j}^{i−1} log(1 − P^i_j(y^{idx}_j)) 5: else 6: L^{i,j}_EL = 0.0 7: end if.", "8: Secondly, the exclusive loss for the whole decoding process is calculated as L_EL = Σ_{i,j} L^{i,j}_EL.", "9: Finally, the joint loss L = L_g + L_EL is employed to train the model.", "j == 1 in line 3 means the current WD step is predicting the first word of a keyphrase.", "In short, the exclusive loss punishes the model for the tendency to generate the same first word of the current keyphrase as the first words of previously-generated keyphrases within the window size K_EL.", "Hard Exclusion Mechanism: An exclusive search (ES) is introduced in the inference stage, as shown in Algorithm 2.", "Algorithm 2 (Inference with Exclusive Search). Require: the window size K_ES.", "The exclusive search mechanism forces the word-level decoding to predict a different first word from the first words of previously-predicted keyphrases within the window size K_ES.", "Since a keyphrase usually has only two or three words, the first word significantly affects the prediction of the following words.", "Therefore, both the soft and the hard exclusion mechanisms can improve the diversity of generated keyphrases.", "Our model implementations are based on the OpenNMT system (Klein et al., 2017) using PyTorch (Paszke et al., 2017).", "Experiments of all models are repeated with three different random seeds and the averaged results are reported.", "We employ four scientific article benchmark datasets to evaluate our models, including KP20k (Meng et al., 2017), Inspec (Hulth, 2003), Krapivin (Krapivin et al., 2009), and SemEval (Kim et al., 2010).", "Following previous work (Yuan et al., 2018; Chen et al., 2019a), we use the training set of KP20k to train all the models.", "After removing the duplicated data, we maintain 509,818 data samples in the training set, 20,000 in the validation set, and 20,000 in the testing set.", "After training, we test all the models on the testing datasets of these four benchmarks.", "The dataset statistics are shown in Table 1.",
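The hard exclusion mechanism of Algorithm 2 above can be realized as a simple logits mask applied at the first word-level step. A minimal sketch under that reading; function and variable names are ours, and integration with the full greedy decoder is omitted:

```python
import torch

def exclusive_search_mask(logits: torch.Tensor, wd_step: int,
                          first_words: list, k_es: int) -> torch.Tensor:
    """Algorithm 2 (exclusive search): at the first word-level step (j == 1),
    forbid the first words of the previous k_es predicted keyphrases.
    logits: (vocab,) scores for the current WD step, modified in place.
    first_words: token ids of the first word of each finished keyphrase."""
    if wd_step == 1 and k_es > 0:
        for tid in first_words[-k_es:]:
            logits[tid] = float("-inf")
    return logits

vocab_size = 10
logits = torch.randn(vocab_size)
masked = exclusive_search_mask(logits, wd_step=1, first_words=[3, 7], k_es=1)
assert masked[7].item() == float("-inf")       # last keyphrase's first word banned
assert masked[3].item() != float("-inf")       # outside the window of size 1
```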
"Table 1 (dataset statistics; Total / Validation / Testing): Inspec 2,000 / 1,500 / 500; Krapivin 2,303 / 1,903 / 400; SemEval 244 / 144 / 100; KP20k 549,818 / 20,000 / 20,000.", "We focus on the comparisons with state-of-the-art decoding methods and choose the following generation models under the new setting as our baselines:", "Transformer (Vaswani et al., 2017): A transformer-based sequence-to-sequence model incorporating the copy mechanism.", "catSeq (Yuan et al., 2018): An RNN-based attentional encoder-decoder model with copy mechanism.", "Both the encoding and decoding are sequential.", "catSeqD (Yuan et al., 2018): An extension of catSeq which incorporates orthogonal regularization (Bousmalis et al., 2016) and target encoding into the sequential decoding process to improve the generation diversity and accuracy.", "catSeqCorr (Chan et al., 2019): Another extension of catSeq, which incorporates the sequential decoding with coverage (See et al., 2017) and review mechanisms to boost the generation diversity and accuracy.", "This method is adjusted from Chen et al. (2018) to fit the new setting.", "In this paper, we propose two novel models that are denoted as follows:", "ExHiRD-s: Our Exclusive HieRarchical Decoding model with the soft exclusion mechanism.", "In experiments, the window size K_EL is selected as 4 after tuning on the KP20k validation dataset.", "ExHiRD-h: Our Exclusive HieRarchical Decoding model with the hard exclusion mechanism.", "In experiments, the values of the window size K_ES are selected as 4, 1, 1, and 1 for Inspec, Krapivin, SemEval, and KP20k respectively, after tuning on the corresponding validation datasets.", "We engage F1@M, which is recently proposed in Yuan et al. (2018), as one of our evaluation metrics.", "F1@M compares all the predicted keyphrases by the model with the ground-truth keyphrases, which means it does not use a fixed cutoff for the predictions.", "Therefore, it considers the number of predictions.", "We also use F1@5 as another evaluation metric.", "When the number of predictions is less than five, we randomly append incorrect keyphrases until it obtains five predictions, instead of directly using the original predictions.", "If we do not adopt such an appending operation, F1@5 will become the same as F1@M when the prediction number is less than five.", "The macro-averaged F1@M and F1@5 scores are reported.", "When determining whether two keyphrases are identical, all the keyphrases are stemmed first.", "Besides, all the duplicated keyphrases are removed after stemming.", "Following previous work (Meng et al., 2017; Yuan et al., 2018; Chen et al., 2019a; Chan et al., 2019), we lowercase the characters, tokenize the sequences, and replace digits with a <digit> token.", "Similar to Yuan et al. (2018), when training, the present keyphrase targets are sorted according to the orders of their first occurrences in the document.",
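The F1@M and F1@5 conventions described in the evaluation setup above can be sketched in a few lines; stemming and duplicate removal are assumed to have been applied upstream, and the placeholder strings are ours:

```python
def f1(preds, golds):
    """F1 between a predicted keyphrase list and the gold keyphrase set."""
    golds = set(golds)
    tp = sum(p in golds for p in preds)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(preds), tp / len(golds)
    return 2 * precision * recall / (precision + recall)

def f1_at_5(preds, golds):
    """If fewer than five predictions, pad with placeholders that can never
    match a gold keyphrase; otherwise cut off at five."""
    padded = (preds + [f"<incorrect-{i}>" for i in range(5)])[:5]
    return f1(padded, golds)

preds = ["mathematical model", "hemodynamics"]
golds = ["hemodynamics", "venous leak"]
print(f1(preds, golds), f1_at_5(preds, golds))  # F1@M = 0.5, F1@5 ~ 0.286
```

The padding makes precision honest: a model that predicts only two keyphrases is still scored against a denominator of five under F1@5, while F1@M uses the model's own prediction count.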
(2018), when training, the present keyphrase targets are sorted according to the orders of their first occurrences in the document.", "Then, the absent keyphrase targets are put at the end of the sorted present keyphrase targets.", "We use < p start > and < a start > as the Model Inspec Krapivin SemEval KP20k F 1 @ M F 1 @5 F 1 @ M F 1 @5 F 1 @ M F 1 @5 F 1 @ M F 1 @5 Transformer 0.254 5 0.210 7 0.328 14 0.252 4 0.310 5 0.257 4 0.360 3 0.282 10 catSeq 0.276 5 0.233 4 0.344 14 0.269 5 0.313 8 0.262 11 0.368 1 0.295 2 catSeqD 0.280 3 0.236 1 0.344 9 0.268 8 0.311 6 0.263 6 0.368 2 0.296 2 catSeqCorr 0.253 3 0.208 6 0.343 9 0.258 9 0.318 18 0.260 14 0.367 3 0.281 4 ExHiRD-s 0.278 5 0.235 3 0.338 3 0.278 0 0.322 5 0.276 5 0.372 1 0.307 0 ExHiRD-h 0.291 3 0.253 4 0.347 4 0.286 4 0.335 17 0.284 15 0.374 0 0.311 1 Table 2: Present keyphrase prediction results of all models on all datasets.", "[neopd] token of present and absent keyphrases respectively.", "; is employed as the [eowd] token for both present and absent keyphrases.", "< /s > is used as the [eopd] token.", "The vocabulary with 50,000 tokens is shared between the encoder and decoder.", "We set d e as 100 and d as 300.", "The hidden states of the encoder layers are initialized as zeros.", "In the training stage, we randomly initialize all the trainable parameters including the embedding using a uniform distribution in [ 0 . 1 , 0 . 1] .", "We set batch size as 10, max gradient norm as 1.0, and initial learning rate as 0.001.", "We do not use dropout.", "Adam (Kingma and Ba, 2014) is used as our optimizer.", "The learning rate decays to half if the perplexity on KP20k validation set stops decreasing.", "Early stopping is applied when training.", "When inference, we set the minimum phrase-level decoding step as 1 and the maximum as 20.", "We show the present and absent keyphrase prediction results in Table 2 and Table 3 correspondingly.", "As indicated in these two tables, both the ExHiRD-s model and the ExHiRD-h outperform the state-of-the-art baselines on most of the metrics, which demonstrates the effectiveness of our exclusive hierarchical decoding methods.", "Besides, the ExHiRD-h model consistently achieves the best results on both present and absent keyphrase preModel Inspec Krapivin SemEval KP20k Transformer 0.286 25 0.297 46 0.220 38 0.223 41 catSeq 0.302 11 0.277 8 0.200 2 0.217 4 catSeqD 0.304 14 0.283 9 0.199 1 0.215 8 catSeqCorr 0.352 38 0.354 4 0.249 23 0.282 14 ExHiRD-s 0.210 14 0.182 12 0.119 8 0.137 6 ExHiRD-h 0.030 6 0.140 6 0.091 10 0.110 1 Table 4: The average DupRatios of predicted keyphrases on all datasets.", "In this section, we study the model capability of avoiding producing duplicated keyphrases.", "Duplication ratio is denoted as DupRatio and defined as follows: DupRatio = # duplications # predictions , (10) where # means the number of.", "For instance, the DupRatio is 0.5 (3/6) for [A, A, B, B, A, C].", "We report the average DupRatio per document in Table", "4. 
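As a concrete reference for Eq. 10, the duplication ratio can be computed with the short Python sketch below; the function name and the assumption that keyphrases arrive as already-stemmed strings are ours.

```python
def dup_ratio(predictions):
    """DupRatio = #duplications / #predictions (Eq. 10).
    Assumes `predictions` is a list of already-stemmed keyphrase strings."""
    seen = set()
    duplications = 0
    for keyphrase in predictions:
        if keyphrase in seen:
            duplications += 1  # every repeat after the first occurrence counts
        else:
            seen.add(keyphrase)
    return duplications / len(predictions) if predictions else 0.0

# The example from the text: [A, A, B, B, A, C] has three duplicates -> 3/6 = 0.5
assert dup_ratio(["A", "A", "B", "B", "A", "C"]) == 0.5
```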
From this table, we observe that our ExHiRD-s and ExHiRD-h consistently and significantly reduce the duplication ratios on all datasets.", "Moreover, we also find that our ExHiRD-h model achieves the lowest duplication ratio on all datasets.", "We also tried to incorporate the soft and the hard exclusion mechanisms simultaneously into our hierarchical decoding model, but it still underperforms ExHiRD-h (footnote 2).", "We also study the average number of unique keyphrase predictions per document.", "Duplicated keyphrases are removed.", "The results are shown in Table 5.", "One main finding is that all the models generate an insufficient number of unique keyphrases on most datasets, especially when predicting absent keyphrases.", "We also observe that our methods improve the number of unique keyphrases by a large margin, which is extremely beneficial for alleviating the problem of insufficient generation.", "Correspondingly, this can also lead to generating more keyphrases than the ground truth in the cases that do not suffer from this problem, such as the present keyphrase predictions on the Krapivin and KP20k datasets.", "We leave solving the over-generation of present keyphrases on Krapivin and KP20k as future work.", "Since our ExHiRD-h model achieves the best performance on almost all of the metrics, we select it as our final model and analyze it in more detail in the following sections.", "In order to understand the effect of each component of ExHiRD-h, we conduct an ablation study on it and report the results on the SemEval dataset in Table 6.", "We observe that both our hierarchical decoding process and the exclusive search mechanism are helpful for generating more accurate present and absent keyphrases.", "Besides, we also find that the significant performance margins on the duplication ratio and the keyphrase numbers mainly come from the exclusive search mechanism.", "For a more comprehensive understanding of the exclusive search mechanism in our ExHiRD-h model, we also study the effect of the window size $K_{ES}$.", "We conduct the experiments on the KP20k dataset and list the results in Table 7.", "$K_{ES}$ | Present (F1@M, F1@5, #PK) | Absent (F1@M, F1@5, #AK) | DupRatio -- Oracle: -, -, 3.32 | -, -, 1.93 | 0; 0: 0.376, 0.303, 3.76 | 0.028, 0.013, 0.61 | 0.195; 1: 0.374, 0.311, 3.97 | 0.033, 0.016, 0.86 | 0.110; 2: 0.371, 0.314, 4.11 | 0.034, 0.017, 1.00 | 0.069; 3: 0.368, 0.316, 4.21 | 0.034, 0.017, 1.08 | 0.038; 4: 0.366, 0.316, 4.27 | 0.033, 0.017, 1.16 | 0.017; 5: 0.366, 0.316, 4.30 | 0.033, 0.017, 1.19 | 0.010; all: 0.365, 0.316, 4.32 | 0.032, 0.017, 1.25 | 0.002. Table 7: Results of ExHiRD-h on KP20k with different window sizes $K_{ES}$."
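The hard mechanism only touches the first word-level step of each phrase; a minimal sketch of the masking is shown below. Greedy decoding and the function name are our assumptions, not the authors' implementation.

```python
import torch

def exclusive_first_word_step(logits, prev_first_words, k_es):
    """Hard exclusion (Algorithm 2): at the first word-level step of a new
    keyphrase, forbid the first words of the previous K_ES keyphrases.

    logits:           (vocab_size,) scores for the first word of the phrase.
    prev_first_words: first-word ids of the already generated keyphrases.
    k_es:             exclusion window size K_ES (use len(prev_first_words)
                      for the "all" setting).
    """
    masked = logits.clone()
    banned = prev_first_words[-k_es:] if k_es > 0 else []
    if banned:
        masked[torch.tensor(banned)] = float("-inf")  # cannot be re-selected
    return int(torch.argmax(masked))  # id of the chosen first word
```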
"We note that a larger window size $K_{ES}$ leads to a lower DupRatio, as anticipated.", "This is because, when $K_{ES}$ is larger, the exclusive search can observe more previously generated keyphrases and thus avoid generating duplicates.", "When $K_{ES}$ is all, the DupRatio is still not exactly zero because keyphrases are stemmed before we determine whether they are duplicated.", "Besides, we also find that a larger $K_{ES}$ leads to better F1@5 scores.", "The reason is that, for F1@5, we append incorrect keyphrases to obtain five predictions whenever the number of predictions is less than five.", "A larger $K_{ES}$ leads the model to predict more unique keyphrases, so fewer certainly-incorrect keyphrases are appended, which improves the chance of outputting more accurate keyphrases.", "However, generating more unique keyphrases may also lead to more incorrect predictions, which degrades the F1@M scores, since F1@M considers all the unique predictions without a fixed cutoff.", "Our exclusive search is a general method that can be easily applied to other models.", "In this section, we study the effects of our exclusive search on the baseline models.", "We show the experimental results on the KP20k dataset in Table 8.", "From this table, we note that the effect of exclusive search on the baselines is similar to its effect on our hierarchical decoding.", "We also see that our ExHiRD-h still achieves the best performance on most of the metrics even when the baselines are also equipped with exclusive search, which again exhibits the superiority of our hierarchical decoding.", "We display a prediction example in Figure 3 (example document: SOC HW/SW co-verification based debugging technique).", "Our ExHiRD-h model generates more accurate keyphrases for the document compared to the four baselines.", "Besides, we also observe that far fewer repeated keyphrases are generated by our ExHiRD-h.", "For instance, all the baselines produce the keyphrase debugging at least three times.", "However, our ExHiRD-h generates it only once, which demonstrates that our proposed method is more powerful in avoiding duplicated keyphrases.", "In this paper, we propose an exclusive hierarchical decoding framework for keyphrase generation.", "Unlike previous sequential decoding methods, our hierarchical decoding consists of a phrase-level decoding process to capture the current aspect to summarize and a word-level decoding process to generate keyphrases based on the captured aspect.", "Besides, we also propose a soft and a hard exclusion mechanism to enhance the diversity of the generated keyphrases.", "Extensive experimental results demonstrate the effectiveness of our methods.", "One interesting future direction is to explore whether beam search is helpful to our model.", "The work described in this paper was partially supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (CUHK 2300174 (Collaborative Research Fund, No. C5026-18GF)).", "We would like to thank our colleagues for their comments." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "method", "objective", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "objective", "other", "other" ]
[ "Linking pronominal expressions to the correct references requires, in many cases, better analysis of the contextual information and external knowledge.", "In this paper, we propose a two-layer model for pronoun coreference resolution that leverages both context and external knowledge, where a knowledge attention mechanism is designed to ensure the model leveraging the appropriate source of external knowledge based on different context.", "Experimental results demonstrate the validity and effectiveness of our model, where it outperforms state-of-the-art models by a large margin.", "The question of how human beings resolve pronouns has long been of interest to both linguistics and natural language processing (NLP) communities, for the reason that pronoun itself has weak semantic meaning (Ehrlich, 1981) and brings challenges in natural language understanding.", "To explore solutions for that question, pronoun coreference resolution (Hobbs, 1978) was proposed.", "As an important yet vital sub-task of the general coreference resolution task, pronoun coreference resolution is to find the correct reference for a given pronominal anaphor in the context and has been shown to be crucial for a series of downstream tasks (Mitkov, 2014), including machine translation (Mitkov et al., 1995), summarization (Steinberger et al., 2007), information extraction (Edens et al., 2003), and dialog sys-tems (Strube and Muller, 2003).", "Conventionally, people design rules (Hobbs, 1978; Nasukawa, 1994; Mitkov, 1998) or use features (Ng, 2005; Charniak and Elsner, 2009; Li et al., 2011) to resolve the pronoun coreferences.", "These methods heavily rely on the coverage and quality of the manually defined rules and features.", "Until recently, end-to-end solution (Lee et al., 2017) was proposed towards solving the general coreference problem, where deep learning models were used to better capture contextual information.", "However, training such models on annotated corpora can be biased and normally does not consider external knowledge.", "Despite the great efforts made in this area in the past few decades (Hobbs, 1978; Mitkov, 1998; Ng, 2005; Rahman and Ng, 2009), pronoun coreference resolution remains challenging.", "The reason behind is that the correct resolution of pronouns can be influenced by many factors (Ehrlich, 1981); many resolution decisions require reasoning upon different contextual and external knowledge (Rah-man and Ng, 2011), which is also proved in other NLP tasks (Song et al., 2017, 2018; Zhang et al., 2018).", "Figure 1 demonstrates such requirement with three examples, where Example A depends on the plurality knowledge that them' refers to plural noun phrases; Example B illustrates the gender requirement of pronouns where she' can only refer to a female person (girl); Example C requires a more general type of knowledge 1 that cats can climb trees but a dog normally does not'.", "All of these knowledge are difficult to be learned from training data.", "Considering the importance of both contextual information and external human knowledge, how to jointly leverage them becomes an important question for pronoun coreference resolution.", "In this paper, we propose a two-layer model to address the question while solving two challenges of incorporating external knowledge into deep models for pronoun coreference resolution, where the challenges include: first, different cases have their knowledge preference, i.e., some knowledge is exclusively important for certain cases, which requires the model to be 
flexible in selecting appropriate knowledge per case; second, the availability of knowledge resources is limited and such resources normally contain noise, which requires the model to be robust in learning from them.", "Consequently, in our model, the first layer predicts the relations between candidate noun phrases and the target pronoun based on the contextual information learned by neural networks.", "The second layer compares the candidates pair-wisely, in which we propose a knowledge attention module to focus on appropriate knowledge based on the given context.", "Moreover, a softmax pruning is placed in between the two layers to select high confident candidates.", "The architecture ensures the model being able to leverage both context and external knowledge.", "Especially, compared with conventional approaches that simply treat external knowledge as rules or features, our model is not only more flexible and effective but also interpretable as it reflects which knowledge source has the higher weight in order to make the decision.", "Experiments are conducted on a widely used evaluation dataset, where the results prove that the proposed model outperforms all baseline models by a great margin.", "2 Above all, to summarize, this paper makes the following contributions:", "1. We propose a two-layer neural model to combine contextual information and external 1 This is normally as selectional preference (SP) (Hobbs, 1978), which is defined as given a predicate (verb), a human has the preference for its argument (subject in this example).", "knowledge for the pronoun coreference resolution task.", "2. We propose a knowledge attention mechanism that allows the model to select salient knowledge for different context, which predicts more precisely and can be interpretable through the learned attention scores.", "3. With our proposed model, the performance of pronoun coreference resolution is boosted by a great margin over the state-of-the-art models.", "Following the conventional setting (Hobbs, 1978), the task of pronoun coreference resolution is defined as: for a pronoun p and a candidate noun phrase set N , the goal is to identify the correct non-pronominal references set 3 C .", "the objective is to maximize the following objective function: J = (cid:80) c C e F ( c,p ) (cid:80) n N e F ( n,p ) , (1) where c is the correct reference and n the candidate noun phrase.", "F ( ) refers to the overall coreference scoring function for each n regarding p .", "Following (Mitkov, 1998), all non-pronominal noun phrases in the recent three sentences of the pronoun p are selected to form N .", "knowledge in this task, thus for each n and p , F ( . 
) is decomposed into two components: F ( n, p ) = F c ( n, p ) + F k ( n, p ) , (2) where F c ( n, p ) is the scoring function that predicts the relation between n and p based on the contextual information; F k ( n, p ) is the scoring function that predicts the relation between n and p based on the external knowledge.", "There could be multiple ways to compute F c and F k , where a solution proposed in this paper is described as follows.", "The architecture of our model is shown in Figure 2, where we use two layers to incorporate contextual information and external knowledge.", "Specifically, the first layer takes the representations of different n and the p as input and predict the relationship between each pair of n and p , so as to compute F c .", "The second layer leverages the external knowledge to compute F k , which consists of pair-wise knowledge score f k among all candidate n .", "To enhance the efficiency of the model, a softmax pruning module is applied to select high confident candidates into the second layer.", "The details of the aforementioned components are described in the following subsections.", "Before F c is computed, the contextual information is encoded through a span 4 representation (SR) module in the first layer of the model.", "Following Lee et al. (2017), we adopt the standard bidirectional LSTM (biLSTM) (Hochreiter and Schmid-huber, 1997) and the attention mechanism (Bah-danau et al., 2015) to generate the span representation, as shown in Figure", "3. Given that the initial word representations in a span n i are x 1 , ..., x T , we denote their representations x 1 , ..., x T after encoded by the biLSTM.", "Then we obtain the inner-span attention by a t = e t (cid:80) Tk =1 e k , (3) where t is computed via a standard feed-forward neural network 5 t = NN ( x t ) .", "Thus, we have 4 Both noun phrases and the pronoun are treated as spans.", "5 We use NN to present feed-forward neural networks throughout this paper.", "Afterwards, we concatenate the starting ( x start ) and ending ( x end ) embedding of each span, as well as its weighted embedding ( x i ) and the length feature ( ( i ) ) to form its final representation e :", "Once the span representation of n N and p are obtained, we compute F c for each n with a standard feed-forward neural network:", "In the second layer of our model, external knowledge is leveraged to evaluate all candidate n so as to give them reasonable F k scores.", "In doing so, each candidate is represented as a group of features from different knowledge sources, e.g., the cat' can be represented as a singular noun, unknown gender creature, and a regular subject of the predicate verb climb'.", "For each candidate, we conduct a series of pair-wise comparisons between it and all other ones to result in its F k score.", "An attention mechanism is proposed to perform the comparison and selectively use the knowledge features.", "Consider there exists noise in external knowledge, especially when it is automatically Figure 4: The structure of the knowledge attention module.", "generated, such attention mechanism ensures that, for each candidate, reliable and useful knowledge is utilized rather than ineffective ones.", "The details of the knowledge attention module and the overall scoring are described as follows.", "Knowledge Attention Figure 4 demonstrates the structure of the knowledge attention module, where there are two components: (1) weighting: assigning weights to different knowledge features regarding their importance in the 
comparison; (2) scoring: valuing a candidate against another one based on their features from different knowledge sources.", "Assuming that there are m knowledge sources input to our model, each candidate can be represented by m different features, which are encoded as embeddings.", "Therefore, two candidates n and n' regarding p have their knowledge feature embeddings $k^1_{n,p}, k^2_{n,p}, \dots, k^m_{n,p}$ and $k^1_{n',p}, k^2_{n',p}, \dots, k^m_{n',p}$, respectively.", "The weighting component receives all features k for n and n', and the span representations $e_n$ and $e_{n'}$ as input, where $e_n$ and $e_{n'}$ help select appropriate knowledge based on the context.", "As a result, for a candidate pair (n, n') and a knowledge source i, its knowledge attention score is computed via $\beta_i(n, n', p) = NN_{ka}([o^i_{n,p}, o^i_{n',p}, o^i_{n,p} \odot o^i_{n',p}])$, (7) where $o^i_{n,p} = [e_n, k^i_{n,p}]$ and $o^i_{n',p} = [e_{n'}, k^i_{n',p}]$ are the concatenations of the span representation and the external knowledge embedding for candidates n and n', respectively.", "The weight for features from different knowledge sources is thus computed via $w_i = \frac{e^{\beta_i}}{\sum_{j=1}^{m} e^{\beta_j}}$. (8)", "Similar to the weighting component, for each feature i, we compute its score $f^i_k(n, n', p)$ for n against n' in the scoring component, through an analogous feed-forward network applied to the knowledge embeddings only (Eq. 9),", "where it is worth noting that we exclude e in this component for the reason that, in practice, the dimension of e is normally much higher than that of k.", "As a result, e could dominate the computation if e and k were concatenated (footnote 6).", "Once the weights and scores are obtained, we have a weighted knowledge score for n against n': $f_k(n, n', p) = \sum_{i=1}^{m} w_i f^i_k(n, n', p)$. (10)", "Overall Knowledge Score .", "After all pairs of n and n' are processed by the attention module, the overall knowledge score for n is computed as the average of $f_k(n, n', p)$ over all n': $F_k(n, p) = \frac{\sum_{n' \in \mathcal{N}_o} f_k(n, n', p)}{|\mathcal{N}_o|}$, (11) where $\mathcal{N}_o = \mathcal{N} \setminus \{n\}$ for each n.", "Normally, there could be many noun phrases that serve as candidates for the target pronoun.", "One potential obstacle in the pair-wise comparison of candidate noun phrases in our model is the squared complexity $O(|\mathcal{N}|^2)$ with respect to the size of $\mathcal{N}$.", "To filter out low-confidence candidates and make the model more efficient, we use a softmax pruning module between the two layers of our model to select candidates for the next step.", "The module takes $F_c$ as input for each n and applies a softmax computation: $\hat{F}_c(n, p) = \frac{e^{F_c(n,p)}}{\sum_{n_i \in \mathcal{N}} e^{F_c(n_i,p)}}$,", "where candidates with higher $\hat{F}_c$ are kept, based on a predefined threshold t as the pruning standard.", "Therefore, if candidates have similar $\hat{F}_c$ [Footnote 6: We do not have this concern for the weighting component because the softmax (c.f. Eq. 
8) actually amplifies the difference of even if they are not much differentiated.", "scores, the module allow more of them to proceed to the second layer.", "Compared with other conventional pruning methods (Lee et al., 2017, 2018) that generally keep a fixed number of candidates, our pruning strategy is more efficient and flexible.", "The CoNLL-2012 shared task (Pradhan et al., 2012) corpus is used as the evaluation dataset, which is selected from the Ontonotes 5.0 7 .", "Following conventional approaches (Ng, 2005; Li et al., 2011), for each pronoun in the document, we consider candidate n from the previous two sentences and the current sentence.", "For pronouns, we consider two types of them following Ng (2005), i.e., third personal pronoun ( she , her , he , him , them , they , it ) and possessive pronoun ( his , hers , its , their , theirs ).", "Table 1 reports the number of the two type pronouns and the overall statistics for the experimental dataset.", "According to our selection range of candidate n , on average, each pronoun has 4.6 candidates and 1.3 correct references.", "In this study, we use two types of knowledge in our experiments.", "The first type is linguistic features, i.e., plurality and animacy & gender.", "We employ the Stanford parser 8 , which generates plurality, animacy, and gender markups for all the noun phrases, to annotate our data.", "Specifically, the plurality feature denotes each n and p to be singular or plural.", "For each candidate n , if its plurality status is the same as the target pronoun, we label it 1, otherwise 0.", "The animacy & gender (AG) feature denotes whether a n or p is a living object, and being male, female, or neutral if it is alive.", "For each candidate n , if its AG feature matches the target pronoun's, we label it 1, otherwise 0.", "The second type is the selectional preference (SP) knowledge.", "For this knowledge, we create 7 https://catalog.ldc.upenn.edu/LDC2013T19 8 https://stanfordnlp.github.io/CoreNLP/ a knowledge base by counting how many times a predicate-argument tuple appears in a corpus and use the resulted number to represent the preference strength.", "Specifically, we use the English Wikipedia 9 as the base corpus for such counting.", "Then we parse the entire corpus through the Stanford parser and record all dependency edges in the format of (predicate, argument, relation, number) , where predicate is the governor and argument the dependent in the original parsed dependency edge 10 .", "Later for sentences in the training and test data, we firstly parse each sentence and find out the dependency edge linking p and its corresponding predicate.", "Then for each candidate 11 n in a sentence, we check the previously created SP knowledge base and find out how many times it appears as the argument of different predicates with the same dependency relation (i.e., nsubj and dobj ).", "The resulted frequency is grouped into the following buckets [1, 2, 3, 4, 5-7, 8-15, 16-31, 32-63, 64+] and we use the bucket id as the final SP knowledge.", "Thus in the previous example: The dog is chasing the cat but it climbs the tree .", "Its parsing result indicates that it ' is the subject of the verb climb'.", "Then for the dog ', the cat ', and the tree ', we check their associations with climb' in the knowledge base and group them in the buckets to form the SP knowledge features.", "Several baselines are compared in this work.", "The first two are conventional unsupervised ones: Recent Candidate , which simply selects the most recent 
noun phrase that appears in front of the target pronoun.", "Deterministic model (Raghunathan et al., 2010), which proposes one multi-pass seive model with human designed rules for the coreference resolution task.", "Besides the unsupervised models, we also compare with three representative supervised ones: Statistical model, proposed by Clark and Manning (2015), uses human-designed entity-level 9 https://dumps.wikimedia.org/enwiki/ 10 In Stanford parser results, when a verb is a linking verb (e.g., am, is), an 'nsubj' edge is created between its predicative and subject.", "Thus for this case the predicative is treated as the predicate for the subject (argument) in our study.", "11 If a noun phrase contains multiple words, we use the parsed result to locate its keyword and use it to represent the entire noun phrase.", "features between clusters and mentions for coreference resolution.", "Deep-RL model, proposed by Clark and Manning (2016), a reinforcement learning method to directly optimize the coreference matrix instead of the traditional loss function.", "End2end is the current state-of-the-art coreference model (Lee et al., 2018), which performs in an end-to-end manner and leverages both the contextual information and a pre-trained language model (Peters et al., 2018).", "Note that the Deterministic, Statistical, and Deep-RL models are included in the Stanford CoreNLP toolkit 12 , and experiments are conducted with their provided code.", "For End2end, we use their released code 13 and replace its mention detection component with gold mentions for the fair comparison.", "To clearly show the effectiveness of the proposed model, we also present a variation of our model as an extra baseline to illustrate the effect of different knowledge incorporation manner: Feature Concatenation , a simplified version of the complete model that removes the second knowledge processing layer, but directly treats all external knowledge embeddings as features and concatenates them to span representations.", "Following previous work (Lee et al., 2018), we use the concatenation of the 300d GloVe embeddings (Pennington et al., 2014) and the ELMo (Pe-ters et al., 2018) embeddings as the initial word representations.", "Out-of-vocabulary words are initialized with zero vectors.", "Hyper-parameters are 12 https://stanfordnlp.github.io/CoreNLP/coref.html 13 https://github.com/kentonl/e2e-coref set as follows.", "The hidden state of the LSTM module is set to 200, and all the feed-forward networks in our model have two 150-dimension hidden layers.", "The default pruning threshold t for softmax pruning is set to 10 7 .", "All linguistic features (plu-rality and AG) and external knowledge (SP) are encoded as 20-dimension embeddings.", "For model training, we use cross-entropy as the loss function and Adam (Kingma and Ba, 2015) as the optimizer.", "All the aforementioned hyper-parameters are initialized randomly, and we apply dropout rate 0.2 to all hidden layers in the model.", "Our model treats a candidate as the correct reference if its predicted overall score F ( n, p ) is larger than 0.", "The model training is performed with up to 100 epochs, and the best one is selected based on its performance on the development set.", "Table 2 compares the performance of our model with all baselines.", "Overall, our model performs the best with respect to all evaluation metrics.", "Several findings are also observed from the results.", "First, manually defined knowledge and features are not enough to cover rich contextual 
information.", "Deep learning models (e.g., End2end and our proposed models), which leverage text representations for context, outperform other approaches by a great margin, especially on the recall.", "Second, external knowledge is highly helpful in this task, which is supported by that our model outperforms the End2end model significantly.", "Moreover, the comparison between the two variants of our models is also interesting, where the final two-layer model outperforms the Feature Concatenation model.", "It proves that simply treating external knowledge as the feature, even though they are from the same sources, is not as effective as learning them in a joint framework.", "The reason Figure 5: Effect of different thresholds on candidate numbers.", "behind this result is mainly from the noise in the knowledge source, e.g., parsing error, incorrectly identified relations, etc.", "For example, the plurality of 17% noun phrases are wrongly labeled in the test data.", "As a comparison, our knowledge attention might contribute to alleviate such noise when incorporating all knowledge sources.", "Effect of Different Knowledge To illustrate the importance of different knowledge sources and the knowledge attention mechanism, we ablate various components of our model and report the corresponding F1 scores on the test data.", "The results are shown in Table 3, which clearly show the necessity of the knowledge.", "Interestingly, AG contributes the most among all knowledge types, which indicates that potentially more cases in the evaluation dataset demand on the AG knowledge than others.", "More importantly, the results also prove the effectiveness of the knowledge attention module, which contributes to the performance gap between our model and the Feature Concatenation one.", "Effect of Different Pruning Thresholds We try different thresholds t for the softmax pruning in selecting reliable candidates.", "The effects of different thresholds on reducing candidates and overall performance are shown in Figure 5 and 6 respectively.", "Along with the increase of t , both the max and the average number of pruned candidates drop quickly, so that the space complexity of the model can be reduced accordingly.", "Particularly, there are as much as 80% candidates can be filtered out when t = 10 1 .", "Meanwhile, when referring to Figure 6, it is observed that the model performs stable with the decreasing of candidate numbers.", "Not surprisingly, the precision rises when reducing candidate numbers, yet the recall drops dramatically, eventually results in the drop of F1.", "With the above observations, the reason we set t = 10 7 as the default threshold is straightforward: on this value, one-third candidates are pruned with almost no influence on the model performance in terms of precision, recall, and the F1 score.", "To further demonstrate the effectiveness of incorporating knowledge into pronoun coreference resolution, two examples are provided for detailed analysis.", "The prediction results of the End2end model and our complete model are shown in Table 4.", "There are different challenges in both examples.", "In Example A, Jesus', man', and my son' are all similar (male) noun phrases matching the target pronoun He '.", "The End2end model predicts all of them to be correct references because their context provides limited help in dis-Example A Example B Sentences ... 
(A large group of people) met (Jesus).", "tinguishing them.", "In Example B, the distance between an accident' and the pronoun it' is too far.", "As a result, the None' result from the End2end model indicates that the contextual information is not enough to make the decision.", "As a comparison, in our model, integrating external knowledge can help to solve such challenges, e.g., for Example A, SP knowledge helps when Plurality and AG cannot distinguish all candidates.", "To clearly illustrate how our model leverages the external knowledge, we visualize the knowledge attention of the correct reference against other candidates 14 via heatmaps in Figure 7.", "Two interesting observations are drawn from the visualization.", "First, given two candidates, if they are significantly different in one feature, our model tends to pay more attention to that feature.", "Take AG as an example, in Example A, the AG features of all candidates consistently match the pronoun 14 Only candidates entered the second layer are considered.", "he' (all male/neutral).", "Thus the comparison between my son' and all candidates pay no attention to the AG feature.", "While in Example B, the target pronoun it' cannot describe human, thus 'fa-ther' and friend' are 0 on the AG feature while hospital' and accident' are", "1. As a result, the attention module emphasizes AG more than other knowledge types.", "Second, The importance of SP is clearly shown in these examples.", "In example A, Plurality and AG features cannot help, the attention module weights higher on SP because son' appears 100 times as the argument of the parsed predicate child' in the SP knowledge base, while other candidates appear much less at that position.", "In example B, as mentioned above, once AG helps filtering 'hospital' and 'accident', SP plays an important role in distinguishing them because accident' appears 26 times in the SP knowledge base as the argument of the fault' from the results of the parser, while hospital' never appears at that position.", "Coreference resolution is a core task for natural language understanding, where it detects mention span and identifies coreference relations among them.", "As demonstrated in (Lee et al., 2017), mention detection and coreference prediction are the two major focuses of the task.", "Different from the general coreference task, pronoun coreference resolution has its unique challenge since the semantics of pronouns are often not as clear as normal noun phrases, in general, how to leverage the context and external knowledge to resolve the coreference for pronouns becomes its focus (Hobbs, 1978; Rahman and Ng, 2011; Emami et al., 2018).", "In previous work, external knowledge including manually defined rules (Hobbs, 1978; Ng, 2005), such as number/gender requirement of different pronouns, and world knowledge (Rahman and Ng, 2011), such as selectional preference (Wilks, 1975; Zhang and Song, 2018), have been proved to be helpful for pronoun coreference resolution.", "Recently, with the development of deep learning, Lee et al. 
(2017) proposed an end-to-end model that learns contextual information with an LSTM module and proved that such knowledge is helpful for coreference resolution when the context is properly encoded.", "The aforementioned two types of knowledge have their own advantages: the contextual information covers diverse text expressions that are difficult to be predefined while the external knowledge is usually more precisely constructed and able to provide extra information beyond the training data.", "Different from previous work, we explore the possibility of joining the two types of knowledge for pronoun coreference resolution rather than use only one of them.", "To the best of our knowledge, this is the first attempt that uses deep learning model to incorporate contextual information and external knowledge for pronoun coreference resolution.", "In this paper, we proposed a two-layer model for pronoun coreference resolution, where the first layer encodes contextual information and the second layer leverages external knowledge.", "Particularly, a knowledge attention mechanism is proposed to selectively leverage features from different knowledge sources.", "As an enhancement to existing methods, the proposed model combines the advantage of conventional feature-based models and deep learning models, so that context and external knowledge can be synchronously and effectively used for this task.", "Experimental results and case studies demonstrate the superiority of the proposed model to state-of-the-art baselines.", "Since the proposed model adopted an extensible structure, one possible future work is to explore the best way to enhance it with more complicated knowledge resources such as knowledge graphs.", "This paper was partially supported by the Early Career Scheme (ECS, No.26206717) from Research", "Research Grants Council in Hong Kong.", "In addition, Hongming Zhang has been supported by the Hong Kong Ph.D.", "Fellowship and the Tencent Rhino-Bird Elite Training Program.", "We also thank the anonymous reviewers for their valuable comments and suggestions that help improving the quality of this paper." ]
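Looking back at the knowledge attention module described above (Eqs. 7-10), the following is a minimal PyTorch sketch of the pair-wise weighting and scoring. The module name, network shapes, and hidden size are our assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeAttention(nn.Module):
    """Sketch of Eqs. 7-10: weight m knowledge sources for a candidate pair,
    then combine the per-source scores into f_k(n, n', p)."""
    def __init__(self, d_e, d_k, m, hidden=150):
        super().__init__()
        self.m = m
        # NN_ka of Eq. 7: input is [o_n, o_n', o_n * o_n'] with o = [e, k]
        self.weight_net = nn.Sequential(
            nn.Linear(3 * (d_e + d_k), hidden), nn.ReLU(), nn.Linear(hidden, 1))
        # Scoring component (Eq. 9 analogue): uses only k, so e cannot dominate
        self.score_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(3 * d_k, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(m)])

    def forward(self, e_n, e_n2, k_n, k_n2):
        # e_n, e_n2: (d_e,) span representations; k_n, k_n2: (m, d_k) features
        o_n = torch.cat([e_n.expand(self.m, -1), k_n], dim=-1)
        o_n2 = torch.cat([e_n2.expand(self.m, -1), k_n2], dim=-1)
        logits = self.weight_net(torch.cat([o_n, o_n2, o_n * o_n2], dim=-1))
        w = F.softmax(logits.squeeze(-1), dim=0)                    # Eq. 8
        s = torch.stack([
            self.score_nets[i](torch.cat([k_n[i], k_n2[i], k_n[i] * k_n2[i]]))
            for i in range(self.m)]).squeeze(-1)                    # Eq. 9
        return (w * s).sum()                                        # Eq. 10
```

The overall score F_k(n, p) of Eq. 11 would then average this pair-wise output over all other candidates n'.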
[ "abstain", "objective", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "objective", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other" ]
[ "Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks (Devlin et al., 2019).", "Most of the existing approaches rely on a randomly initialized classifier on top of such networks.", "We argue that this fine-tuning procedure is sub-optimal as the pre-trained model has no prior on the specific classifier labels, while it might have already learned an intrinsic textual representation of the task.", "In this paper, we introduce a new scoring method that casts a plausibility ranking task in a full-text format and leverages the masked language modeling head tuned during the pre-training phase.", "We study commonsense reasoning tasks where the model must rank a set of hypotheses given a premise, focusing on the COPA (Gordon et al., 2012), Swag (Zellers et al., 2018), HellaSwag (Zellers et al., 2019) and CommonsenseQA (Talmor et al., 2019) datasets.", "By exploiting our scoring method without fine-tuning, we are able to produce strong baselines (e.g. 80% test accuracy on COPA) that are comparable to supervised approaches.", "Moreover, when fine-tuning directly on the proposed scoring function, we show that our method provides a much more stable training phase across random restarts (e.g 10 standard deviation reduction on COPA test accuracy) and requires less annotated data than the standard classifier approach to reach equivalent performances.", "Recent advances in natural language processing have been made using sequential transfer learning over large pre-trained transformer models.", "From these models, most NLP tasks can be addressed by adding a classifier on top of the transformer embedding outputs (Devlin et al., 2019; Liu et al., 2019) .", "In this paper, we tackle a subset of NLP tasks consisting in plausibility ranking.", "Such tasks can be formalised as follows: given a unique premise p and a set of hypotheses H = { h i } i =1 ...n , the task consists in returning the appropriate hypothesis h H that matches p (see Section 3 for more details).", "A natural task that fits into this problem formulation is commonsense reasoning.", "Thus, it will be the main focus of the present paper.", "Traditionally, this problem is solved by jointly classifying each pair ( p, h i ) i =1 ...n .", "For instance, assuming a Masked Language Modeling (MLM) model is used, an example from the COPA dataset (Gordon et al., 2012) is commonly casted into two distinct examples: [CLS] The man broke his toe.", "[SEP] He dropped a hammer on his foot.", "[SEP] correct [CLS] The man broke his toe.", "[SEP] He got a hole in his sock.", "[SEP] incorrect The special token [CLS] (used for sentence level tasks) is then provided to a classifier in order to predict the label of the given example; [SEP] is a special separator token.", "This format will be referred to as separated-sentence .", "For such a task, the use of the randomly initialized head can appear sub-optimal since the pre-trained model does not integrate any prior on the specific classifier label.", "To validate this intuition, we cast the MLM model inputs into a full-text format.", "Thus, the separation token is dropped and potentially replaced by conjunction words that are fully specific to the task.", "The previously illustrated correct example will be turned into: [CLS] The man broke his toe because he dropped a hammer on his foot [SEP] .", "tuned during the pre-training phase (see Figure 1 for an overview of the proposed approach).", "This method produces strong zero-shot 1 baselines on the COPA (Gordon et 
al., 2012), Swag (Zellers et al., 2018), HellaSwag (Zellers et al., 2019) and CommonsenseQA (Talmor et al., 2019) datasets.", "Then, we fine-tune this new scoring function with a margin-based loss as proposed in (Li et al., 2019).", "Using RoBERTa LARGE , our results reveal that this new training procedure leads to better accuracy and much more stable training trajectories which is an important feature since large MLM models are known to be unstable on several tasks (Devlin et al., 2019; Phang et al., 2018).", "Finally, we find that a progressive decrease of the training dataset size results in a progressive increase of the accuracy gap between our proposed method and the standard classifier ones.", "This makes our method advantageous in small dataset context.", "In (Trinh and Le, 2018), researchers have shown that a RNN Language Model pretrained on a large amount of data can be used to efficiently score sentences in a zero-shot setting.", "They used the Winograd Schema Challenge (WSC-273) dataset (Levesque et al., 2012) which mostly consists of a pronoun disambiguation task that requires commonsense reasoning.", "In their approach, the pronoun to disambiguate is replaced by the different candidates.", "Then, each version of the sentence is scored using the likelihood of the sequence under the forward autoregressive factorization.", "They showed that targeting the likelihood of the tokens placed after the candidate words performs better than a full-sentence likelihood estimation.", "This result highlights the fact that the choice of the targeted sub-sequence for the likelihood estimation has an important impact on the overall performance of the model.", "More recently, analysis of relational knowledge contained in pre-trained BERT models has been the subject of different studies (Petroni et al., 2019; Poerner et al., 2019).", "Results have shown evidences that BERT models memorize reasoning about entity names and commonsense knowledge, making MLM models appropriate candidates to commonsense oriented tasks.", "From a supervised learning perspective, (Li et al., 2019) proposed to replace the traditional 1 For the following of our paper, we will note as zero-shot setting the use of the pre-trained model without fine-tuning.", "cross-entropy loss with a margin-based one one the COPA dataset.", "The authors argued that cross-entropy based methods are not adapted for plausibility ranking tasks since they force the scores to adopt extreme values (near 0 or 1).", "In contrast, a margin-based objective function appeared to be a natural way to rank a set of hypotheses.", "Both approaches were compared using the [CLS] token of the BERT-base model and a separated-sentence input format.", "The margin-based objective function surpassed the cross-entropy one by increasing the Test set accuracy from 73.4% to 75.4%.", "Adopting a token level scoring approach (Koci-jan et al., 2019) used a BERT model with a mixture between a margin-based and a MLM loss on WSC-273 to score the different pronouns to disambiguate.", "This approach allows the authors to improve the previous state of the art by 8.8%.", "Despite being the closest method to the one proposed in this paper, our approach differs from three points: We generalize the scoring method by targeting different contiguous sub-sequences for the likelihood estimation.", "To do so, different datasets are recasted in a full-text format .", "We also focus on targeting the premise avoiding inner statistical biases of different hypotheses (e.g. 
word frequencies, punctuation, variable sequence lengths etc...).", "The objective of the present paper is to propose a direct comparison in terms of accuracy and training stability across random restarts between the proposed method and standard classifers.", "we aim to identify the fitting hypothesis h H which correctly matches p .", "The values L p and { L i } i =1 ...n are the sequence lengths of premise and hypotheses respectively.", "In a commonsense settings, such problem corresponds to find premise-hypothesis implications by exploiting some prior commonsense knowledge.", "Since 1 2 3 4 5 [CLS] [MASK] man broke his toe because he dropped a hammer on his foot [SEP] [CLS] The [MASK] broke his toe because he dropped a hammer on his foot [SEP] [CLS] The man broke his [MASK] because he dropped a hammer on his foot [SEP] [CLS] The man [MASK] his toe because he dropped a hammer on his foot [SEP] [CLS] The man broke [MASK] toe because he dropped a hammer on his foot [SEP] a Ro SSM SSM ............... a Ro shared weights 1 2 3 4 5 Figure 1: Overview of the proposed method for the task t = COPA.", "our scoring method consumes input sequences in a full-text format (see Section 3.2), our method is formulated on a commonsense task but not limited to it.", "The proposed Sequence Scoring Method (SSM), takes as input a pair (cid:104) p, h i (cid:105) returns a score representing", "representing the likelihood of h i of being implied by p .", "First, a transform operator T converts (cid:104) p, h i (cid:105) pair into a full-text input.", "Such operator, in it's simplest form, just concatenates the two sequences.", "However, in general T can be constrained on the task t .", "where s i is the resulting full-text input, while c tl , c tm , and c tr are left, middle and right conjunction sequences of the task.", "For example, Swag will have no conjunction, since the correct hypothesis is the natural continuation of the premise, while COPA will have because/so middle conjunctions due to its cause/effect nature (see Section 4).", "order to compute its result.", "Let us consider the masking of a word w which contributes to make sense of the matching between p and h i .", "The intuition is that the confidence of the network in recovering such word is directly related to the score of (cid:104) p, h i (cid:105) .", "Let us define, inspired by the notation of (Song et al., 2019), s \\ wi as the sentence s i with the tokens of w replaced by the [MASK] token.", "where premise words are masked one by one in order to compute their relevance with respect to the given hypothesis.", "Masked word probability is estimated from direct inference on a model pretrained on MLM task.", "The computational complexity of such method grows linearly with L p (re-quiring L p examples per forward pass).", "Alternatively, the target hypothesis score is computed as: S hi = 1 L i L i (cid:88) k =1 log (cid:20) P (cid:18) h ( k ) i | s \\ h ( k ) i i (cid:19)(cid:21) .", "The target hypothesis score needs normalization by L i in order to allow comparison between variable candidate hypothesis length.", "The best hypothesis will be taken as the one maximizing the target premise (or hypothesis) score: h = h j H s.t. max i =1 ...n S pi = S pj .", "As demonstrated in Section 5.2, the target premise score allows for a fairer comparison between different hypotheses.", "In fact, they present inherent differences in terms of statistical frequency of words, sequence length or may exhibit more or less strong inter-dependency between words (e.g. 
composite words reinforce each other confidence).", "Such variance could introduce a bias in the relative significance of each hypothesis alone (inde-pendently from the premise).", "On the opposite, different probabilities on the same target premise word can only be affected by the change of hypothesis context.", "We can extend the proposed SSM by scoring the reconstruction not only of single words, but of entire n-grams.", "Adding n-grams probabilities to the logarithmic mean combination not only robustifies the scoring methods, but helps to better model the joint probability of (dependent) close words, especially in a zero-shot setting.", "Let us note as p ( u : v ) as the sub-sequence of p spanning between indexes u and v (included).", "The partial target premise score for g-grams (i.e. mask windows of size g ) can be expressed as: S p,gi = L p g +1 (cid:88) k =1 log (cid:104) P (cid:16) p ( k : k + g 1) | s \\ p ( k : k + g 1) i (cid:17)(cid:105) .", "By definition the target premise score in Equation 2 is equivalent to 1-gram partial target premise score (i.e. S pi (cid:44) S p, 1 i ).", "The n-gram sequence scoring accumulates masked language model probabilities from every gram size till n .", "The proposed score function, since it does not imply any addition of a head module, can be directly applied without any retraining (see Section 5.2).", "It can also be directly used when fine-tuning on the task.", "i j =1", "..L p , are batched together in order to compute score S pi in one forward pass.", "The model acts as a siamese network that performs independent computation of target premise score for each hypothesis h i .", "As already noted in (Li et al., 2019), multiple choice tasks (e.g. COPA) are more naturally expressed as learning to rank problems.", "For this reason we adopt as objective function a margin-based loss in contrast to cross-entropy loss.", "Given ground truth sentence index i , the loss is specified as: L = 1 n n (cid:88) i =1 i (cid:54) = i max (0 , S pi + S pi ) , (6) where is a margin threshold hyperparameter.", "According to our preliminary experiments, we do not add a second MLM component in the general loss (as in (Kocijan et al., 2019)), since it always leads to a decrease of the model performance for various weighted contributions of the MLM term.", "The commonsense reasoning datasets that we focus on are COPA (Gordon et al., 2012), Swag (Zellers et al., 2018), HellaSwag (Zellers et al., 2019) and CommonsenseQA (Talmor et al., 2019).", "All these datasets share the premise-hypothesis task format.", "Table 1 shows examples of full-text format and separated-sentence format for all datasets.", "COPA (Choice of Plausible Alternatives) (Gordon et al., 2012) is a commonsense causal reasoning task where two candidate hypotheses are given.", "COPA itself is composed of two sub-tasks: effect samples and cause samples.", "The effect and cause samples have respectively implies and implied by relation with the correct hypothesis.", "The full-text format of COPA is built by using the conjunction words because (resp. so ) as middle conjunctions for cause (resp. 
effect ) samples.", "Concerning the separated-sentence format, we reverse the premise and hypothesis order for cause samples in Dataset Full-text format Separated-sentence format COPA (effect) [CLS] I knocked on my neighbor's door so my neighbor invited me in.", "order to convert all cause samples into effect samples.", "This has the benefit to present a unique task to the model, and our experiments show that this give better results than keeping cause samples and effect samples unmodified.", "We choose the Super-GLUE split (Wang et al., 2019).", "CommonsenseQA (Talmor et al., 2019) is a multiple-choice commonsense question answering dataset where each question has one correct answer and four distractor answers.", "To create the full-text format, we prepend Q: to the question, A: to the answer, and then concatenate the question and the answer ( stands for space charac-ter).", "For the separated-sentence format, we also use the Q: and A: prefixes to follow the best recommendation from the FairSeq repo on how to fine-tune RoBERTa on CommonsenseQA 2 .", "Since the benchmark Test set is private, for our zero-shot and fine-tuning stability studies we have split the original validation set evenly, treating last 611 samples as Test set Test .", "Swag (Situations With Adversarial Generations) (Zellers et al., 2018) is a multiple choice commonsense dataset about grounded situations.", "Each premise is a video caption with four answer choices about what might happen next in the scene.", "The correct answer is the video caption for the next event in the video.", "The other negative an-2 https://github.com/pytorch/fairseq/tree/ master/examples/roberta/commonsense qa swers are created via Adversarial Filtering: generated by language modeling models and filtered by discriminator models.", "HellaSwag (Zellers et al., 2019) is an evolved version of Swag using better generators and discriminators models for Adversarial Filtering.", "Since the benchmark test set is private, we evaluate our zero-shot setting on the Val set (we do not perform a fine-tuning study on Swag and HellaSwag as explained in Section 5.3).", "In this section we first apply our scoring method in a zero-shot setting on the four aforementioned datasets.", "Then we fine-tune our scoring method while varying the percentage of the training data used and compare it to approaches that use a randomly initialized classifier head.", "We use RoBERTa LARGE (Liu et al., 2019) for our pretrained model as RoBERTa LARGE fine-tuned with a classification layer on top has very competitive results on those datasets.", "Our implementation use PyTorch and the HuggingFace Transformers library (Wolf et al., 2019).", "Before assessing our zero-shot and fine-tuning results, we perform a task probing by evaluating the zero-shot score we obtain by removing the premise from the input and only scoring the hypotheses.", "If the score is significantly better than a random baseline, it means that the task is not actually solved by commonsense reasoning, but by using statistical biases in the hypotheses.", "This probing method has been already used on several Dataset Mode Acc 1 (%) COPA hyp-only 54.6 random 50.0 CommonsenseQA hyp-only 22.0 random 20.0 Swag hyp-only 60.6 random 25.0 HellaSwag hyp-only 50.8 random 25.0 Table 2: Commonsense reasoning task probing.", "datasets to show that the underlying task was not really solved by the top-performing models (Niven and Kao, 2019; Zellers et al., 2019).", "The results of the task probing evaluation are reported in Table 
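As an illustration of the scoring method, here is a minimal sketch of the 1-gram target premise score (Eq. 2) using HuggingFace Transformers. The helper name, the token-alignment shortcut, and the per-token loop (the paper batches all masked copies into one forward pass) are our assumptions, not the paper's released code.

```python
import torch
from transformers import RobertaTokenizer, RobertaForMaskedLM

tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
model = RobertaForMaskedLM.from_pretrained("roberta-large").eval()

def target_premise_score(premise, hypothesis, conjunction=" because "):
    """Mask each premise token in the full-text input in turn and sum its
    masked-LM log-probability; higher means the hypothesis better supports
    the premise. Assumes the premise tokenization is stable as a prefix."""
    full_text = premise.rstrip(".") + conjunction + hypothesis
    input_ids = tokenizer(full_text, return_tensors="pt")["input_ids"]
    premise_len = len(tokenizer(premise.rstrip("."))["input_ids"]) - 2  # drop <s>, </s>
    score = 0.0
    with torch.no_grad():
        for pos in range(1, 1 + premise_len):  # premise token positions
            masked = input_ids.clone()
            gold = masked[0, pos].item()
            masked[0, pos] = tokenizer.mask_token_id
            logits = model(masked).logits
            score += torch.log_softmax(logits[0, pos], dim=-1)[gold].item()
    return score

# Eq. 5: pick the hypothesis maximizing the target premise score, e.g.
# best = max(hypotheses, key=lambda h: target_premise_score(premise, h))
```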
2.", "While COPA and CommonsenseQA have a hypothesis only score close to the random baseline, the score of both Swag and HellaSwag are significantly higher than their random baseline (more than twice).", "This confirms the study from (Zellers et al., 2019) that shows that Swag's false hypotheses were generated using a weak generator, therefore the authors argue that the fine-tuning process on a BERT model on Swag learns to pick up the statistical cues left by the weak generator.", "Our results show that RoBERTa LARGE can leverage these distributional biases without the fine-tuning phase.", "We argue that the human-written pre-training corpora of RoBERTa biases it to give better score to human-written language rather than model-generated sentences.", "As shown in (Holtzman et al., 2019), there is indeed still a strong distributional differences between human text and machine text.", "Furthermore, our result also highlights that HellaSwag still exhibits a strong bias due to its generation scheme when evaluated with RoBERTa LARGE .", "For both COPA and CommonsenseQA, the best performing scoring method uses the target premise and 4-grams settings as shown in Tables 3 and 4.", "Targeting the premise gives better results than targeting the hypothesis, which reinforces our argument that targeting the hypothesis may be harder as the differences between the hypotheses make the score comparison noisier.", "Also, more grams Target Grams Test Acc (%) premise 1 74.0 hypothesis 1 69.8 premise 2 76.2 premise 3 79.0 premise 4 80.0 premise 5 79.4 Table 3: COPA zero-shot results.", "give increasingly better results but the trend inverts after 4-grams, which may be due to the fact that masked models are not trained to mask large chunks of text.", "It is interesting to note that our zero-shot result is significantly better than a BERTLARGE cross-entropy model fined-tuned on the COPA training set (80.0% vs. 
"Moreover, when we intentionally switch the 'so' and 'because' conjunction words on COPA to make the samples erroneous, the accuracy drops significantly (to 64.4%).", "We reckon this is an indicator that our scoring method effectively reuses the representations pre-learned for the full-text format of the task.", "Concerning Swag and HellaSwag, the target hypothesis mode is significantly better than the target premise mode (see Table 5), as expected from our task probing in Section 5.1.", "For example, on HellaSwag, the target hypothesis mode is only 8 points better than the hypothesis-only mode (58.8% versus 50.8%), which confirms that in this setting our zero-shot method is mainly taking advantage of the bias in the hypotheses.", "Therefore, we refrain from doing more zero-shot experiments on both datasets.", "Following the strong bias of Swag and HellaSwag shown in Section 5.1 using our scoring method with RoBERTa-large, we decided not to include them in our fine-tuning study, to be sure to compare results for which models learn the actual premise-hypothesis commonsense reasoning task.", "In order to make fair comparisons, we train and compare three different model settings: (i) a randomly initialized classifier head with cross-entropy loss and separated-sentence format (head CE), where the cross-entropy loss is computed on the probability of the correct candidate, normalized over all candidates in the set (see Equation 1 in (Li et al., 2019)); (ii) a randomly initialized head trained with the margin-based loss (head margin), an ablated version of our scoring method used to verify that our reuse of the MLM head actually provides a significant advantage over a randomly initialized head; and (iii) our method.", "For our method, we report results only for the best-performing scoring method, which is the target premise mode.", "Experiments showed us that varying the number of grams produces comparable results, so we use the 1-gram setting for computational efficiency.", "We reckon that the enriched bidirectional context granted by the n-gram score can be learned directly when fine-tuning on the task.", "For each dataset, we train the three model settings with 20 random seeds each.", "For each seed, we pick the best-performing model on the validation set and report its accuracy on the test set.", "We then compute the max accuracy, mean accuracy and standard deviation of each model setting on the test set.", "For all model settings, following the recommended hyper-parameters for fine-tuning RoBERTa-large (Liu et al., 2019), we set a learning rate of 1e-5, a warm-up ratio of 6% of the total number of training steps, a linear learning rate decay and a weight decay of 0.01.", "We use a batch size of 8 for COPA (4 for the 10% training-percentage setting) and 16 for CommonsenseQA.", "For the margin-based loss (ours and head margin), we set $\eta = 0.5$ after a few trials.",
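A sketch (ours, in PyTorch) of the margin-based loss of Equation 6 for a single question with n candidate scores; the 1/n normalisation follows the equation as given above.

```python
# Sketch (ours) of the margin-based ranking loss of Eq. 6:
# L = (1/n) * sum_{i != i*} max(0, eta - S_{i*} + S_i).
import torch

def margin_loss(scores: torch.Tensor, gold: int, eta: float = 0.5):
    wrong = torch.cat([scores[:gold], scores[gold + 1:]])
    hinge = torch.clamp(eta - scores[gold] + wrong, min=0.0)
    return hinge.sum() / scores.numel()
```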
"On both COPA and CommonsenseQA, our method outperforms both the head CE and head margin methods in terms of mean accuracy and max/best accuracy (see Figures 2 and 3).", "Moreover, we find that a progressive decrease of the training set size results in a progressive increase of the best-accuracy gap between our method and the other ones.", "This confirms our intuition that our method is most advantageous when little training data is available.", "For example, when using 1% of the CommonsenseQA training data, our method achieves an accuracy of 56.7% on the test set (vs. 40.2% for the head CE approach).", "Using the whole training data, our approach still outperforms the other methods, but by a smaller margin (76.4% accuracy versus 75.4% for head CE).", "In addition, when evaluated on the CommonsenseQA private test set, our approach gets 71.6% accuracy, which is close to RoBERTa-large cross-entropy (Liu et al., 2019) under an extensive hyper-parameter grid search (72.1% accuracy; see https://github.com/pytorch/fairseq/tree/master/examples/roberta/commonsense_qa).", "When using 100% of the COPA training set (400 training samples), our method outperforms the head CE setting by 5 points and the head margin setting by 3 points, achieving an accuracy of 92.4% on the test set.", "This result places our approach second on the SuperGLUE leaderboard (Wang et al., 2019), between RoBERTa-large (Liu et al., 2019) and the 11-billion-parameter T5 model (Raffel et al., 2019) (90.6% and 94.8% test accuracy, respectively).", "We also notice that our method provides much more stable training with respect to the random seed, as shown by the box plots in Figures 2a and 3a.", "When training on the full COPA dataset, our method exhibits a roughly tenfold reduction of the standard deviation of the test accuracy compared to the head CE setting (1.35% versus 12.8%).", "Our intuition is that the improved stability is due to the better reuse of the pre-trained model priors and the absence of new randomly initialized weights.", "This is an important result towards easier experiment comparisons, as fine-tuning BERT-like architectures is known to be unstable across random restarts, as shown in (Phang et al., 2018).", "In this work, we presented a new method for plausibility ranking tasks, specifically targeting commonsense ranking problems.", "We define a scoring function that leverages the MLM head of large pre-trained bidirectional transformer models.", "We establish strong results in a zero-shot setting on four commonsense reasoning datasets, comparable to supervised approaches.", "We then fine-tune such a model using a margin-based loss on the proposed scoring function, and provide a comparative study with state-of-the-art randomly initialized head methods.", "Our study demonstrates that the direct use of the MLM head, rather than a custom head, yields increasing performance gains as the training data size decreases.", "The proposed approach outperforms state-of-the-art training methods in terms of both test accuracy and training stability.", "Future work includes applying this scoring method to broader classification tasks such as Natural Language Inference and Sentiment Analysis.", "We also think that our token-level scoring method could be used during the self-supervised pre-training phase to extend traditional next-sentence prediction and sequence-ordering tasks, bringing more commonsense knowledge into the model." ]
[ "abstain", "abstain", "method", "objective", "method", "method", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "method", "other", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "other", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "result", "result", "abstain", "result", "result", "abstain", "result", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "objective", "objective", "abstain", "abstain", "objective" ]
[ "Sequence labeling systems should perform reliably not only under ideal conditions but also with corrupted inputsas these systems often process user-generated text or follow an error-prone upstream component.", "To this end, we formulate the noisy sequence labeling problem, where the input may undergo an unknown noising process and propose two Noise-Aware Training (NAT) objectives that improve robustness of sequence labeling performed on perturbed input: Our data augmentation method trains a neural model using a mixture of clean and noisy samples, whereas our stability training algorithm encourages the model to create a noise-invariant latent representation.", "We employ a vanilla noise model at training time.", "For evaluation, we use both the original data and its variants perturbed with real OCR errors and misspellings.", "Extensive experiments on English and German named entity recognition benchmarks confirmed that NAT consistently improved robustness of popular sequence labeling models, preserving accuracy on the original input.", "We make our code and data publicly available for the research community.", "Sequence labeling systems are generally trained on clean text, although in real-world scenarios, they often follow an error-prone upstream component, such as Optical Character Recognition (OCR; Neudecker, 2016) or Automatic Speech Recognition (ASR; Parada et al., 2011).", "Sequence labeling is also often performed on user-generated text, which may contain spelling mistakes or typos (Derczynski et al., 2013).", "Errors introduced in an upstream task are propagated downstream, diminishing the performance of the end-to-end system (Alex and Burns, 2014).", "While humans can easily cope with typos, misspellings, and the complete omission of letters when reading (Rawlinson, reference text: Singapore sees prestige in hosting WTO .", "2007), most Natural Language Processing (NLP) systems fail when processing corrupted or noisy text (Belinkov and Bisk, 2018).", "Although this problem is not new to NLP, only a few works addressed it explicitly (Piktus et al., 2019; Karpukhin et al., 2019).", "Other methods must rely on the noise that occurs naturally in the training data.", "In this work, we are concerned with the performance difference of sequence labeling performed on clean and noisy input.", "Is it possible to narrow the gap between these two domains and design an approach that is transferable to different noise distributions at test time?", "Inspired by recent research in computer vision (Zheng et al., 2016), Neural Machine Translation (NMT; Cheng et al., 2018), and ASR (Sperber et al., 2017), we propose two Noise-Aware Training (NAT) objectives that improve the accuracy of sequence labeling performed on noisy input without reducing efficiency on the original data.", "Figure 1 illustrates the problem and our approach.", "Our contributions are as follows: We formulate a noisy sequence labeling problem, where the input undergoes an unknown noising process ( 2.2), and we introduce a model to estimate the real error distribution ( 3.1).", "Moreover, we simulate real noisy input with a novel noise induction procedure ( 3.2).", "We propose a data augmentation algorithm ( 3.3) that directly induces noise in the input data to perform training of the neural model using a mixture of noisy and clean samples.", "We implement a stability training method (Zheng et al., 2016), adapted to the sequence labeling scenario, which explicitly addresses the noisy input data problem by encouraging the model to 
"We evaluate our methods on real OCR errors and misspellings against state-of-the-art baseline models (Peters et al., 2018; Akbik et al., 2018; Devlin et al., 2019) and demonstrate the effectiveness of our approach (§4).", "To support future research in this area and to make our experiments reproducible, we make our code and data publicly available (NAT repository on GitHub: https://github.com/mnamysl/nat-acl2020).", "Figure 2 presents a typical architecture for the neural sequence labeling problem.", "We will refer to the sequence labeling system as $F(x; \theta)$, abbreviated as $F(x)$, where $x = (x_1, \ldots, x_N)$ is a tokenized input sentence of length N and $\theta$ represents all learnable parameters of the system (we drop the parameter $\theta$ for brevity in the remainder of the paper; nonetheless, we still assume that all components of $F(x; \theta)$ and all expressions derived from it also depend on $\theta$).", "$F(x)$ takes x as input and outputs the probability distribution over the class labels $y(x)$ as well as the final sequence of labels $y = (y_1, \ldots, y_N)$.", "Either a softmax model (Chiu and Nichols, 2016) or a Conditional Random Field (CRF; Lample et al., 2016) can be used to model the output distribution over the class labels $y(x)$ from the logits $l(x)$, i.e., non-normalized predictions, and to output the final sequence of labels y.", "As a labeled entity can span several consecutive tokens within a sentence, special tagging schemes are often employed for decoding, e.g., BIOES, where the Beginning, Inside, Outside, End-of-entity and Single-tag-entity sub-tags are also distinguished (Ratinov and Roth, 2009).", "This method introduces strong dependencies between subsequent labels, which are modeled explicitly by a CRF (Lafferty et al., 2001) that produces the most likely sequence of labels.", "Similar to human readers, sequence labeling should perform reliably both in ideal and in sub-optimal conditions.", "Unfortunately, this is rarely the case.", "User-generated text is a rich source of informal language containing misspellings, typos, or scrambled words (Derczynski et al., 2013).", "Noise can also be introduced in an upstream task, like OCR (Alex and Burns, 2014) or ASR (Chen et al., 2017), causing the errors to be propagated downstream.", "To include the noise present on the source side of $F(x)$, we can modify its definition accordingly (Figure 2).", "Let us assume that the input sentence x is additionally subjected to some unknown noising process $\Gamma = P(\tilde{x}_i \mid x_i)$, where $x_i$ is the original i-th token and $\tilde{x}_i$ is its distorted equivalent.", "Let V be the vocabulary of tokens and $\tilde{V}$ be the set of all finite character sequences over an alphabet $\Sigma$.", "$\Gamma$ is known as the noisy channel matrix (Brill and Moore, 2000) and can be constructed by estimating the probability $P(\tilde{x}_i \mid x_i)$ of each distorted token $\tilde{x}_i$ given the intended token $x_i$, for every $x_i \in V$ and $\tilde{x}_i \in \tilde{V}$.", "We study the effectiveness of state-of-the-art Named Entity Recognition (NER) systems in handling imperfect input data.", "NER can be considered a special case of the sequence labeling problem, where the goal is to locate all named entity mentions in unstructured text and to classify them into pre-defined categories, e.g., person names, organizations, and locations (Tjong Kim Sang and De Meulder, 2003).", "NER systems are often trained on clean text.", "Consequently, they exhibit degraded performance in real-world scenarios where the transcriptions are produced by a previous upstream component, such as OCR or ASR (§2.2), which results in a detrimental mismatch between the training and the test conditions.",
"Our goal is to improve the robustness of sequence labeling performed on data from noisy sources, without deteriorating performance on the original data.", "We assume that the source sequence of tokens x may contain errors.", "However, the noising process is generally label-preserving, i.e., the level of noise is not significant enough to affect the corresponding labels (moreover, a human reader should be able to infer the correct label $y_i$ from the token $\tilde{x}_i$ and its context; we assume that this corresponds to a character error rate of at most 20%).", "It follows that the noisy token $\tilde{x}_i$ inherits the ground-truth label $y_i$ from the underlying original token $x_i$.", "To model the noise, we use the character-level noisy channel matrix $\Gamma$, which we will refer to as the character confusion matrix (§2.2).", "The natural error distribution can be estimated by aligning the sentence pairs in P using the Levenshtein distance metric (Levenshtein, 1966), where P is a corpus of paired noisy and manually corrected sentences (§2.2).", "The allowed edit operations include insertions, deletions, and substitutions of characters.", "We can model insertions and deletions by introducing an additional $\varepsilon$ symbol into the character confusion matrix.", "The probability of insertion and deletion can then be formulated as $P_{ins}(\tilde{c} \mid \varepsilon)$ and $P_{del}(\varepsilon \mid c)$, where $\tilde{c}$ and c are characters to be inserted or deleted, respectively.", "Synthetic noise: such a parallel corpus P is usually laborious to obtain.", "Moreover, exact modeling of the noise might be impractical, and it is often difficult to accurately estimate the exact noise distribution to be encountered at test time.", "Such distributions may depend on, e.g., the OCR engine used to digitize the documents.", "Therefore, we keep the estimated natural error distribution for evaluation and use a simplified synthetic error model for training.", "We assume that all types of edit operations are equally likely: $\sum_{\tilde{c} \in \Sigma \setminus \{\varepsilon\}} P_{ins}(\tilde{c} \mid \varepsilon) = P_{del}(\varepsilon \mid c) = \sum_{\tilde{c} \in \Sigma \setminus \{c,\, \varepsilon\}} P_{subst}(\tilde{c} \mid c)$, where c and $\tilde{c}$ are the original and the perturbed characters, respectively.", "Moreover, $P_{ins}$ and $P_{subst}$ are uniform over the sets of allowed insertion and substitution candidates, respectively.", "We use a hyper-parameter $\eta$ to control the amount of noise induced with this method (we describe the details of our vanilla error model, along with examples of confusion matrices, in the appendix).", "Ideally, we would use noisy sentences annotated with named entity labels for training our sequence labeling models.", "Unfortunately, such data is scarce.", "On the other hand, labeled clean text corpora are widely available (Tjong Kim Sang and De Meulder, 2003; Benikova et al., 2014).", "Hence, we propose to use standard NER corpora and to synthetically induce noise into the input tokens during training.", "In contrast to the image domain, which is continuous, the text domain is discrete, and we cannot directly apply continuous perturbations to written language.", "Although some works applied distortions at the level of embeddings (Miyato et al., 2017; Yasunaga et al., 2018; Bekoulis et al., 2018), we do not have a good intuition of how such distortions change the meaning of the underlying textual input.", "Instead, we apply our noise induction procedure to generate distorted copies of the input.",
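A sketch (ours) of how such a character confusion matrix could be estimated from a corpus P of noisy/corrected pairs; we use Python's difflib alignment as a stand-in for a full Levenshtein alignment and truncate unequal replacement spans for simplicity.

```python
# Sketch (ours): estimate P(noisy char | clean char), with an epsilon
# symbol modelling insertions and deletions, from aligned pairs.
from collections import defaultdict
from difflib import SequenceMatcher

EPS = "\u03b5"  # empty symbol for insertions/deletions

def confusion_matrix(pairs):
    """pairs: iterable of (clean, noisy) strings from corpus P."""
    counts = defaultdict(lambda: defaultdict(int))
    for clean, noisy in pairs:
        for op, i1, i2, j1, j2 in SequenceMatcher(
                None, clean, noisy).get_opcodes():
            if op == "equal":
                for c in clean[i1:i2]:
                    counts[c][c] += 1
            elif op == "replace":  # character substitutions
                for c, n in zip(clean[i1:i2], noisy[j1:j2]):
                    counts[c][n] += 1
            elif op == "delete":
                for c in clean[i1:i2]:
                    counts[c][EPS] += 1
            elif op == "insert":
                for n in noisy[j1:j2]:
                    counts[EPS][n] += 1
    # row-normalise to obtain conditional probabilities
    return {c: {n: v / sum(row.values()) for n, v in row.items()}
            for c, row in counts.items()}
```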
"For every input sentence x, we independently perturb each token $x_i = (c_1, \ldots, c_K)$, where K is the length of $x_i$, with the following procedure (Figure 3): (1) we insert the $\varepsilon$ symbol before the first and after every character of $x_i$ to get an extended token $x_i' = (\varepsilon, c_1, \varepsilon, \ldots, \varepsilon, c_K, \varepsilon)$.", "(2) For every character $c_k'$ of $x_i'$, we sample a replacement character $\tilde{c}_k'$ from the corresponding probability distribution $P(\tilde{c}_k' \mid c_k')$, which can be obtained by taking the row of the character confusion matrix that corresponds to $c_k'$.", "As a result, we get a noisy version of the extended input token, $\tilde{x}_i'$.", "(3) We remove all $\varepsilon$ symbols from $\tilde{x}_i'$ and collapse the remaining characters to obtain a noisy token $\tilde{x}_i$.", "[Figure 3: Illustration of our noise induction procedure; e.g., 'token' becomes 'toiken' (insertion), 'oken' (deletion), or 'tokem' (substitution).]", "We can improve robustness to noise at test time by introducing various forms of artificial noise during training.", "We distinguish between regularization methods like dropout (Srivastava et al., 2014) and task-specific data augmentation that transforms the data to resemble noisy input.", "The latter technique was successfully applied in other domains, including computer vision (Krizhevsky et al., 2012) and speech recognition (Sperber et al., 2017).", "During training, we artificially induce noise into the original sentences using the algorithm described in §3.2 and train our models using a mixture of clean and noisy sentences.", "Let $\mathcal{L}_0(x, y; \theta)$ be the standard training objective for the sequence labeling problem, where x is the input sentence, y is the corresponding ground-truth sequence of labels, and $\theta$ represents the parameters of $F(x)$.", "We define our composite loss function as follows: $\mathcal{L}_{augm}(x, \tilde{x}, y; \theta) = \mathcal{L}_0(x, y; \theta) + \alpha \mathcal{L}_0(\tilde{x}, y; \theta)$, where $\tilde{x}$ is the perturbed sentence and $\alpha$ is the weight of the noisy loss component.", "$\mathcal{L}_{augm}$ is a weighted sum of standard losses calculated using clean and noisy sentences.", "Intuitively, a model that optimizes $\mathcal{L}_{augm}$ should be more robust to imperfect input data while retaining the ability to perform well on clean input.", "Figure 4a presents a schematic visualization of our data augmentation approach.",
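The three-step induction procedure can be sketched as follows (ours; `gamma` is a row-normalised confusion matrix as above, whose rows are assumed to include the identity and no-insertion probability mass):

```python
# Sketch (ours) of the noise induction procedure: interleave epsilon
# slots, sample every slot from its confusion-matrix row, then drop
# the remaining epsilon symbols.
import random

EPS = "\u03b5"

def induce_noise(token: str, gamma: dict) -> str:
    slots = [EPS]
    for c in token:               # step (1): x'_i = (eps, c_1, eps, ...)
        slots += [c, EPS]
    out = []
    for s in slots:               # step (2): sample replacements
        row = gamma.get(s)
        if row:
            chars, probs = zip(*row.items())
            s = random.choices(chars, probs)[0]
        if s != EPS:              # step (3): drop eps and collapse
            out.append(s)
    return "".join(out)
```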
"Zheng et al. (2016) pointed out the output instability issues of deep neural networks.", "They proposed a training method to stabilize deep networks against small input perturbations and applied it to the tasks of near-duplicate image detection, similar-image ranking, and image classification.", "Inspired by their idea, we adapt the stability training method to the natural language scenario.", "Our goal is to stabilize the outputs $y(x)$ of a sequence labeling system against small input perturbations, which can be thought of as flattening $y(x)$ in a close neighborhood of any input sentence x.", "When a perturbed copy $\tilde{x}$ is close to x, then $y(\tilde{x})$ should also be close to $y(x)$.", "Given the standard training objective $\mathcal{L}_0(x, y; \theta)$, the original input sentence x, its perturbed copy $\tilde{x}$ and the sequence of ground-truth labels y, we can define the stability training objective $\mathcal{L}_{stabil}$ as follows: $\mathcal{L}_{stabil}(x, \tilde{x}, y; \theta) = \mathcal{L}_0(x, y; \theta) + \alpha \mathcal{L}_{sim}(x, \tilde{x}; \theta)$, with $\mathcal{L}_{sim}(x, \tilde{x}; \theta) = \mathcal{D}(y(x), y(\tilde{x}))$, where $\mathcal{L}_{sim}$ encourages the similarity of the model outputs for both x and $\tilde{x}$, $\mathcal{D}$ is a task-specific feature distance measure, and $\alpha$ balances the strength of the similarity objective.", "Let $R(x)$ and $Q(\tilde{x})$ be the discrete probability distributions obtained by calculating the softmax function over the logits for x and $\tilde{x}$, respectively: $R(x) = P(y \mid x) = \mathrm{softmax}(l(x))$ and $Q(\tilde{x}) = P(y \mid \tilde{x}) = \mathrm{softmax}(l(\tilde{x}))$.", "We model $\mathcal{D}$ as the Kullback-Leibler divergence ($D_{KL}$), which measures the correspondence between the likelihoods of the original and the perturbed input: $\mathcal{L}_{sim}(x, \tilde{x}; \theta) = \sum_i D_{KL}\big(R(x_i) \,\|\, Q(\tilde{x}_i)\big)$, with $D_{KL}\big(R(x) \,\|\, Q(\tilde{x})\big) = \sum_j P(y_j \mid x) \log \frac{P(y_j \mid x)}{P(y_j \mid \tilde{x})}$, where i and j are the token and the class label indices, respectively.", "Figure 4b summarizes the main idea of our stability training method.", "A critical difference between data augmentation and the stability training method is that the latter does not use noisy samples for the original task, but only for the stability objective (both objectives could be combined and used together; however, our goal is to study their impact on robustness separately, and we leave further exploration to future work).", "Furthermore, both methods need perturbed copies of the input samples, which results in longer training time, but this could be ameliorated by fine-tuning an existing model for a few epochs (we did not explore this setting in this paper, leaving such optimization to future work).",
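A sketch (ours, in PyTorch) of both auxiliary objectives at the level of one sentence; logits have shape (tokens, classes), a token-wise cross-entropy is used as a stand-in for the sequence loss $\mathcal{L}_0$ (the paper's CRF objective would slot in here), and the KL term follows the token-wise sum given above.

```python
# Sketch (ours): data augmentation and stability objectives.
import torch.nn.functional as F

def augm_loss(logits_clean, logits_noisy, gold, alpha=1.0):
    # L_augm = L_0(x, y) + alpha * L_0(x~, y)
    return (F.cross_entropy(logits_clean, gold)
            + alpha * F.cross_entropy(logits_noisy, gold))

def stabil_loss(logits_clean, logits_noisy, gold, alpha=1.0):
    # L_stabil = L_0(x, y) + alpha * sum_i D_KL(R(x_i) || Q(x~_i))
    r = F.log_softmax(logits_clean, dim=-1)   # R(x), as log-probs
    q = F.log_softmax(logits_noisy, dim=-1)   # Q(x~), as log-probs
    l_sim = F.kl_div(q, r, reduction="sum", log_target=True)
    return F.cross_entropy(logits_clean, gold) + alpha * l_sim
```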
"Model architecture: we used a BiLSTM-CRF architecture (Huang et al., 2015) with a single Bidirectional Long Short-Term Memory (BiLSTM) layer and 256 hidden units in both directions for f(x) in all experiments.", "We considered four different text representations e(x), which were used to achieve state-of-the-art results on the studied data sets and should also be able to handle misspelled text and out-of-vocabulary (OOV) tokens: FLAIR (Akbik et al., 2018) learns a Bidirectional Language Model (BiLM) using an LSTM network to represent any sequence of characters.", "We used the settings recommended by the authors and combined FLAIR with GloVe (Pennington et al., 2014; FLAIR + GloVe) for English and with Wikipedia FastText embeddings (Bojanowski et al., 2017; FLAIR + Wiki) for German.", "BERT (Devlin et al., 2019) employs a Transformer encoder to learn a BiLM from large unlabeled text corpora and sub-word units to represent textual tokens.", "We use the BERT-base model in our experiments.", "ELMo (Peters et al., 2018) utilizes a linear combination of hidden state vectors derived from a BiLSTM word language model trained on a large text corpus.", "GloVe/Wiki + Char is a combination of pre-trained word embeddings (GloVe for English and Wikipedia FastText for German) and randomly initialized character embeddings (Lample et al., 2016).", "Training: we trained the sequence labeling model f(x) and the final CRF decoding layer on top of the pre-trained embedding vectors e(x), which were fixed during training, except for the character embeddings (Figure 2).", "We used a mixture of the original data and its perturbed copies generated from the synthetic noise distribution (§3.1) with our noise induction procedure (§3.2).", "We kept most of the hyper-parameters consistent with Akbik et al. (2018) (we list the detailed hyper-parameters in the appendix).", "We trained our models for at most 100 epochs and used early stopping based on development set performance, measured as the average F1 score on clean and noisy samples.", "Furthermore, we used the development sets of each benchmark data set for validation only and not for training.", "Performance measures: we measured the entity-level micro-average F1 score on the test set to compare the results of different models.", "We evaluated on both the original and the perturbed data using various natural error distributions.", "We induced OCR errors based on the character confusion matrix (§3.2), which was gathered on a large document corpus (Namysl and Konya, 2019) using the Tesseract OCR engine (Smith, 2007).", "Moreover, we employed two sets of misspellings released by Belinkov and Bisk (2018) and Piktus et al. (2019).", "Following the authors, we replaced every original token with the corresponding misspelled variant, sampling uniformly among the available replacement candidates.", "We present the estimated error rates of text produced with these noise induction procedures in Table 5 in the appendix.", "As evaluation with noisy data leads to some variance in the final scores, we repeated all experiments five times and report mean and standard deviation.", "Implementation: we implemented our models using the FLAIR framework (Akbik et al., 2019; we used FLAIR v0.4.2).", "We extended their sequence labeling model by integrating our auxiliary training objectives (§3.3, §3.4).", "Nonetheless, our approach is universal and can be implemented in any other sequence labeling framework.", "To validate our approach, we trained the baseline models with and without our auxiliary loss objectives (§3.3, §3.4; we also experimented with a pre-processing step that used a spell-checking module, but it did not provide any benefits and even decreased accuracy on the original data, so we did not consider it a viable solution for this problem).", "We used the CoNLL 2003 (Tjong Kim Sang and De Meulder, 2003) and the GermEval 2014 (Benikova et al., 2014) data sets in this setup (we present data set statistics and sample outputs from our system in the appendix).", "The baselines utilized GloVe vectors coupled with FLAIR and character embeddings (FLAIR + GloVe, GloVe + Char), BERT, and ELMo embeddings for English.", "For German, we employed Wikipedia FastText vectors paired with FLAIR and character embeddings (FLAIR + Wiki, Wiki + Char); this choice was motivated by the availability of pre-trained embedding models in the FLAIR framework.", "We used a label-preserving training setup ($\alpha = 1.0$, $\eta_{train} = 10\%$).",
"Table 1 presents the results of this experiment (we did not replicate the exact results from the original papers because we did not use the development sets for training, and our approach is feature-based, as we did not fine-tune embeddings on the target task).", "We found that our auxiliary training objectives boosted accuracy on noisy input data for all baseline models and both languages.", "At the same time, they preserved accuracy on the original input.", "The data augmentation objective seemed to perform slightly better than the stability objective.", "However, the chosen hyper-parameter values were rather arbitrary, as our goal was to prove the utility and the flexibility of both objectives.", "We evaluated the impact of our hyper-parameters on sequence labeling accuracy using the English CoNLL 2003 data set.", "We trained multiple models with different amounts of noise $\eta_{train}$ and different weighting factors $\alpha$.", "We chose the FLAIR + GloVe model as our baseline because it achieved the best results in the preliminary analysis (§4.2) and showed good performance, which enabled us to perform extensive experiments.", "Figure 5 summarizes the results of the sensitivity experiment.", "The models trained with our auxiliary objectives mostly preserved or even improved accuracy on the original data compared to the baseline model ($\alpha = 0$).", "Moreover, they significantly outperformed the baseline on data perturbed with natural noise.", "The best accuracy was achieved for $\eta_{train}$ from 10 to 30%, which roughly corresponds to the label-preserving noise range.", "Similar to Heigold et al. (2018) and Cheng et al. (2019), we conclude that a non-zero noise level induced during training ($\eta_{train} > 0$) always yields improvements on noisy input data when compared with models trained exclusively on clean data.",
"The best choice of $\alpha$ was in the range from 0.5 to 2.0; $\alpha = 5.0$ exhibited lower performance on the original data.", "Moreover, the models trained on the real error distribution demonstrated at most slightly better performance, which indicates that the exact noise distribution does not necessarily have to be known at training time (nevertheless, the aspect of mimicking an empirical noise distribution requires more thorough analysis, which we leave to future work).", "To quantify the improvements provided by our approach, we measured sequence labeling accuracy on subsets of data with different levels of perturbation, i.e., we divided input tokens based on the edit distance to their clean counterparts.", "Moreover, we partitioned the data by named entity class to assess the impact of noise on the recognition of different entity types.", "For this experiment, we used both the test and the development parts of the English CoNLL 2003 data set and induced OCR errors with our noising procedure.", "Figure 6 presents the results for the baseline and the proposed methods.", "It can be seen that our approach achieved significant error reduction across all perturbation levels and all entity types.", "Moreover, by narrowing the analysis down to perturbed tokens, we discovered that the baseline model was particularly sensitive to noisy tokens from the LOC and MISC categories.", "Our approach considerably reduced this negative effect.", "Furthermore, as stability training worked slightly better on the LOC class and data augmentation was more accurate on the ORG type, we argue that both methods could be combined to further enhance overall sequence labeling accuracy.", "Note that even if a particular token was not perturbed, its context could be noisy, which would explain the fact that our approach provided improvements even for tokens without perturbations.", "Improving robustness has been receiving increasing attention in the NLP community.", "The most relevant research was conducted in the NMT domain.", "Noise-additive data augmentation: a natural strategy to improve robustness to noise is to augment the training data with samples perturbed using a similar noise model.", "Heigold et al. (2018) demonstrated that noisy input substantially degrades the accuracy of models trained on clean data.", "They used word scrambling, as well as character flips and swaps, as their noise model, and achieved the best results under matched training and test noise conditions.", "Belinkov and Bisk (2018) reported significant degradation in the performance of NMT systems on noisy input.", "They built a look-up table of possible lexical replacements from Wikipedia edit histories and used it as a natural source of noise.", "Robustness to noise was only achieved by training with the same distribution, at the expense of performance degradation on other types of noise.", "In contrast, our method performed well on natural noise at test time by using a simplified synthetic noise model during training.", "Karpukhin et al. (2019) pointed out that existing NMT approaches are very sensitive to spelling mistakes and proposed to augment training samples with random character deletions, insertions, substitutions, and swaps.",
"They showed improved robustness to natural noise, represented by frequent corrections in Wikipedia edit logs, without diminishing performance on the original data.", "However, not every word in the vocabulary has a corresponding misspelling.", "Therefore, even when noise is applied at the maximum rate, only a subset of tokens is perturbed (20-50%, depending on the language).", "In contrast, we used a confusion matrix, which is better suited to modeling the statistical error distribution and can be applied to all tokens, not only those present in the corresponding look-up tables.", "Robust representations: another method to improve robustness is to design a representation that is less sensitive to noisy input.", "Zheng et al. (2016) presented a general method to stabilize model predictions against small input distortions.", "Cheng et al. (2018) continued their work and developed an adversarial stability training method for NMT by adding a discriminator term to the objective function.", "They combined data augmentation and stability objectives, while we evaluated both methods separately and provided evaluation results on natural noise distributions.", "Piktus et al. (2019) learned a representation that embeds misspelled words close to their correct variants.", "Their Misspelling Oblivious Embeddings (MOE) model jointly optimizes two loss functions, each of which iterates over a separate data set (a corpus of text and a set of misspelling/correction pairs) during training.", "In contrast, our method does not depend on any additional resources and uses a simplified error distribution during training.", "Adversarial learning: adversarial attacks seek to mislead neural models by feeding them adversarial examples (Szegedy et al., 2014).", "In a white-box attack scenario (Goodfellow et al., 2015; Ebrahimi et al., 2018), we assume that the attacker has access to the model parameters, in contrast to the black-box scenario (Alzantot et al., 2018; Gao et al., 2018), where the attacker can only sample model predictions on given examples.", "Adversarial training (Miyato et al., 2017; Yasunaga et al., 2018), on the other hand, aims to improve the robustness of neural models by utilizing adversarial examples during training.", "The impact of noisy input data: in the context of ASR, Parada et al. (2011) observed that named entities are often OOV tokens and therefore cause more recognition errors.",
"In the document processing field, Alex and Burns (2014) studied NER performed on several digitized historical text collections and showed that OCR errors have a significant impact on the accuracy of the downstream task.", "Namysl and Konya (2019) examined the efficiency of modern OCR engines and showed that, although OCR technology is more advanced than several years ago, when many historical archives were digitized (Kim and Cassidy, 2015; Neudecker, 2016), the most widely used engines still have difficulties with non-standard or lower-quality input.", "Spelling and post-OCR correction: a natural method of handling erroneous text is to correct it before feeding it to the downstream task.", "The most popular post-correction techniques include correction-candidate ranking (Fivez et al., 2017; Flor et al., 2019), noisy channel modeling (Brill and Moore, 2000; Duan and Hsu, 2011), voting (Wemhoener et al., 2013), sequence-to-sequence models (Afli et al., 2016; Schmaltz et al., 2017) and hybrid systems (Schulz and Kuhn, 2017).", "In this paper, we have taken a different approach and attempted to make our models robust without relying on prior error correction, which, in the case of OCR errors, is still far from being solved (Chiron et al., 2017; Rigaud et al., 2019).", "In this paper, we investigated the difference in accuracy between sequence labeling performed on clean and on noisy text (§2.3).", "We formulated the noisy sequence labeling problem (§2.2) and introduced a model that can be used to estimate the real noise distribution (§3.1).", "We developed a noise induction procedure that simulates real noisy input (§3.2).", "We proposed two noise-aware training methods that boost sequence labeling accuracy on perturbed text:", "(i) our data augmentation approach uses a mixture of clean and noisy examples during training to make the model resistant to erroneous input (§3.3);", "(ii) our stability training algorithm encourages output similarity for the original and the perturbed input, which helps the model to build a noise-invariant latent representation (§3.4).", "Our experiments confirmed that NAT consistently improved the efficiency of popular sequence labeling models on data perturbed with different error distributions, while preserving accuracy on the original input (§4).", "Moreover, we avoided expensive re-training of embeddings on noisy data sources by employing existing text representations.", "We conclude that NAT makes existing models applicable beyond idealized scenarios.", "It may support an automatic correction method that uses recognized entity types to narrow the list of feasible correction candidates.", "Another application is data anonymization (Mamede et al., 2016).", "Future work will involve improvements to the proposed noise model to study the importance of fidelity to real-world error patterns.", "Moreover, we plan to evaluate NAT on other real noise distributions (e.g., from ASR) and on other sequence labeling tasks to further support our claims.", "We would like to thank the reviewers for the time they invested in evaluating our paper and for their insightful remarks and valuable suggestions." ]
[ "abstain", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "objective", "objective", "method", "method", "abstain", "other", "result", "result", "result", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "other", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "objective", "other", "other", "objective", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "method", "objective", "objective", "method", "method", "result", "method", "abstain", "abstain", "abstain", "abstain", "method", "other" ]
[ "This work explores techniques to predict Part-of-Speech (PoS) tags from neural signals measured at millisecond resolution with electroencephalography (EEG) during text reading.", "We show that information about word length, frequency and word class is encoded by the brain at different post-stimulus latencies.", "We then demonstrate that pretraining on averaged EEG data and data augmentation techniques boost PoS single-trial EEG decoding accuracy for Transformers (but not linear SVMs).", "Applying optimised temporally-resolved decoding techniques we show that Transformers outperform linear SVMs on PoS tagging of unigram and bigram data more strongly when information requires integration across longer time windows.", "Electro-/Magnetoencephalography (EEG/MEG), which measures neural activity at millisecond resolution, is a key neuroscientific method to assess how neural representations unfold dynamically in language processing.", "Early event related potential (ERP) studies that rely on averaging EEG activity across multiple trials have shown that EEG signal magnitude and topography depend on word length, frequency and open vs. closed class.", "Word length effects arose in EEG at about 150 ms, frequency effects at 200 ms and word class effects from 400-700 ms (Osterhout et al., 1997; Brown et al., 1999; Neville et al., 1992; Mnte et al., 1998; Segalowitz and Lane, 2000; Mnte et al., 2001; Dufau et al., 2015).", "Recent studies were able to predict these and other (e.g. semantic) aspects based on single trial multi-channel EEG/MEG activity (Ling et al., 2019; Chan et al., 2011; King et al., 2020).", "Importantly, the aim of cognitive neuroscience studies is to dissociate when (i.e. latency) and where (i.e. brain region) specific linguistic information is explicitly encoded in neural activity.", "Neuroscience studies therefore typically use linear decoders and try to disentangle neural activity for linguistic and non-linguistic dimensions that covary in natural language statistics (e.g. word class vs. 
length).", "By contrast, engineering applications mainly aim to maximise performance accuracy, utilising all available information and more powerful nonlinear classifiers.", "Intriguingly, recent studies have shown that adding human eye tracking data (Barrett et al., 2016) or morphosyntactic information extracted from human functional magnetic resonance imaging (fMRI) signals during sentence reading, can substantially improve PoS induction (Bingel et al., 2016).", "Yet, morphosynactic information obtained from fMRI is limited, because fMRI measures only the slow changes in blood oxygenation, peaking 5-6 s after stimulus onset, rather than the rapid neural activity during language processing.", "Contributions.", "This interdisciplinary paper decodes PoS tags from EEG signals with linear SVMs and Transformers, pursuing several aims relevant for neuroscience and/or engineering.", "Neuroscience-focused Section 3 uses linear SVMs to define the distinct neural representations of word length, frequency and class based on a new EEG data set, in which a single subject reads an extensive syntactically annotated corpus.", "To dissociate these linguistic and non-linguistic aspects, typically correlated in natural language statistics, Section 3 matched the stimulus distributions for each classification task with respect to the confounding dimensions of no interest.", "Consistent with previous reports, we show that word length, frequency and class can be decoded at different post-stimulus latencies based on single trial and trial-averaged data.", "This replication part serves to validate the new EEG data set.", "Methods-focused Section 4 moves beyond open 2201 vs. closed word class decoding that were the focus of previous MEG/EEG studies and decodes 6 PoS tags from EEG activity with linear SVMs and Transformers.", "We show that pretraining on trial-averaged data with subsequent fine-tuning on single-trial data, alongside data augmentation, boosts PoS decoding accuracy from single-trial EEG data selectively for Transformers (but not for linear SVMs).", "Engineering-focused Section 5 finally uses linear SVMs and Transformers together with pretraining and augmentation techniques from Section 4 to assess how PoS information about unigrams and bigrams becomes progressively available in EEG activity across post-stimulus time.", "Comparing EEG decoding from sliding and incremental time windows suggests that Transformers outperform linear SVMs particularly when information needs being integrated across longer time windows.", "Our results raise the possibility of combining PoS-tagging based on EEG decoding with corpora and dependency tree annotation to obtain more reliable morphosyntactic information for low-resource languages.", "Our experiments used a new corpus annotated with EEG data, previously acquired at the University of Birmingham following ethical approval and par-ticipant's informed consent.", "The EEG annotated corpus is available 1 under a public license (CC BY-SA 4.0).", "Data set .", "The stimulus set includes 4,479 sentences (74,953 tokens) selected from the English Web Treebank (Bies et al., 2012), covering the genres weblogs, newsgroups, reviews and Yahoo Answers .", "The mean sentence length is 16.7 words (standard deviation: 12.23).", "75 sessions of EEG data are included over 20 days, each lasting 20-25 minutes, from a single subject who read approximately five and a half iterations of the stimulus set (i.e. 
"Three sessions were excluded because of data corruption.", "The EEG data for separate text passages were divided into training, dev and test sets to avoid any temporal overlap.", "Further, the dev and test sets were matched for length of text passages, recording dates and sentence position (initial, mid & end).", "[Figure 1: Example trial and EEG recording.]", "The training set contains 83% of the data (19,156 sentences; 317,753 tokens), the dev set 8.5% (2,704 sentences; 45,822 tokens) and the test set 8.5% (2,463 sentences; 40,630 tokens).", "In a Rapid Serial Visual Presentation (RSVP) paradigm, sentences were presented one word at a time, on average every 240 ms, in a white monospace font on a grey background approximately in the centre of the screen, at the optimal viewing position (Rayner et al., 2016).", "Each word subtended a horizontal angle of 0.76° to the left and 11.81° to the right of the centre.", "Sentences were separated by 500 ms of a white central fixation cross (see Figure 1).", "On approximately 20% of the sentences in each session, the participant was prompted to verbalise the previous sentence back to the experimenter.", "An accuracy score of 93% across all sessions confirmed that the participant successfully attended to the sentences.", "Stimuli were presented using PsychoPy (Peirce et al., 2019) on an LCD monitor with a resolution of 1920x1080 pixels and a 60 Hz refresh rate.", "The subject's head was stabilised with a chin-rest.", "Continuous EEG signals were recorded in 'reference-free' mode at a sampling rate of 1 kHz via BrainVision's PyCorder software, using 64 Ag/AgCl active actiCAP slim electrodes arranged in a 10-20 layout (actiCAP, Brain Products GmbH); electrode impedances were kept below 15 kΩ.", "Data were preprocessed using MNE-Python (Gramfort et al., 2013).", "Individual EEG sessions were band-pass filtered between 1-40 Hz, down-sampled to 250 Hz and re-referenced to the average reference.", "Noisy channels were identified by visual inspection and interpolated.", "Non-neuronal components (e.g. ocular, muscular, electrical) were removed via Independent Component Analysis (ICA) individually for each recording session (an average of 4 components were removed per EEG session).", "EEG signals were extracted from -100 to 700 ms relative to word onset.", "For baseline correction, we subtracted the channel-wise mean from -100 ms to 0 ms from the evoked post-stimulus EEG response ([0, 700] ms, separately for each word; see Figure 1).", "EEG data were spatially multivariate-noise normalised using the noise covariance matrix estimated separately for each target class (Guggenmos et al., 2018).", "Each EEG trial was annotated with the gold part-of-speech tags of the current and subsequent words, their word lengths, and Zipf-logarithmic frequency scores from the Python package WordFreq (Speer et al., 2018).",
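A sketch (ours) of this preprocessing chain with MNE-Python; the file name and the excluded ICA component indices are placeholders, not the released corpus layout.

```python
# Sketch (ours) of the preprocessing pipeline with MNE-Python.
import mne

raw = mne.io.read_raw_brainvision("session01.vhdr", preload=True)  # placeholder file
raw.filter(l_freq=1.0, h_freq=40.0)        # 1-40 Hz band-pass
raw.resample(250)                          # down-sample to 250 Hz
raw.set_eeg_reference("average")           # average reference
raw.interpolate_bads()                     # noisy channels marked beforehand

ica = mne.preprocessing.ICA(random_state=0).fit(raw)
ica.exclude = [0, 1]                       # placeholder indices (visual inspection)
ica.apply(raw)

events, _ = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, tmin=-0.1, tmax=0.7,
                    baseline=(-0.1, 0.0), preload=True)
```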
ocular, muscular, electrical) were removed via Independent Component Analysis (ICA) individually for each recording session (an average of 4 components were removed per EEG session).", "EEG signals were extracted from -100 to 700 ms relative to word onset.", "For baseline correction, we subtracted the channel-wise mean from -100 ms to 0 ms from the evoked post-stimulus EEG response ([0 700] ms separately for each word; see Figure 1).", "EEG data were spatially multivariate noise normalised using the noise covariance matrix estimated separately for each target class (Guggenmos et al., 2018).", "Each EEG trial was annotated with the gold part of speech tags of the current and subsequent words, their word lengths, and Zipf-logarithmic frequency scores from the Python package WordFreq (Speer et al., 2018).", "The decoding analyses used linear support vector machines (SVM) (Chang and Lin, 2011) and Transformers, which can capture complex interactions of EEG data across time points.", "All classifiers were trained on the EEG training data, assessed on the dev set and scored on the independent test set.", "Hyperparameters and early stopping were selected based on the dev set.", "We assessed linear SVMs and Transformers on the dev set using 10 different random seed points.", "We show mean classification accuracy with 68% confidence intervals (CI) over those 10 replications on the dev (Table 4, Figure 3) resp.", "test set (Figure 2, 4, 5).", "We compute statistics on test set classification responses from the model that scored the highest on the dev set (e.g. binomial or Wilcoxon signed rank tests).", "For linear SVM, we used an online learning implementation of SCIKIT-LEARN (Pedregosa et al., 2011; Zhang, 2004), based on LIBSVM (Chang and Lin, 2011), with hinge loss and Stochastic Gradient Descent (SGD) optimiser.", "Hyperparameters were set to default except for the SGD regularisation parameter that was increased to = 0 .", "75 , which provided better classification accuracy on the dev set.", "The parameter is inversely proportional to the C parameter in the standard SVM implementation.", "The online implementation also allowed us to select the best model using early stopping.", "The SVM was provided with EEG activity vectors as inputs, i.e. 1 x (EEG channels time points).", "For the Transformer (Vaswani et al., 2017), we conducted a model architecture and hyperparame-ter search (layers, learning rate, MLP dimensions, dropout rate, Encoder vs. Encoder-Decoder) on the dev set.", "The selected model was composed of four encoder-blocks and a final dense layer that projects the output of the last encoder-block onto the PoS tags via a softmax function.", "We used the Adam optimiser and early stopping.", "The implementation is based on the WMT example 2 of Google's novel ML frameworks Flax/Jax .", "Table 2 lists the selected hyperparameters.", "The Transformer received EEG channels x time points as inputs and provided a classification response for the entire time window.", "We performed decoding based on", "(i) EEG for single-trials (i.e. no averaging),", "(ii) EEG averaged across three and", "(iii) ten trials.", "Averaging EEG 2 https://github.com/google/flax/tree/master/examples/wmt 2203 signals across trials increases the signal to noise ratio of the 'samples' (Grootswagers et al., 2017; Guggenmos et al., 2018; Roy et al., 2019; Tuckute et al., 2019), but ignores true variability across EEG data from different words of the same category (Mnte et al., 2001).", "For training (resp. 
"For the training (resp. dev) set, we generated the same number of samples for the 3- and 10-trial averages as for the single-trial training (resp. dev) sets via bootstrapping.", "For the test set, we averaged data without replacement, so that examples could be entered as independent data points in statistical tests.", "Hence, the number of samples in the test set (but not in the training or dev sets) is smaller for the 3- and 10-trial averages than for single trials (Table 3).", "For comparison with previous research (Osterhout et al., 1997; Münte et al., 2001), we decoded word length, frequency and class with linear SVMs in a temporally-resolved fashion from 0 to 700 ms post-stimulus EEG, recorded during sentence reading.", "Data set: we decoded word class from EEG via binary classification between open-class words (i.e. NOUN, VERB, ADJ, and PROPN) vs. closed-class words (i.e. DET, ADP, AUX, PRON, SCONJ and CCONJ).", "Likewise, for decoding word frequency and length, words were assigned to two classes based on the median values in the data set (i.e. Zipf frequency > 5.91 = HIGH, else LOW; word length > 4 characters = LONG, else SHORT).", "EEG decoding analyses were performed for single trials and for averages across 3 and 10 trials.", "To dissociate the distinct contributions of word length, frequency and class, which are highly correlated in natural language statistics, we decoded each variable while controlling for the other two.", "For instance, when decoding open vs. closed-class words, we selected a subset of trials such that the joint distributions over the three confounding variables of word frequency (discretised to the nearest 0.25), length (number of characters) and sentence position (i.e. sentence initial, mid, end) were equated for the categories of open- and closed-class words.", "To minimise confounds arising from the preceding word in the sentence, we balanced the test set with respect to the open/closed-class status of the previous word.", "Similarly, we controlled the decoding of word frequency for word length, the analysis of word length for word frequency, and both analyses for open/closed class and sentence position.", "Table 3 gives the number of examples for each analysis across the training, dev and test sets.", "Table 3 (number of samples in the confound-controlled data set; columns: length / frequency / class): train 82,424 / 51,364 / 45,502; dev 12,402 / 7,590 / 5,670; test (single) 10,810 / 6,590 / 5,670; test (avg. 3) 3,603 / 2,196 / 1,890; test (avg. 10) 1,081 / 659 / 567.", "Methods: to temporally resolve how the brain encodes word length, frequency and class, we trained and tested linear SVMs on EEG signals separately for sliding windows of 64 ms (i.e. 16 time points) that shift in increments of 4 ms (i.e. one time sample).", "Figure 2 shows the mean accuracy values (averaged across 10 seed points) from the test set (centred on the last bin of each time window (Grootswagers et al., 2017)) with 68% CI.", "The classification responses for the test set from the model that performed best on the dev set were entered into a two-sided binomial test, separately for each time window.", "Solid lines in Figure 2 above the decoding accuracy time courses indicate time points that were significant at p < 0.05, False Discovery Rate (FDR) corrected for multiple comparisons (Rouam, 2013) across time (i.e. 160 tests).",
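A sketch (ours) of the temporally-resolved decoding loop; window and step sizes correspond to 64 ms and 4 ms at 250 Hz, and the classifier settings mirror the SGD/hinge-loss setup described above.

```python
# Sketch (ours): train/test a linear SVM (hinge loss, SGD) on each
# 64 ms (16-sample) window, shifted in 4 ms (1-sample) steps.
import numpy as np
from sklearn.linear_model import SGDClassifier

def sliding_window_decoding(X_tr, y_tr, X_te, y_te, win=16, step=1):
    """X_*: (trials, channels, times) arrays of epoched EEG."""
    accs = []
    for start in range(0, X_tr.shape[2] - win + 1, step):
        sl = slice(start, start + win)
        clf = SGDClassifier(loss="hinge", alpha=0.75)
        clf.fit(X_tr[:, :, sl].reshape(len(X_tr), -1), y_tr)
        accs.append(clf.score(X_te[:, :, sl].reshape(len(X_te), -1), y_te))
    return np.array(accs)
```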
160 tests).", "Results .", "Figure 2 (top rows of A, B, C) shows butterfly plots for the effects of word length, frequency and class across 64 electrodes.", "Our linear SVM decoding analysis replicates the temporal cascade of word length, frequency and class effects previously reported for EEG responses averaged across a large number of trials.", "The word length effect arises early at about 100 ms, previously associated with visual word processing in occipitotemporal cortices (Hauk and Pulvermller, 2004; Pul-vermller et al., 2009; Schuster et al., 2016).", "Word frequency influenced neural processing later from 200 ms onwards with a slight left-hemispheric predominance (Griffiths et al., 2012).", "The word class effect emerged in early and late time windows with the effect at about 550 ms in line with the wellknown P600 as an ERP indicator for syntactic processing (Osterhout and Holcomb, 1992; Hagoort et al., 1993; ter Keurs et al., 1999).", "Word length and frequency effects were stronger than the word class effect; see King et al. (2020).", "As expected, decoding accuracy increased when EEG signals were averaged across trials.", "Thus, carefully controlling each comparison of interest (e.g. word class) for the effects of no interest (e.g. word length and 2204 Figure 2: Butterfly plots for difference waves across all 64 channels (top rows, left), topographies (top rows, right) and time courses of decoding accuracy (bottom rows).", "frequency) enabled us to dissociate word length, frequency and class effects, despite their high correlation in natural language, thereby validating our new annotated EEG data set and analysis procedure.", "Moving beyond open/closed word class decoding, we assessed whether multi-class PoS decoding with SVMs and/or Transformers can be improved by", "(i) data augmentation, i.e. increasing the number of samples in the training set via bootstrapping and re-averaging (only applicable to 3 and 10 trial averages) and", "(ii) pretraining on trial-averages followed by fine-tuning of the model parameters on single-trial EEG data.", "Data set .", "We focused on decoding of 3 open class ( NOUN , VERB , PROPN ) and 3 closed class ( ADP , DET , PRON ) PoS tags.", "From the word class dataset that was controlled for length and frequency effects (section 3), we selected an equal number of examples per PoS class (i.e. train: 3,470, dev: 335, test: 335, i.e. in total 20k data points; word frequency of each word > median frequency Zipf value (5.91).", "The samples for dev and test sets were matched for distribution of word lengths.", "Methods .", "Using this 6-class unigram dataset, we assessed whether data augmentation via bootstrapping and re-averaging increases decoding performance for the 3 and 10 trial averages.", "We sampled 3 (resp. 10) individual trials with replacement from a particular PoS class and averaged them in 3 (resp. 10) trial averages.", "We thus trained SVMs and Transformers (over 20 random seeds) on 4 training set sizes: N size = {20 k , 100 k , 250 k , 500 k } 2 levels of trial averaging (3 vs. 10) resulting in 8 training sets.", "The baseline training (resp. dev) set included as many 3 (resp. 10) trial averages as the initial single-trial training (resp. 
dev) set.", "Results .", "Data augmentation systematically boosted the decoding accuracy of the Transformer but not of the SVM, most likely because of the former's greater model complexity.", "For both 3 and 10 trial averages the Transformer's decoding accuracy on the dev set increased from a training set size of 20k to 100k, peaking at 250k.", "[Figure 3: Dev set decoding accuracy (mean across seeds) for SVM (blue) and Transformer (red), separately for 3 and 10 trial averages and four levels of data augmentation: 20k (original), 100k, 250k and 500k.] It then declined for an even larger training size of 500k, potentially because continued bootstrapping", "progressively generates dependencies amongst training samples, thereby limiting their additional benefit beyond 250k.", "We formally assessed whether the Transformer that scored best on the dev set obtained better decoding accuracy for 250k than for the original 20k training set (n.b. we performed this statistical test on the test set, because the 3 and 10 trial averages within the dev set were not independent of one another as a result of bootstrapping).", "Indeed, for both 3 and 10 trial averages the Transformer's (but not the SVM's) decoding accuracy was significantly better for 250k than for the original 20k training set (p < 0.01; Wilcoxon signed-rank test).", "Methods .", "The ultimate goal is to decode PoS from single-trial EEG data (rather than trial averages).", "We therefore assessed whether pretraining the SVM and/or Transformer on trial averages with subsequent fine-tuning on single-trial data increases decoding accuracy.", "Pretraining may be beneficial because trial averages have a greater signal-to-noise ratio.", "Specifically, we assessed the impact of pretraining in a 2 × 2 factorial design manipulating", "i) pretraining scheme: training in three steps (10-3-1) from 10 trial averages to 3 trial averages to single-trials vs. training in two steps (3-1) from 3 trial averages to single-trials and", "ii) data augmentation: training only on the original 20k vs. the 250k data set, which obtained the highest dev set performance in section 4.1. [Table 4: Single-trial decoding accuracies (%, mean across seeds ± 68% CI) on the dev set, given as SVM / Transformer — single-trials: 31.93 (±0.62) / 37.15 (±0.32); 10-3-1: 31.74 (±0.51) / 38.5 (±0.28); 10-3-1 (250k): 31.89 (±0.67) / 39.17 (±0.33); 3-1: 32.03 (±0.52) / 37.83 (±0.24); 3-1 (250k): 31.79 (±0.58) / 39.41 (±0.41).]", "We trained both SVMs and Transformers on the 2 × 2 training conditions using 20 random seeds and report mean accuracy (± 68% CI, across those 20 seeds) in Table 4.", "Results .", "For the SVM, the 3-1 pretraining without data augmentation resulted in the highest dev set accuracy (32.03%), though accuracy was only slightly better than for direct single-trial training (31.93%).", "For the Transformer, the 3-1 pretraining scheme with 250k data augmentation obtained the highest single-trial decoding accuracy (39.41%) on the dev set.", "Indeed, a Wilcoxon signed-rank test (Pereira et al., 2009) confirmed that the best dev set Transformer performed significantly better on the test set after 3-1 pretraining than after direct single-trial training (p < 0.
01).", "Sections 3 and 4 were driven by the neuroscience goal of dissociating neural representations associated with PoS from confounding factors such as word length or frequency, which are typically correlated with PoS in natural language statistics.", "To control for these confounds, sections 3 and 4 generated data sets in which, e.g., PoS classes were equated with respect to word length.", "By contrast, Section 5 pursues the engineering goal of maximising PoS decoding accuracy.", "Here, correlations between word frequency, length and PoS class are no longer considered a confound but a useful source of information.", "Capitalising on the optimised 3-1 training scheme with 250k data augmentation from Section 4, Section 5 will assess whether PoS information about unigrams and bigrams can be decoded from EEG signals (without any confound controls).", "For both unigrams and bigrams, we will first investigate how PoS information becomes available dynamically across post-stimulus time by training SVMs and Transformers on EEG signals from 64 ms sliding time windows.", "Second, we will assess how SVMs and Transformers integrate PoS information across post-stimulus time by training them on EEG signals from incremental time windows.", "Data Set .", "We selected an equal number of examples for the 6 most frequent PoS tags (i.e. NOUN, VERB, ADP, DET, PRON & PROPN) from the data set matched for length of text passage, recording dates and sentence position, but not for word length or frequency.", "Each PoS class included the following number of samples — train: 28,265, dev: 2,948, test: 3,183 examples.", "Methods .", "We implemented the 3-1 pretraining with 250k data augmentation (section 4).", "For the sliding window analysis, we trained and tested SVMs and Transformers on EEG signals from 64 ms windows (i.e. 16 time points) that shifted in increments of 16 ms from 0 ms to 700 ms (i.e. resulting in a sequence of 41 decoding accuracies).", "For the incremental window analysis, we successively increased the initial [0, 16] ms time window (i.e. 4 samples) by 4 additional sampling points, resulting in a temporal sequence of 44 decoding accuracies.", "We computed decoding accuracies (mean across seeds, ± 68% CI) from the test set.", "Across time windows we compared the decoding accuracies on the test set of the best dev set SVM and Transformer using the Wilcoxon signed-rank test (reported at p < 0.05, FDR-corrected for multiple comparisons across time, i.e. 41 resp. 44 tests).", "Results .", "In the sliding window analysis, the decoding accuracies of SVMs and Transformers show two prominent peaks at 200 ms and 400 ms, suggesting that PoS decoding relies on several aspects of information encoded in the EEG.", "Based on our confound-controlled analysis (section 3), the first peak reflects word length and frequency information, while the second peak is more closely related to semantic and syntactic aspects of the word.", "The incremental window analysis showed an accuracy benefit of 4.5% for the Transformer starting in the very first [0, 16] ms time window.", "This difference in performance between the two models further widened, reaching a maximum advantage of 11.6% around 360 ms. 
Transformers thus benefit from integrating information about word frequency, length and class that arise at different post-stimulus latencies.", "[Figure 4: Unigram results: Test set decoding accuracies (mean across seeds ± 68% CI), aligned with the last bin of each time window.", "Top: Incremental window analysis.", "Middle: Sliding window analysis.", "Bottom: ERP for NOUN, i.e. EEG averaged across all examples from the training set.", "Vertical lines indicate word onset times.", "All time windows are significant at p < 0.", "05, FDR-corrected.]", "Moreover, because PoS classes of subsequent words are correlated in natural language, the Transformer may also benefit via self-attention from information about the next word that is presented and progressively encoded in EEG activity about 240 ms after the current (i.e. to-be-decoded) word.", "Statistical testing confirmed that the Transformer significantly outperformed the SVM for all sliding and incremental windows (Wilcoxon signed-rank test, FDR-corrected at p < 0.05).", "To define the contributions of successive words to EEG PoS decoding in naturalistic text reading, we designed a bigram data set that artificially removes the correlations between PoS classes of subsequent words, though we note that this does not fully remove correlations between specific word tokens.", "Data set .", "We selected 6 bigrams, in which each first word's PoS is combined equally often with two different PoS classes of the second word: NOUN-PRON, NOUN-VERB, PRON-NOUN, PRON-VERB, VERB-NOUN and VERB-PRON.", "As a result, the PoS class of word 1 is uninformative about the PoS class of word 2, and vice versa.", "Hence, prior to the presentation of word 2, the maximal possible decoding accuracy for a particular bigram is 50%.", "[Figure 5: Bigram results: Test set decoding accuracies (mean across seeds ± 68% CI), aligned with the last bin of each time window.] Each bigram class included the", "following number of samples — train: 3,470, dev: 322, test: 349 examples.", "Results . Similar to the unigram results, the sliding window analysis revealed two prominent accuracy peaks at 200 ms and 500 ms. Yet, the second peak was slightly later than in the unigram analysis, and it was higher than the first peak only in the bigram, but not the unigram, analysis.", "These differences between unigram and bigram decoding profiles arise because EEG at 500 ms encodes semantic and syntactic aspects of word 1 as well as crucial information about word 2 of the bigram.", "As shown in Figure 5, the Transformer significantly outperformed the SVM in the sliding and incremental window analyses.", "Yet, in contrast to the unigram results, the Transformer outperformed the SVM in the incremental analysis only for windows 0-208 ms and 0-336 ms. The Transformer's smaller benefit arises mainly because our balanced design radically reduced the number of examples and thereby the Transformer's generalisation ability.", "It also removed the natural correlations between subsequent words on which the Transformer may have additionally capitalised in the unigram data.", "The confound-controlled analysis dissociated word length, frequency and class effects in EEG.", "This replication of earlier ERP (Osterhout et al., 1997; Münte et al., 1998) and MEG decoding work (King et al., 2020) validates a new EEG data set for an extensive morphosyntactically gold-annotated corpus; cf. Bhattasali et al. 
(2020).", "Transformers successfully decoded 6 PoS tags from single-trial EEG with data augmentation and 3-1 pretraining (~40% accuracy), raising the possibility of boosting PoS induction with EEG-decoded PoS tags.", "While we acknowledge that our results are limited to EEG data from a single subject, given the spatial smoothness of EEG scalp topographies, we envision pretraining on EEG obtained from different participants.", "Further, because human brains generate similar neural signatures for word classes across different languages (cf. Yudes (2016); Münte et al. (2001); Hagoort et al. (2003)), pretraining PoS-EEG decoders on large morphosyntactically annotated EEG datasets for English followed by fine-tuning on a smaller annotated EEG data set for a low-resource language may enable successful generalisation to EEG obtained from reading non-annotated texts in this low-resource language.", "PoS induction jointly based on annotated texts and EEG signals could thus be transformative for corpus generation of low-resource languages.", "Combining neural signals measured at millisecond resolution with EEG and a linguistically annotated corpus, this work shows, to the best of our knowledge for the first time, that unigram and bigram PoS tags can be decoded successfully from single-trial EEG data.", "Temporally-resolved EEG decoding unravelled how information about linguistic and non-linguistic aspects evolved dynamically across time.", "Unsurprisingly, Transformers with self-attention mechanisms outperformed SVMs across all experiments.", "In particular, they benefited from integrating information across time, data augmentation and pretraining methods.", "Our work paves the way for future applications that incorporate human brain signals in traditional NLP methods." ]
[ "objective", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other", "abstain", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain" ]
[ "Aspect Sentiment Triplet Extraction (ASTE) is an emerging sentiment analysis task.", "Most of the existing studies focus on devising a new tagging scheme that enables the model to extract the sentiment triplets in an end-to-end fashion.", "However, these methods ignore the relations between words for ASTE task.", "In this paper, we propose an Enhanced Multi-Channel Graph Convolutional Network model (EMC-GCN) to fully utilize the relations between words.", "Specifically, we first define ten types of relations for ASTE task, and then adopt a biaffine attention module to embed these relations as an adjacent tensor between words in a sentence.", "After that, our EMC-GCN transforms the sentence into a multi-channel graph by treating words and the relation adjacent tensor as nodes and edges, respectively.", "Thus, relation-aware node representations can be learnt.", "Furthermore, we consider diverse linguistic features to enhance our EMC-GCN model.", "Finally, we design an effective refining strategy on EMC-GCN for word-pair representation refinement, which considers the implicit results of aspect and opinion extraction when determining whether word pairs match or not.", "Extensive experimental results on the benchmark datasets demonstrate that the effectiveness and robustness of our proposed model, which outperforms state-of-the-art methods significantly.", "1 1 Introduction Aspect Sentiment Triplet Extraction (ASTE) is a new variant of Aspect-based Sentiment Analysis (ABSA).", "The ASTE task aims to extract aspect sentiment triplets from a sentence, and each triplet contains three elements, namely aspect term, opinion term and their associated sentiment.", "In Figure 1, an example illustrates the definition of ASTE.", "To extract the triplets, previous studies have developed three types of approaches.", "Pipeline approaches (Peng et al., 2020) independently extract elements of the triplet.", "However, such techniques ignore the interaction between them, and potentially lead to error propagation and extra costs.", "To utilize the associations among the multiple subtasks, Mao et al. (2021) and Chen et al. 
(2021a) formulate the ASTE task as a multi-turn machine reading comprehension (MRC) problem and design a model based on BERT to jointly train multiple subtasks.", "Meanwhile, some efforts devote to extracting the triplets in an end-to-end framework (Xu et al., 2020; Wu et al., 2020a; Zhang et al., 2020; Chen et al., 2021b; Yan et al., 2021), which is constructed mainly by designing new tagging scheme.", "Although previous works have achieved significant fruits, there exists still several challenges.", "Here, two questions arise naturally for ASTE task by our observations.", "1) How to utilize various relations between words to help ASTE task?", "Take Figure 1 as an example; for word pair ( gourmet, food ), gourmet and food belong to the same aspect term gourmet food .", "Likewise, for word pair ( food, delicious ), food is an opinion target of delicious and is endowed with a positive sentiment polarity.", "Therefore, to 2974 effectively extract the aspect term gourmet food , we expect that gourmet can obtain the information of food and vice versa.", "To judge the sentiment polarity of the aspect term, information of the opinion term delicious should be delivered to gourmet food .", "In short, we need to learn task-dependent word representations based on the relations between words.", "2) How to utilize the linguistic features to help ASTE task?", "First, we observe that aspect terms gourmet food and ser-vice are nouns, while opinion terms delicious and poor are adjectives.", "Thus, the word pair composed of a noun and an adjective tend to form aspect-opinion pair.", "Second, from the syntactic dependency tree in Figure 1, different dependency types exist in word pairs.", "For instance, gourmet and food comprise a compound noun because the dependency type between them is compound , while food is the nominal subject of delicious due to the type nsubj .", "Thus, these dependency types can help not only the extraction of aspect and opinion terms but also their matching 2 .", "In addition, we consider the tree-based and relative position distances which describe the relevance of two words.", "In this paper, we propose a novel architecture, Enhanced Multi-Channel Graph Convolutional Network model (EMC-GCN), to answer the aforementioned questions.", "Firstly , we utilize a biaffine attention module to model the relation probability distribution between words in a sentence and use a vector to represent it.", "Each dimension in the vector corresponds to a certain relation type.", "To this end, we can derive a relation adjacency tensor from a sentence.", "Furthermore, our EMC-GCN transforms the sentence to a multi-channel graph by treating words and the relation adjacency tensor as nodes and edges, respectively.", "In order to learn precise relation between words, we impose relation constraint on the relation adjacency tensor.", "Secondly , to exploit linguistic features, including lexical and syntactic information, we obtain the part-of-speech combination, syntactic dependency type, tree-based distance and relative position distance of each word pair in the sentence.", "Similarly, we respectively transform these features into the edges for the multi-channel graphs to further enhance our model.", "Although part of linguistic fea-2 Matching of word-pair denotes that given w i and w j which respectively belong to an aspect term and an opinion term, if the aspect term and the opinion term form a triplet, then word-pair ( w i , w j ) matches.", "tures has been applied in other tasks (Kouloumpis et al., 
2011; Sun et al., 2019; Phan and Ogunbona, 2020; Li et al., 2021), to the best of our knowledge, they are rarely used in ASTE task.", "It is non-trivial to explore various linguistic features, adapt and apply them to ASTE in a novel way.", "Thirdly , inspired by the classifier chains method (Read et al., 2011) in multi-label classification task, we devise an effective refining strategy.", "Our strategy considers the implicit results of aspect and opinion extraction for word-pair representation refinement when judging whether word pairs match.", "Our contributions are highlighted as follows: 1) We propose a novel EMC-GCN model for ASTE task.", "EMC-GCN exploits the multi-channel graph to encode relations between words.", "Convolution function over the multi-channel graph is applied to learn relation-aware node representations.", "2) We propose a novel way to fully develop linguistic features to enhance our GCN-based model, including the part-of-speech combination, syntactic dependency type, tree-based distance and relative position distance of each word pair in a sentence.", "3) We propose an effective refining strategy for refined word-pair representation.", "It considers the implicit results of aspect and opinion extraction when detecting if word pairs match.", "4) We conduct extensive experiments on benchmark datasets.", "The experimental results show the effectiveness of our EMC-GCN model.", "Traditional sentiment analysis tasks are sentence-level (Yang and Cardie, 2014; Severyn and Mos-chitti, 2015) or document-level (Dou, 2017; Lyu et al., 2020) oriented.", "In contrast, Aspect-based Sentiment Analysis (ABSA) is an aspect or entity oriented fine-grained sentiment analysis task.", "The most three basic subtasks are Aspect Term Extraction (ATE) (Hu and Liu, 2004; Yin et al., 2016; Xu et al., 2018; Ma et al., 2019; Chen and Qian, 2020; Wei et al., 2020), Aspect Sentiment Classification (ASC) (Tang et al., 2016; Ma et al., 2017; Li et al., 2018; Zhang et al., 2019; Wang et al., 2020; Li et al., 2021) and Opinion Term Extraction (OTE) (Yang and Cardie, 2012, 2013; Fan et al., 2019; Wu et al., 2020b).", "The studies solve these tasks separately and ignore the dependency between these subtasks.", "Therefore, some efforts devoted to couple the two subtasks and proposed effective models to jointly extract aspect-based pairs.", "This kind of work mainly has two tasks: Aspect and Opinion Term Co-Extraction (AOTE) (Wang et al., 2016, 2017; Dai and Song, 2019; Wang and Pan, 2019; Chen et al., 2020b; Wu et al., 2020a) and Aspect-Sentiment Pair Extraction (ASPE) (Ma et al., 2018; Li et al., 2019a,b; He et al., 2019).", "Most recently, Peng et al. (2020) first proposed the ASTE task and developed a two-stage pipeline framework to couple together aspect extraction, aspect sentiment classification and opinion extraction.", "To further explore this task, (Mao et al., 2021; Chen et al., 2021a) transformed ASTE to a machine reading comprehension problem and utilized the shared BERT encoder to obatin the triplets after multiple stages decoding.", "Another line of research focuses on designing a new tagging scheme that makes the model can extract the triplets in an end-to-end fashion (Xu et al., 2020; Wu et al., 2020a; Zhang et al., 2020; Xu et al., 2021; Yan et al., 2021).", "For instance, Xu et al. (2020) proposed a position-aware tagging scheme, which solves the limitations related to existing works by enriching the expressiveness of labels.", "Wu et al. 
(2020a) proposed a grid tagging scheme, similar to table filling (Miwa and Sasaki, 2014; Gupta et al., 2016), to solve this task in an end-to-end manner.", "Yan et al. (2021) converted the ASTE task into a generative formulation.", "However, these approaches generally ignore the relations between words and the linguistic features that effectively promote triplet extraction.", "In this section, we elaborate on the details of EMC-GCN.", "The overview of the EMC-GCN framework is shown in Figure 2. [Table 1: the ten relation types (# / Relation / Meaning), e.g. 1 / B-A / beginning of aspect term.]", "Given an input sentence X = {w_1, w_2, ..., w_n} with n words, the goal of our model is to output a set of triplets T = {(a, o, s)_m}_{m=1}^{|T|} from the sentence X, where a and o denote the aspect term and opinion term, respectively.", "The sentiment polarity s of the given aspect belongs to a sentiment label set S = {POS, NEU, NEG}.", "That is, the sentiment label set comprises three sentiment polarities: positive, neutral and negative.", "The sentence X has a total of |T| triplets.", "We define ten types of relations between words in a sentence for ASTE.", "These relations are shown in Table 1. Specifically, four relations or labels, {B-A, I-A, B-O, I-O}, aim to extract aspect terms and opinion terms.", "Compared with GTS (Wu et al., 2020a), the relations we defined introduce more accurate boundary information into our model.", "[Figure 3: Table filling for triplet extraction in a sentence is illustrated.] The", "B and I denote the beginning of and inside of the term respectively, while the -A and -O subtags aim to determine the role of the term, i.e., an aspect or an opinion.", "The A and O relations in Table 1 are used to detect whether the word pair formed by two different words belongs to the same aspect or opinion term, respectively.", "The goal of the three sentiment relations {POS, NEU, NEG} is not only to detect whether a word pair matches or not, but also to judge the sentiment polarity of the aspect-opinion pair.", "Thus, we can construct a relation table for each labelled sentence with the table filling method (Miwa and Sasaki, 2014; Gupta et al., 2016).", "In Figure 3, we show the word pairs and their relations in an example sentence.", "Here, each cell corresponds to a word pair with a relation.", "The decoding details of the ASTE task are shown in Algorithm 1. 
For simplicity, we use the upper triangular table to decode triplets.", "Firstly, we use the predicted relations of all word pairs (w_i, w_i) on the main diagonal to extract aspect terms and opinion terms.", "Secondly, we need to judge whether the extracted aspect terms and opinion terms match.", "Particularly, for an aspect term a and an opinion term o, we count the predicted relations of all word pairs (w_i, w_j), where w_i ∈ a and w_j ∈ o.", "If any sentiment relation exists among the predicted relations, the aspect term and the opinion term are considered to be paired; otherwise, they are not paired.", "Finally, for judging the sentiment polarity of the aspect-opinion pair, the most frequently predicted sentiment relation is chosen. [Algorithm 1: Triplet Decoding for ASTE; input: the predicted results P of a sentence X with length n.]", "BERT (Devlin et al., 2019) has demonstrated its effectiveness in various tasks.", "We utilize BERT as the sentence encoder to extract hidden contextual representations.", "Given an input sentence X = {w_1, w_2, ..., w_n} with n tokens, the encoding layer outputs the hidden representation sequence H = {h_1, h_2, ..., h_n} at the last Transformer block.", "We utilize a biaffine attention module to capture the relation probability distribution of each word pair in a sentence, since biaffine attention has been proven effective in syntactic dependency parsing (Dozat and Manning, 2017).", "The biaffine attention process is formulated as: h_i^a = MLP_a(h_i) (1); h_j^o = MLP_o(h_j) (2); g_{i,j} = (h_i^a)^T U_1 h_j^o + U_2 (h_i^a ⊕ h_j^o) + b (3); r_{i,j,k} = exp(g_{i,j,k}) / Σ_{l=1}^{m} exp(g_{i,j,l}) (4); R = Biaffine(MLP_a(H), MLP_o(H)) (5), where a multi-layer perceptron (MLP) is used.", "The score vector r_{i,j} ∈ ℝ^{1×m} models relations between w_i and w_j, m is the number of relation types, and r_{i,j,k} denotes the score of the k-th relation type for word pair (w_i, w_j).", "The adjacency tensor R ∈ ℝ^{n×n×m} models relations between words, and each channel corresponds to a relation type.", "U_1, U_2 and b are trainable weights and bias.", "⊕ denotes concatenation.", "Eq.", "(5) summarizes the process of Eqs.", "(1) to (4).", "Motivated by CNNs, the GCN is an efficient CNN variant that operates directly on graphs (Kipf and Welling, 2017).", "A graph contains nodes and edges, and a GCN applies the convolution operation to nodes connected directly by edges to aggregate relevant information.", "Given a sentence with n words, the general approach is to use the syntactic dependency tree to construct an adjacency matrix A ∈ ℝ^{n×n} representing a graph for the sentence (Zhang et al., 2019; Sun et al., 2019).", "The element A_{ij} denotes the edge of node pair (w_i, w_j).", "Specifically, A_{ij} = 1 if the i-th node is directly connected to the j-th node, and A_{ij} = 0 otherwise.", "A few studies (Guo et al., 2019; Chen et al., 2020a; Li et al., 2021) construct soft graph edges via an attention mechanism.", "The edge of any node pair (w_i, w_j) is a probability that indicates the degree of correlation between nodes w_i and w_j.", "To model various relations between words, our EMC-GCN extends the vanilla GCN with a multi-channel adjacency tensor R^{ba} ∈ ℝ^{n×n×m}, which is constructed by the aforementioned biaffine attention module.", "Each channel of the adjacency tensor represents the modeling of a relation between words defined in Table 1. 
Then, we utilize a GCN to aggregate information along each channel for each node.", "We formulate the process as follows: H̃_k^{ba} = σ(R^{ba}_{:,:,k} H W_k + b_k) (6); H^{ba} = f(H̃_1^{ba}, H̃_2^{ba}, ..., H̃_m^{ba}) (7), where R^{ba}_{:,:,k} ∈ ℝ^{n×n} denotes the k-th channel slice of R^{ba}.", "W_k and b_k are the learnable weight and bias.", "σ is an activation function (e.g., ReLU).", "An average pooling function f(·) is applied over the node hidden representations of all channels.", "To enhance our EMC-GCN model, we introduce four types of linguistic features for each word pair, shown in Figure 4, including the part-of-speech combination, syntactic dependency type, tree-based distance, and relative position distance.", "For the syntactic dependency type, we add a self-dependency type for each word pair (w_i, w_i).", "[Figure 4: Four types of features for a sentence.] In particular, we randomly initialize four adjacency tensors", "based on these features, namely R^{psc}, R^{dep}, R^{tbd} and R^{rpd}.", "Take the syntactic dependency type feature as an example.", "If a dependency arc exists between w_i and w_j and the dependency type is nsubj, then R^{dep}_{i,j,:} is initialized to the embedding of nsubj by looking up a trainable embedding table; otherwise we initialize R^{dep}_{i,j,:} with an m-dimensional zero vector.", "Subsequently, the graph convolution operation is repeated using these adjacency tensors to obtain node representations H^{psc}, H^{dep}, H^{tbd} and H^{rpd}.", "Finally, we respectively apply the average pooling function and the concatenation operation to all node representations and all edges, formally: H = f(H^{ba}, H^{psc}, H^{dep}, H^{tbd}, H^{rpd}) (8); R = R^{ba} ⊕ R^{psc} ⊕ R^{dep} ⊕ R^{tbd} ⊕ R^{rpd} (9), where H = {h_1, h_2, ..., h_n} and R = {r_{1,1}, r_{1,2}, ..., r_{n,n}} denote node representations and edge representations of word pairs.", "In order to precisely capture the relations between words, we impose a constraint on the adjacency tensor obtained from the biaffine module, i.e., a constraint loss L^{ba} between the predicted relation distributions and the gold relation labels drawn from", "the relation set.", "Likewise, we impose the relation constraint on the four adjacency tensors produced by linguistic features.", "The corresponding constraint costs are denoted L^{psc}, L^{dep}, L^{tbd} and L^{rpd}.", "3.4.6 Refining Strategy and Prediction Layer. To obtain the representation of word pair (w_i, w_j) for label prediction, we concatenate their node representations h_i, h_j and their edge representation r_{ij}.", "Moreover, motivated by the classifier chains method (Read et al., 2011) in the multi-label classification task, we devise an effective refining strategy, which considers the implicit results of aspect and opinion extraction when judging whether word pairs match.", "Specifically, assuming that w_i is a word in an aspect term and w_j is a word in an opinion term, word pair (w_i, w_j) is more likely to be predicted as a sentiment relation, i.e., POS, NEU or NEG.", "Otherwise, they are unlikely to match.", "Thus, we introduce r_{ii} and r_{jj} to refine the representation s_{ij} of word pair (w_i, w_j), i.e., s_{ij} = h_i ⊕ h_j ⊕ r_{ij} ⊕ r_{ii} ⊕ r_{jj} (11)", "Finally, we feed the word pair representation s_{ij} into a linear layer, followed by a softmax function, to produce a label probability distribution p_{ij}, i.e., p_{ij} = softmax(W_p s_{ij} + b_p) (12), where W_p and b_p are the learnable weight and bias.", "Our goal is to minimize the objective function:", "L = L_p + α L^{ba} + β (L^{psc} + L^{dep} + L^{tbd} + L^{rpd}) (13)", "where coefficients α and β adjust the influence of the 
corresponding relation constraint losses.", "The standard cross-entropy loss L_p is used for the ASTE task, i.e., L_p = − Σ_{i}^{n} Σ_{j}^{n} Σ_{c∈C} I(y_{ij} = c) log(p_{i,j|c}).", "We evaluate our method on two ABSA datasets.", "Both of them are from the SemEval ABSA Challenges (Pontiki et al., 2014, 2015, 2016).", "The first dataset D_1 comes from Wu et al. (2020a) (https://github.com/NJUNLP/GTS).", "[Table 2: Statistics for the two groups of experiment datasets; #S = sentences, #T = triplets. D_1 — 14res: train 1,259/2,356, dev 315/580, test 493/1,008; 14lap: train 899/1,452, dev 225/383, test 332/547; 15res: train 603/1,038, dev 151/239, test 325/493; 16res: train 863/1,421, dev 216/348, test 328/525. D_2 — 14res: train 1,266/2,338, dev 310/577, test 492/994; 14lap: train 906/1,460, dev 219/346, test 328/543; 15res: train 605/1,013, dev 148/249, test 322/485; 16res: train 857/1,394, dev 210/339, test 326/514.] The", "second dataset D_2 (https://github.com/xuuuluuu/SemEval-Triplet-data/tree/master/ASTE-Data-V2-EMNLP2020) is annotated by Xu et al. (2020) and is a corrected version of the dataset proposed by Peng et al. (2020).", "Statistics for these two groups of datasets are shown in Table 2. 4.2 Baselines. We compare our EMC-GCN with state-of-the-art baselines.", "These models are briefly grouped into three categories.", "1) Pipeline methods: CMLA+, RINANTE+, Li-unified-R, and Peng-two-stage are proposed by Peng et al. (2020).", "Peng-two-stage+IOG and IMN+IOG are proposed by Wu et al. (2020a).", "2) End-to-end methods: GTS-CNN, GTS-BiLSTM, GTS-BERT (Wu et al., 2020a), OTE-MTL (Zhang et al., 2020), JET-BERT (Xu et al., 2020), S³E² (Chen et al., 2021b) and BART-ABSA (Yan et al., 2021).", "3) MRC-based methods: BMRC (Chen et al., 2021a) is a multi-turn MRC-based model, which is end-to-end in the training phase, but works as a pipeline during the inference phase.", "We use the BERT-base-uncased version (https://github.com/huggingface/transformers) as our sentence encoder.", "The AdamW optimizer (Loshchilov and Hutter, 2018) is used with a learning rate of 2 × 10^-5 for BERT fine-tuning and a learning rate of 10^-3 for the other trainable parameters.", "The dropout rate is set to 0.5.", "The hidden state dimensionalities of BERT and the GCN are set to 768 and 300, respectively.", "The EMC-GCN model is trained for 100 epochs with a batch size of 16.", "To control the influence of the relation constraint, we set the hyperparameters α and β to 0.1 and 0.01, respectively.", "Note that the number of channels equals the number of relations we defined, which is fixed due to the relation constraint we proposed.", "All sentences are parsed by Stanza (Qi et al., 2020).", "[Table 3: Experimental results on D_1 (Wu et al., 2020a); P / R / F1 for 14res, 14lap, 15res, 16res. Peng-two-stage+IOG: 58.89/60.41/59.64, 48.62/45.52/47.02, 51.70/46.04/48.71, 59.25/58.09/58.67. IMN+IOG: 59.57/63.88/61.65, 49.21/46.23/47.68, 55.24/52.33/53.75, —. GTS-CNN: 70.79/61.71/65.94, 55.93/47.52/51.38, 60.09/53.57/56.64, 62.63/66.98/64.73. GTS-BiLSTM: 67.28/61.91/64.49, 59.42/45.13/51.30, 63.26/50.71/56.29, 66.07/65.05/65.56. S³E²: 69.08/64.55/66.74, 59.43/46.23/52.01, 61.06/56.44/58.66, 71.08/63.13/66.87. GTS-BERT: 70.92/69.49/70.20, 57.52/51.92/54.58, 59.29/58.07/58.67, 68.58/66.60/67.58. BMRC (F1 only): 70.01, 57.83, 58.74, 67.49. Our EMC-GCN: 71.85/72.12/71.98, 61.46/55.56/58.32, 59.89/61.05/60.38, 65.08/71.66/68.18.] We save", "the model parameters according to the best performance of the model on the development set.", "The reported results are the average over five runs with different random seeds.", "The main experimental results are reported in Tables 3 and 4. 
Under the F1 metric, our EMC-GCN model outperforms all pipeline, end-to-end and MRC-based methods on the two groups of datasets.", "We observe that end-to-end and MRC-based methods achieve more significant improvements than pipeline methods do, as they establish the correlations between these subtasks and alleviate the problem of error propagation by jointly training multiple subtasks.", "Note that the tagging schemes of OTE-MTL and GTS-BERT are similar to table filling.", "Compared with GTS-BERT, our EMC-GCN significantly surpasses its performance by an average of 1.96% and 2.61% F1-score on D_1 and D_2, respectively.", "This improvement is attributed to the fact that our EMC-GCN can leverage relations between words and linguistic knowledge for word representation learning.", "[Table 5: F1 scores of the ablation study on D_2 (14res / 14lap / 15res / 16res) — EMC-GCN: 71.78 / 58.81 / 61.93 / 68.33; w/o Ten Relations: 70.68 / 57.71 / 59.85 / 66.48; w/o Linguistic Features: 71.22 / 58.38 / 60.62 / 67.15; w/o Relation Constraint: 70.59 / 57.28 / 59.83 / 67.89; w/o Refining Strategy: 70.62 / 56.72 / 60.23 / 67.31.] Another finding is that", "those methods with a BERT encoder, such as JET-BERT, GTS-BERT and BMRC, generally achieve better performance than other methods with a BiLSTM encoder.", "We suppose the reason is that BERT has been pre-trained on large-scale data and can provide strong language understanding ability.", "To investigate the effectiveness of different modules in EMC-GCN, we conduct an ablation study on the second dataset D_2.", "The experimental results are shown in Table 5. w/o Ten Relations denotes that EMC-GCN uses the same tagging schema as GTS (Wu et al., 2020a) with six labels.", "[Table 6: F1 scores of the three sentiment relations on D_2 (POS / NEU / NEG for 14res and 14lap) — EMC-GCN: 74.69 / 19.65 / 62.43 and 67.74 / 19.14 / 56.20; w/o Refining Strategy: 74.98 / 17.39 / 59.87 and 67.31 / 16.08 / 52.74.] Without", "the four relations {B-A, I-A, B-O, I-O}, EMC-GCN loses the boundary information of terms and the performance drops significantly.", "w/o Linguistic Features means that we remove the four types of features from EMC-GCN.", "Without the enhancement of linguistic features, the performance of our EMC-GCN is slightly degraded on 14res and 14lap, but decreased by 1.31% and 1.18% on 15res and 16res, respectively.", "As 15res and 16res contain less training data, the linguistic features can provide additional information when the training data is insufficient, which is helpful to the prediction of the model.", "w/o Relation Constraint indicates that we remove the relation constraint loss between the adjacency tensor R^{ba} and the gold labels.", "Thus, each channel in the adjacency tensor cannot precisely describe the relation dependency between words.", "As a result, the performance of EMC-GCN w/o Relation Constraint on the four sub-datasets drops significantly.", "w/o Refining Strategy denotes that we remove the implicit results of aspect and opinion extraction r_{ii} and r_{jj} from the word pair representation s_{ij}.", "Since the adjacency tensor is constrained by the gold labels, we can regard r_{ii} as a predicted label or relation probability distribution of word pair (w_i, w_i) on the main diagonal.", "Thus, we leverage the implicit results of aspect and opinion extraction as prior information to help predict the label of word pair (w_i, w_j).", "To sum up, each module of our EMC-GCN contributes to the overall performance on the ASTE task.", "The purpose of the refining strategy is to facilitate the word pair matching process based 
on the implicit results of aspect and opinion extraction.", "To verify the idea, we conduct comparative experiments on the three sentiment relations {POS, NEU, NEG} on 14res and 14lap of D_2.", "The results are shown in Table 6. Note that the function of the three sentiment relations is to detect whether a word pair matches or not and to identify the sentiment polarity of the aspect-opinion pair.", "The results show that the performance of w/o Refining Strategy declines markedly and the refining strategy works as we expected. [Figure 5: Visualization of POS and NEG relation channels of the adjacency tensor R^{ba} obtained from the biaffine attention.]", "To investigate the effect of relations between words, we visualize the channel slice of the adjacency tensor R^{ba} corresponding to a specific relation.", "Consider the sample sentence 'air has higher resolution but the fonts are small.' from the 14lap dataset.", "This sentence comprises two triplets, {(resolution, higher, POS), (fonts, small, NEG)}.", "As shown in the left of Figure 5, the visualized adjacency information of higher and resolution corresponds to the POS relation channel.", "In the visualization, higher and resolution are highly related to each other.", "As a result, they convey their own information to each other.", "Similarly, in the right of Figure 5, fonts can receive the node representation and negative sentiment of small in the NEG relation channel.", "Meanwhile, small can also obtain the information of the opinion target it describes.", "Thus, our EMC-GCN model can readily predict the correct labels of word pairs (fonts, small) and (resolution, higher).", "To further analyze the role of linguistic features in the ASTE task, we visualize the adjacency tensors of the four linguistic features.", "We use the l2 norm of the feature vector in the adjacency tensor to represent the relevance score of the corresponding word pair.", "In Figure 6, the first panel is the visualization of the adjacency tensor R^{psc} from the part-of-speech combination feature; we observe that the score between an adjective and a noun is higher, because an adjective and a noun easily form an aspect-opinion pair, while the score between adjectives is lower, since two adjectives are usually not related and are likely to bring noise to each other.", "In the visualization of R^{dep}, we find that each word only has a score with the words it directly depends on, with different relevance scores according to different syntactic dependency types.", "The visualization of R^{tbd} shows the relevance score calculated for each word with other words at different tree-based distances.", "The visualization of R^{rpd} demonstrates that the relevance of two adjacent words is greater than that of long-distance word pairs.", "In summary, all linguistic features we devised contribute to the ASTE task.", "A case study is given in Figure 7. 
In this example, the aspect terms and opinion terms are highlighted in blue and yellow, respectively.", "The red line indicates that the aspect term and opinion term match and form a triplet with positive sentiment.", "The gold opinion term light is hard for GTS-BERT and BMRC to identify, while easy is predicted correctly by all methods, since light is farther from transport than easy.", "Thus, they ignore the triplet (transport, light, positive), while our EMC-GCN can precisely extract it.", "We argue the key factor is that light and transport can establish significant connections through the sentiment relation and linguistic features.", "In this paper, we propose an EMC-GCN architecture for the ASTE task.", "To exploit relations between words, we first devise a multi-channel graph structure for modeling the different relation types of each word pair.", "Then, we utilize the graph convolution operation over all channels to learn relation-aware node representations.", "Furthermore, we consider linguistic features to enhance the GCN-based model.", "Finally, we design an effective refining strategy on EMC-GCN for better triplet extraction.", "Extensive experiments on benchmark datasets show that our EMC-GCN model consistently outperforms all baselines.", "In the future, we will analyse the roles of linguistic features and the effects of their combinations.", "This work was supported in part by the National Key R&D Program of China under Grant 2019YFF0303300 and Subject II under Grant 2019YFF0303302, in part by the National Natural Science Foundation of China under Grants 61906018 and 62076032, in part by the 111 Project under Grant B08004, and in part by the Fundamental Research Funds for the Central Universities under Grant 2021RC36.", "We appreciate constructive feedback from the anonymous reviewers." ]
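Eqs. (1)-(5) above define the biaffine module that turns BERT representations into an n × n × m relation adjacency tensor. A minimal PyTorch sketch follows; it is our reconstruction, not the released implementation: the class and parameter names are illustrative, the 300-dimensional MLP size is borrowed from the GCN hidden size as an assumption, and the bias b of Eq. (3) is folded into the linear layer.

```python
import torch
import torch.nn as nn

class BiaffineRelationScorer(nn.Module):
    """Sketch of Eqs. (1)-(5): score m relation types for every word pair."""
    def __init__(self, hidden=768, mlp=300, m=10):
        super().__init__()
        self.mlp_a = nn.Sequential(nn.Linear(hidden, mlp), nn.ReLU())  # Eq. (1)
        self.mlp_o = nn.Sequential(nn.Linear(hidden, mlp), nn.ReLU())  # Eq. (2)
        self.U1 = nn.Parameter(torch.randn(m, mlp, mlp) * 0.01)
        self.U2 = nn.Linear(2 * mlp, m)  # the bias b of Eq. (3) lives here

    def forward(self, H):
        # H: (batch, n, hidden) contextual word representations from BERT.
        ha, ho = self.mlp_a(H), self.mlp_o(H)
        # Bilinear term (h_i^a)^T U_1 h_j^o for every pair and relation type.
        bilinear = torch.einsum("bim,kmn,bjn->bijk", ha, self.U1, ho)
        n = H.size(1)
        pair = torch.cat([ha.unsqueeze(2).expand(-1, -1, n, -1),
                          ho.unsqueeze(1).expand(-1, n, -1, -1)], dim=-1)
        g = bilinear + self.U2(pair)      # g: (batch, n, n, m), Eq. (3)
        return torch.softmax(g, dim=-1)   # relation adjacency tensor R, Eq. (4)
```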
[ "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "abstain", "abstain", "method", "abstain", "result", "method", "abstain", "method", "abstain", "abstain", "result", "objective", "abstain", "abstain", "objective", "objective", "abstain", "method", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "method", "method", "method", "result", "abstain", "other", "other" ]
[ "Over the last few years two promising research directions in low-resource neural machine translation (NMT) have emerged.", "The first focuses on utilizing high-resource languages to improve the quality of low-resource languages via multilingual NMT.", "The second direction employs monolingual data with self-supervision to pre-train translation models, followed by fine-tuning on small amounts of supervised data.", "In this work, we join these two lines of research and demonstrate the efficacy of monolingual data with self-supervision in multilingual NMT.", "We offer three major results:", "(i) Using monolingual data significantly boosts the translation quality of low-resource languages in multilingual models.", "(ii) Self-supervision improves zero-shot translation quality in multilingual models.", "(iii) Leveraging monolingual data with self-supervision provides a viable path towards adding new languages to multilingual models, getting up to 33 BLEU on WMT ro-en translation without any parallel data or back-translation.", "Recent work has demonstrated the efficacy of multilingual neural machine translation (multilingual NMT) on improving the translation quality of low-resource languages (Firat et al., 2016; Aha-roni et al., 2019) as well as zero-shot translation (Ha et al., 2016; Johnson et al., 2017; Arivazhagan et al., 2019b).", "The success of multilingual NMT on low-resource languages relies heavily on transfer learning from high-resource languages for which copious amounts of parallel data is easily accessible.", "However, existing multilingual NMT approaches often do not effectively utilize the abundance of monolingual data, especially in low-resource languages.", "On the other end of the spectrum, self-supervised learning methods, consuming 0.01 0.1 1 10 100 1000 en cs fr ru zh es fi de et lv lt ro hi kk tr gu Parallel Monolingual Figure 1: Number of parallel and monolingual training samples in millions for each language in WMT training corpora.", "only monolingual data, have achieved great success on transfer learning (Devlin et al., 2019) and unsupervised NMT (Lample et al., 2018; Artetxe et al., 2018) without fully benefiting from the rich learning signals offered by the bilingual data of multiple languages.", "In this work, we propose to combine the bene-ficial effects of multilingual NMT with the self-supervision from monolingual data.", "Compared with multilingual models trained without any monolingual data, our approach shows consistent improvements in the translation quality of all languages, with greater than 10 BLEU points improvements on certain low-resource languages.", "We further demonstrate improvements in zero-shot translation, where our method has almost on-par quality with pivoting-based approaches, without using any alignment or adversarial losses.", "The most interesting aspect of this work, however, is that we introduce a path towards effectively adding new unseen languages to a multilingual NMT model, showing strong translation quality on several language pairs by leveraging only monolingual data with self-supervised learning, without the need for any parallel data for the new languages.", "We propose a co-training mechanism that combines supervised multilingual NMT with monolingual data and self-supervised learning.", "While several pre-training based approaches have been studied in the context of NMT (Dai and Le, 2015; Conneau and Lample, 2019; Song et al., 2019), we proceed with Masked Sequence-to-Sequence (MASS) (Song et al., 2019) given its success on 
unsupervised and low-resource NMT, and adapt it to the multilingual setting.", "MASS adapts the masked de-noising objective (De-vlin et al., 2019; Raffel et al., 2019) for sequence-to-sequence models, by masking the input to the encoder and training the decoder to generate the masked portion of the input.", "To utilize this objective function for unsupervised NMT, Song et al. (2019) enhance their model with additional improvements, including language embeddings, target language-specific attention context projections, shared target embeddings and softmax parameters and high variance uniform initialization for target attention projection matrices 1 .", "We use the same set of hyper-parameters for self-supervised training as described in (Song et al., 2019).", "However, while the success of MASS relies on the architectural modifications described above, we find that our multilingual NMT experiments are stable even in the absence of these techniques, thanks to the smoothing effect of multilingual joint training.", "We also forego the separate source and target language embeddings in favour of pre-pending the source sentences with a < 2 xx > token (John-son et al., 2017).", "We train our models simultaneously on supervised parallel data using the translation objective and on monolingual data using the MASS objective.", "To denote the target language in multilingual NMT models we prepend the source sentence with the < 2 xx > token denoting the target language.", "We use the parallel and monolingual training data provided with the WMT corpus, for 15 languages to and from English.", "The amount of parallel data available ranges from more than 60 million sentence pairs as in En-Cs to roughly 10k sentence pairs as in En-Gu.", "We also collect additional monolingual data from WMT news-crawl, news-commentary, common-crawl, europarl-v9, news-discussions and wikidump datasets in all 16 languages including English.", "2 The amount of monolingual data varies from 2 million sentences in Zh to 270 million in De.", "The distribution of our parallel and monolingual data is depicted in Figure 1. 
3.2 Data Sampling Given the data imbalance across languages in our datasets, we use a temperature-based data balancing strategy to over-sample low-resource languages in our multilingual models (Arivazhagan et al., 2019b).", "We use a temperature of T = 5 to bal-ance our parallel training data.", "When applicable, we sample monolingual data uniformly across languages since this distribution is not as skewed.", "For experiments that use both monolingual and parallel data, we mix the two sources at an equal ratio (50% monolingual data with self-supervision and 50% parallel data).", "All experiments are performed with the Transformer architecture (Vaswani et al., 2017) using the open-source Tensorflow-Lingvo implementation (Shen et al., 2019).", "Specifically, we use the Transformer Big model containing 375M parameters (6 layers, 16 heads, 8192 hidden dimension) (Chen et al., 2018) and a shared source-target Sen-tencePiece model (SPM) 3 (Kudo and Richardson, 2018).", "We use a vocabulary size of 32k for the bilingual models and 64k for the multilingual mod-2 Followed the versions recommended by WMT'19 shared task, as in http://statmt.org/wmt19/translation-task.html 3 https://github.com/google/sentencepiece BLEU -10.0 -5.0 0.0 5.0 10.0 15.0 cs fr ru zh es fi de et lv lt ro hi kk tr gu Multilingual NMT Multilingual NMT + Mono.", "els.", "Different SPMs are trained depending on the set of languages supported by the model.", "We evaluate the performance of the models using SacreBLEU (Post, 2018) on standard WMT validation and test sets (Papineni et al., 2002).", "The performance of our bilingual baselines for all 30 English-centric language pairs are reported in Table 1. We compare the performance of bilingual models, multilingual models trained with just supervised data for 30 language pairs (15 languages to and from English) and multilingual models trained with a combination of supervised and monolingual data in Figure 2. 
"High-Resource Translation Our results suggest that a single multilingual model is able to match the quality of individual bilingual models with a gap of less than 2 BLEU points for most high-resource languages, with the exception of Chinese (Zh).", "The slight quality regression is not surprising, given the large number of languages competing for capacity within the same model (Arivazhagan et al., 2019b).", "We find that adding additional monolingual data improves the multilingual model quality across the board, even for high-resource language pairs.", "Low-Resource Translation From Figure 2, we observe that our supervised multilingual NMT model significantly improves the translation quality for most low and medium-resource languages compared with the bilingual baselines.", "Adding additional monolingual data leads to an additional improvement of 1-2 BLEU for most medium-resource languages.", "For the lowest-resource languages like Kazakh (kk), Turkish (tr) and Gujarati (gu), we can see that multilingual NMT alone is not sufficient to reach high translation quality.", "The addition of monolingual data has a large positive impact on very low-resource languages, significantly improving quality over the supervised multilingual model.", "These improvements range from 3-5 BLEU in the en→xx direction to more than 5 BLEU for xx→en translation.", "Zero-Shot Translation We next evaluate the effect of training on additional monolingual data on zero-shot translation in multilingual models.", "Table 2 demonstrates the zero-shot performance of our multilingual model that is trained on 30 language pairs and evaluated on French (fr)-German (de) and German (de)-Czech (cs), when trained with and without monolingual data.", "To compare with the existing work on zero-shot translation, we also evaluate the performance of multilingual models trained on just the relevant languages (en-fr-de for fr-de translation, en-cs-de for cs-de translation).", "We observe that the additional monolingual data significantly improves the quality of zero-shot translation, often resulting in a 3-6 BLEU increase on all zero-shot directions compared to our multilingual baseline.", "We hypothesize that the additional monolingual data seen during the self-supervised training process helps better align representations across languages, akin to the smoothing effect in semi-supervised learning (Chapelle et al., 2010).", "We leave further exploration of this intriguing phenomenon to future work.", "Inspired by the effectiveness of monolingual data in boosting low-resource language translation quality, we continue with a stress-test in which we completely remove the available parallel data from our multilingual model, one language at a time, in order to observe the unsupervised machine translation quality for the missing language.", "Results of this set of experiments are detailed in Table 3.
We find that simply adding monolingual data for a new language to the training procedure of a multilingual model is sufficient to obtain strong translation quality for several languages, often attaining within a few BLEU points of the fully supervised multilingual baseline, without the need for iterative back-translation.", "We also notice significant quality improvements over models trained with just self-supervised learning using monolingual data for a variety of languages.", "On WMT ro-en, the performance of our model exceeds XLM (Conneau and Lample, 2019) by over 1.5 BLEU and matches bilingual MASS (Song et al., 2019), without utilizing any back-translation.", "This suggests that jump-starting the iterative back-translation process from multilingual models might be a promising avenue for supporting new languages.", "Our work builds on several recently proposed techniques for multilingual NMT and self-supervised representation learning.", "While massively multilingual models have obtained impressive quality improvements for low-resource languages as well as zero-shot scenarios (Aharoni et al., 2019; Arivazhagan et al., 2019a), it has not yet been shown how these massively multilingual models could be extended to unseen languages, beyond the pipelined approaches (Currey and Heafield, 2019; Lakew et al., 2019).", "On the other hand, self-supervised learning approaches have excelled at downstream cross-lingual transfer (Devlin et al., 2019; Raffel et al., 2019; Conneau et al., 2019), but their success for unsupervised NMT (Conneau and Lample, 2019; Song et al., 2019) currently lacks robustness when languages are distant or monolingual data domains are mismatched (Neubig and Hu, 2018; Vulic et al., 2019).", "We observe that these two lines of research can be quite complementary and can compensate for each other's deficiencies.", "7 Conclusion and Future Directions We present a simple framework to combine multilingual NMT with self-supervised learning, in an effort to jointly exploit the learning signals from multilingual parallel data and monolingual data.", "We demonstrate that combining multilingual NMT with monolingual data and self-supervision", "(i) improves the translation quality for both low and high-resource languages in a multilingual setting,", "(ii) leads to on-par zero-shot capability compared with competitive bridging-based approaches and", "(iii) is an effective way to extend multilingual models to new unseen languages.", "Future work should explore techniques like iterative back-translation (Hoang et al., 2018) for further improvement and scaling to larger model capacities and more languages (Arivazhagan et al., 2019b; Huang et al., 2019) to maximize transfer across languages and across data sources." ]
[ "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "objective", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "other", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "abstain", "method", "objective", "other", "other", "other", "other" ]
[ "Recent parameter-efficient language model tuning (PELT) methods manage to match the performance of fine-tuning with much fewer trainable parameters and perform especially well when training data is limited.", "However, different PELT methods may perform rather differently on the same task, making it nontrivial to select the most appropriate method for a specific task, especially considering the fast-growing number of new PELT methods and tasks.", "In light of model diversity and the difficulty of model selection, we propose a unified framework, UNIPELT, which incorporates different PELT methods as submodules and learns to activate the ones that best suit the current data or task setup via gating mechanism.", "On the GLUE benchmark, UNIPELT consistently achieves 1~4% gains compared to the best individual PELT method that it incorporates and outperforms fine-tuning under different setups.", "Moreover, UNIPELT generally surpasses the upper bound that takes the best performance of all its submodules used individually on each task, indicating that a mixture of multiple PELT methods may be inherently more effective than single methods.", "1 1 Introduction As pre-trained language models (PLMs) (Devlin et al., 2019) grow larger and larger (Brown et al., 2020), it becomes increasingly infeasible to perform conventional fine-tuning, where separate replicas of the model parameters are modified per single task.", "To solve the issue, there has recently been a surge of studies on p arametere fficient l anguage model t uning (PELT), namely how to effectively tune the PLMs with fewer trainable parameters.", "Existing PELT research generally aims at achieving performance comparable to fine-tuning with Work was done during internship at Meta AI.", "as few trainable parameters as possible, which has seen significant progress the task-specific trainable parameters used in most recent approaches (Lester et al., 2021; Guo et al., 2021) are almost negligible compared to the total parameters of the PLM (<1%).", "A more challenging yet less studied problem is whether one can achieve better performance than fine-tuning with fewer parameters.", "Recent studies (He et al., 2021; Li and Liang, 2021; Karimi Mahabadi et al., 2021b) find that some PELT methods are more effective than fine-tuning on certain tasks when training data is limited, possibly due to the reduced risk of overfitting.", "However, as found in our experiments (Table 1), different PELT methods exhibit diverse characteristics and perform rather differently on the same task, which 6253 makes it nontrivial to select the most appropriate method for a specific task, especially considering the fast-growing number of new PELT methods and tasks (Ding and Hu, 2021).", "In light of the diverse performance of PELT methods and the cost of selecting the best method, we propose a unified PELT framework, named UNIPELT, which incorporates different PELT methods as submodules and learns to dynamically activate the (combination of) submodules that best suit the current data or task setup.", "As a result, model selection is no longer needed and consistently better performance is achieved under different setups.", "The activation of each submodule in UNIPELT is controlled by gating mechanism , which learns to favor (assign more weight to) the submodules that positively contribute to a given task.", "In addition, since the number of parameters introduced by each submodule is generally small, combining multiple methods leads to negligible losses in model efficiency.", 
"We select four representative PELT methods for our study adapter (Houlsby et al., 2019), prefix-tuning (Li and Liang, 2021), LoRA (Hu et al., 2021), and BitFit (Ben Zaken et al., 2021), which largely cover the major categories of PELT methods.", "We perform two sets of analysis that carefully examines", "(i) the characteristics of individual PELT methods and", "(ii) their effectiveness when coordinated by UNIPELT under various setups.", "2 Extensive experiments on the GLUE benchmark (Wang et al., 2019), with 32 setups (8 tasks 4 data sizes) and 1,000+ runs, not only reveal the diverse behavior of PELT methods, but also show that UNIPELT is more effective and robust than using each method alone in various task and data setups.", "Specifically, UNIPELT consistently improves the best submodule that it incorporates by 1~4 points and even outperforms fine-tuning, achieving the best average performance on the GLUE benchmark under different setups.", "Moreover, UNIPELT generally surpasses the upper bound that takes the best performance of all its submodules used individually on each task, which suggests that UNIPELT maintains (near) optimal performance under different setups.", "The fact that UNIPELT outperforms the upper bound also implies that a mixture of PELT methods involving different parts of the PLM architecture may be inherently more effective than 2 BitFit is not included in UNIPELT as it typically performs the worst in our preliminary experiments.", "Contributions .", "(1) We conduct a comprehensive study of representative PELT methods and carefully examine their differences and commonalities in terms of performance and characteristics.", "(2) We propose a unified PELT framework that can incorporate existing methods as submodules and automatically learn to activate the appropriate submodules for a given task.", "(3) Our proposed framework achieves better average performance than fine-tuning and the PELT methods that it incorporates under various setups, often performing the best and never the worst at per-task level, exhibiting supe-rior effectiveness and robustness with negligible losses in model efficiency.", "PLMs can be used as feature extractors where only the top layers or prediction head are fine-tuned without additional parameters (Lee et al., 2019).", "However, such fine-tuning approaches generally lead to degenerate model performance that is much worse than fine-tuning all parameters (Lee et al., 2019; Pfeiffer et al., 2021).", "A recent method BitFit (Ben Zaken et al., 2021) only tunes the bias terms of the PLM and is shown to achieve performance comparable to fine-tuning on certain tasks when training data is limited.", "Therefore, we select BitFit as the representative of this category for analysis.", "Alternatively, one may fix the entire PLM and introduce a small number of new trainable parameters.", "Notable examples in this category include adapter (Houlsby et al., 2019) and its extensions (Pfeif-fer et al., 2021; Karimi Mahabadi et al., 2021b), prefix-tuning (Li and Liang, 2021) and its extensions (Lester et al., 2021), and additive methods (Guo et al., 2021; Hu et al., 2021).", "Next, we will briefly describe these methods to facilitate the introduction of our proposed framework.", "An illustration is shown in Fig. 
"Adapter.", "Adapter (Houlsby et al., 2019) adds a trainable bottleneck layer after the feedforward network in each Transformer layer of the PLM.", "A bottleneck layer consists of a down+up projection pair that shrinks and recovers the size of token hidden states.", "Mathematically, if we denote the output of the feedforward network after residual connection and layer normalization as h_FN, with hidden size D_hidden and bottleneck size D_mid, then the output of a bottleneck layer h_A is: h_A = W_up^T φ(W_down^T h_FN), (1) where W_down ∈ R^(D_hidden×D_mid), W_up ∈ R^(D_mid×D_hidden), φ is a nonlinear activation function, and the bias terms are omitted for brevity.", "The parameters in layer normalization and the final prediction head sometimes are also fine-tuned, depending on the specific adapter variants.", "Adapter has been shown to be on par with fine-tuning and sometimes exhibits better effectiveness in the low-resource setting (He et al., 2021).", "Later studies extend adapter to multi-lingual (Pfeiffer et al., 2020b) and multi-task (Karimi Mahabadi et al., 2021b) settings, or further reduce its trainable parameters (Karimi Mahabadi et al., 2021a), which can be easily incorporated into UNIPELT as a replacement of the vanilla adapter.", "Prefix-tuning.", "Prefix-tuning (Li and Liang, 2021) prepends a number of task-specific trainable vectors to the input of multi-head attention in each Transformer layer, which the original tokens can attend to as if they were virtual tokens.", "Specifically, we denote the original sequence length L_0, the number of trainable vectors (i.e., prefix length) L, and the Transformer layer input h_in ∈ R^(D_hidden×L_0).", "First, three linear projections W_Q, W_K, W_V ∈ R^(D_hidden×D_hidden) transform h_in into Query Q, Key K, and Value V.", "Then, two prefix matrices P_K and P_V ∈ R^(D_hidden×L) are prepended to K and V.", "To stabilize optimization, the prefix matrix P is reparameterized by a feedforward network: P' = W_up^T φ(W_down^T P), (2) where W_down ∈ R^(D_hidden×D_mid), W_up ∈ R^(D_mid×2·N_layer·D_hidden), and N_layer denotes the number of Transformer layers.", "The parameters of this network can be discarded after training, and only the 2·N_layer prefix matrices ∈ R^(D_hidden×L) are needed (2 matrices for each layer).", "Prefix-tuning was originally evaluated on natural language generation and we adapt it to understanding tasks.", "A follow-up method named prompt-tuning (Lester et al., 2021) further reduces task-specific parameters by limiting the prefix to the first layer, but only performs competitively with very large model sizes (billions of total parameters), and is thus not considered in our study.", "Note that prefix-tuning (or prompt-tuning) is different from prompt-based fine-tuning methods (Schick and Schütze, 2021; Gao et al., 2021) (see App. A for specific differences).",
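As a concrete reading of Eq. (1), a minimal PyTorch sketch of the adapter bottleneck might look as follows. This is not the papers' released code; D_mid = 48 follows the default used later in this study, and the choice of nonlinearity is an assumption.

```python
import torch
import torch.nn as nn

class AdapterBottleneck(nn.Module):
    """Eq. (1): h_A = W_up^T . phi(W_down^T . h_FN); the surrounding
    residual connection discussed later in the paper is omitted here."""
    def __init__(self, d_hidden=768, d_mid=48):
        super().__init__()
        self.down = nn.Linear(d_hidden, d_mid)   # W_down (+ bias, omitted in Eq. 1)
        self.up = nn.Linear(d_mid, d_hidden)     # W_up
        self.act = nn.ReLU()                     # phi; the exact nonlinearity varies

    def forward(self, h_fn):
        return self.up(self.act(self.down(h_fn)))

h = torch.randn(2, 128, 768)          # (batch, seq_len, D_hidden)
print(AdapterBottleneck()(h).shape)   # torch.Size([2, 128, 768])
```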
"Additive Methods.", "Additive PELT methods treat the model parameters after fine-tuning as an addition of the pre-trained parameters θ_pre-trained and task-specific differences δ_task, where θ_pre-trained is fixed and a new (sub)set of model parameters is added on top: θ_task = θ_pre-trained + δ_task.", "There are various ways to parameterize δ_task, leading to different additive methods such as LoRA (Hu et al., 2021), diff pruning (Guo et al., 2021), and side-tuning (Zhang et al., 2020).", "We take LoRA as a representative and incorporate it into UNIPELT.", "Other methods are conceptually similar and can be incorporated in the same fashion.", "LoRA introduces trainable low-rank matrices and combines them with the original matrices in the multi-head attention.", "Specifically, two matrices W_down ∈ R^(D_hidden×D_mid) and W_up ∈ R^(D_mid×D_hidden) are added for the query and key projections along with the original matrices W_Q and W_K ∈ R^(D_hidden×D_hidden): Q = (W_Q^T + α W_up^T W_down^T) h_in, (3) where α is a fixed scalar hyperparameter for scaling the task-specific differences.", "The form of the trainable matrices in LoRA is quite similar to those in adapter or prefix-tuning, but there is no activation function in between.",
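Similarly, Eq. (3) can be sketched as a frozen projection plus a scaled low-rank update. This is illustrative only; the rank of 8 matches the default used later in this study, while the alpha value and zero initialization of the up-projection are assumptions.

```python
import torch
import torch.nn as nn

class LoRAProjection(nn.Module):
    """Eq. (3): Q = (W_Q^T + alpha * W_up^T W_down^T) h_in, i.e. the frozen
    pre-trained projection plus a trainable low-rank update scaled by alpha."""
    def __init__(self, d_hidden=768, rank=8, alpha=2.0):
        super().__init__()
        self.w_q = nn.Linear(d_hidden, d_hidden, bias=False)
        self.w_q.weight.requires_grad = False          # pre-trained weight stays frozen
        self.w_down = nn.Linear(d_hidden, rank, bias=False)
        self.w_up = nn.Linear(rank, d_hidden, bias=False)
        nn.init.zeros_(self.w_up.weight)               # update starts at zero
        self.alpha = alpha

    def forward(self, h_in):
        return self.w_q(h_in) + self.alpha * self.w_up(self.w_down(h_in))

q = LoRAProjection()(torch.randn(2, 128, 768))
print(q.shape)  # torch.Size([2, 128, 768])
```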
"Given a large PLM M with size |M| that cannot be fine-tuned directly due to computational or storage cost, suppose that we have a list of PELT methods {m_i}, the trainable parameters of which are negligible (i.e., Σ_i |m_i| ≪ |M|); our goal is to design a unified PELT framework that incorporates {m_i} as submodules and learns to dynamically activate (up-weight) different submodules when appropriate under different scenarios, such that one could achieve satisfactory results in terms of both model effectiveness and robustness without the hassle of permuting all the method × task × data combinations.", "Motivation & Intuition.", "During the analysis of individual PELT methods, we observe that different PELT methods exhibit diverse characteristics and perform rather differently on the same task.", "For example, prefix-tuning generally performs well on natural language inference tasks regardless of the size of training data.", "Also, as can be seen in Fig. 1 and Sec. 2, different PELT methods often involve different parts of the PLM architecture (e.g., before multi-head attention for prefix-tuning and after the feedforward layer for adapter), making it feasible to combine multiple PELT methods without (directly) interfering with each other.", "In light of the two observations above, we propose a unified PELT framework, UNIPELT, which takes a hybrid approach by incorporating multiple PELT methods as submodules.", "At a high level, UNIPELT improves over single PELT methods due to two factors.", "First, UNIPELT learns to activate (up-weight) the submodules that best suit the current task or specific data sample and deactivate (down-weight) the rest.", "Second, we find that UNIPELT generally performs better than taking the best performance of all its submodules used individually on each task, suggesting that there could be some compounding effects that lead to better model effectiveness when multiple PELT methods (that modify different parts of the PLM) are used.", "Next, we will introduce how different PELT methods can be incorporated into UNIPELT via gating mechanism.", "Gating Mechanism.", "To achieve fine-grained control of submodule (de)activation, we add a trainable gate G_mi for each submodule m_i ∈ {A, P, L} in every Transformer layer (see Fig. 1).", "The letters A, P, L stand for Adapter, Prefix-tuning, and LoRA, respectively.", "Intuitively, if m_i is useful for a given data-task setup (or a particular instance), the gate output for m_i would be higher such that m_i plays a more important role.", "The actual interplay of submodules, however, is more complicated given the interdependency of the submodules and the compounding effects of multiple layers.", "Specifically, for adapter, there is a residual connection between the feedforward network and the adapter submodule that sums the adapter input (before normalization) h_F and output h_A as its final output: h'_A = h_A + h_F.", "We design a gating function G_A ∈ (0, 1) that estimates the importance of adapter by its direct input h_FN using a feedforward network with sigmoid activation, and then scales its output: h'_A = G_A · h_A + h_F.", "The adapter submodule is effectively bypassed if G_A → 0.", "Similarly, for prefix-tuning, we design a gating function G_P ∈ (0, 1) that is applied to the prefix vectors (P_K and P_V) with the representation of the original tokens (K and V) intact.", "In this way, the impact of the prefix would be diminished if the gate output of the prefix-tuning submodule is low.", "The gating function G_P is estimated from the Transformer layer input h_in with another feedforward network.", "As for LoRA, we note that there is already a constant scaling factor α in its original design that resembles the purpose of our gating mechanism.", "We thus simply make the factor learnable per layer by a third feedforward network that takes h_in as input, instead of specifying a constant manually: θ_task = θ_pre-trained + G_L · δ_task.", "Despite the seeming simplicity of UNIPELT, we note that it is nontrivial for a unified approach to work well under different scenarios.", "Naively combining different PELT methods as a hybrid approach could lead to mixed or worse performance than using individual methods, as observed in both our experiments and prior studies (Hu et al., 2021).",
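To make the gating concrete, here is a sketch of the gated adapter computation h'_A = G_A · h_A + h_F under our reading of the paper. This is not the released code; in particular, whether the gate is predicted per token or pooled per sentence is an assumption here.

```python
import torch
import torch.nn as nn

class GatedAdapter(nn.Module):
    """UNIPELT-style gated adapter: a scalar gate G_A in (0, 1), predicted
    from the adapter input h_FN by a feedforward network with sigmoid
    activation, scales the adapter output before the residual h_F is added."""
    def __init__(self, d_hidden=768, d_mid=48):
        super().__init__()
        self.down = nn.Linear(d_hidden, d_mid)
        self.up = nn.Linear(d_mid, d_hidden)
        self.act = nn.ReLU()
        self.gate = nn.Sequential(nn.Linear(d_hidden, 1), nn.Sigmoid())

    def forward(self, h_fn, h_f):
        h_a = self.up(self.act(self.down(h_fn)))   # plain adapter output
        g_a = self.gate(h_fn)                      # (batch, seq_len, 1)
        return g_a * h_a + h_f                     # gate ~ 0 bypasses the adapter

h_fn = torch.randn(2, 128, 768)   # adapter input (after layer norm)
h_f = torch.randn(2, 128, 768)    # residual branch (before normalization)
print(GatedAdapter()(h_fn, h_f).shape)
```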
"We conduct extensive experiments with 8 tasks × 4 data sizes × 7 methods × 5 runs per setup, along with additional analysis for particular methods, resulting in 1,000+ runs in total.", "Task Setup.", "We conduct experiments on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019), which involves four types of natural language understanding tasks including linguistic acceptability (CoLA), sentiment analysis (SST-2), similarity and paraphrase tasks (MRPC, STS-B, QQP), and natural language inference (MNLI, QNLI, RTE).", "We exclude the WNLI dataset following prior studies (Houlsby et al., 2019; Devlin et al., 2019).", "Data Setup.", "We mainly consider a low-resource setting where training data is limited and the performance of different methods varies much.", "We sample a small subset of the training set for each task with size K ∈ {100, 500, 1000}.", "As it is infeasible to submit considerable runs to the GLUE leaderboard (2 submissions/day), we take 1,000 samples from the training set as the development set to select the best checkpoint and use the original development set as the test set.", "(Unlike adapter or LoRA, the prefix-tuning submodule cannot be fully eliminated by its gate, due to the softmax operation in multi-head attention.)", "To reduce variance, we shuffle the data with 5 random seeds and report the average performance.", "Additionally, we consider a high-resource setting where the whole training set is used and the best performance on the GLUE development set is reported.", "Compared Methods.", "We mainly compare UNIPELT with fine-tuning and four representative PELT methods: adapter (Houlsby et al., 2019), prefix-tuning (Li and Liang, 2021), BitFit (Ben Zaken et al., 2021), and LoRA (Hu et al., 2021).", "For completeness, we consider two model variants, UNIPELT (AP) and UNIPELT (APL), which incorporate 2 and 3 PELT methods, respectively.", "Implementation Details.", "We use BERT-base (Devlin et al., 2019) as the base model in the experiments.", "Consistent results are observed in our preliminary experiments with BART-large (Lewis et al., 2020) (provided in App. C).", "We implement and evaluate all the methods in the same codebase to ensure a fair comparison.", "We largely follow the default hyperparameters of different methods and keep them the same on all the tasks for generalizability.", "We set the prefix length L = 10, adapter bottleneck size D_mid = 48, and LoRA rank D_mid = 8 if not specified otherwise.", "More implementation and hyperparameter details can be found in App. B.", "4.2 Analysis of Individual PELT Methods In Table 1, we show the performance of different methods on the GLUE benchmark with various sizes of training data.", "The results on the development sets are generally consistent with the test sets and are provided in App. D.", "Although the average performance of different methods over 8 tasks is sometimes similar, the differences between tasks are quite significant under certain setups and can be as large as 5~9 points on a specific task (e.g., STS-B and MNLI, K = 500), even when excluding cases where some methods fail to learn effectively (e.g., prefix-tuning on QQP, K = 100).", "(While these hyperparameters may lead to differences in trainable parameters, we keep them for analysis as they are used by the official implementations.)", "Also, we observe that more trainable parameters do not guarantee better results.", "Next, we analyze each individual PELT method more closely.", "Analysis of Adapter.", "The performance of adapter is relatively stable: there is no significantly better or worse result than fine-tuning consistently across different tasks or sizes of training data.", "In general, adapter is slightly worse than fine-tuning in most cases.", "We do not observe that adapter consistently outperforms fine-tuning in the low-resource setting as in He et al. (2021), possibly because they tune model hyperparameters on each task, which could be computationally prohibitive when there are considerable tasks.",
"For example, they choose the bottleneck size D_mid from {64, 128, 256}, while D_mid = 48 is fixed across tasks in our experiments.", "Also, we only add one adapter in each Transformer layer instead of two, following Pfeiffer et al. (2021).", "These two differences result in 62.4%~90.5% fewer parameters than the adapter used in He et al. (2021).", "To further study the effect of bottleneck size D_mid in adapter, we increase D_mid and re-evaluate adapter on a setup where it performs poorly (CoLA, K = 100).", "As shown in Fig. 2, the performance of adapter increases gradually and becomes significantly better only when D_mid = 256, which involves 5.3× more trainable parameters than the adapter used originally (D_mid = 48), 4.3× more than UNIPELT (AP), and 3.4× more than UNIPELT (APL), suggesting that a larger bottleneck size could be beneficial when adapter learns ineffectively.", "On the other hand, there are certain tasks (e.g., STS-B) where adapter largely outperforms competitive methods such as prefix-tuning and LoRA regardless of the size of training data, suggesting that one should favor adapter over other PELT methods under certain scenarios as well.", "Analysis of Prefix-tuning. Prefix-tuning performs poorly with K ∈ {100, 500} and becomes on par with fine-tuning when K reaches 1000.", "We also observe that prefix-tuning fails to learn effectively on certain tasks when the training data is limited (e.g., K = 100 on SST-2 and K = 500 on QQP), leading to unsatisfactory performance and (or) large variance across different runs.", "Similar phenomena have been observed in a concurrent study (Gu et al., 2021) on few-shot prompt-tuning.", "To ensure that the poor performance of prefix-tuning is not due to its fewer trainable parameters (based on its default setting), we further increase the prefix length to L = 50 such that its trainable parameters are comparable to adapter, and re-evaluate prefix-tuning on all 8 tasks with K = 100.", "For the 4 tasks where prefix-tuning (L = 10) performs poorly (SST-2, CoLA, STS-B, and QQP), while its performance is significantly improved on 3 tasks, it also performs significantly worse on the other task (STS-B), which suggests that training instability in the low-resource regime is still an issue for prefix-tuning even with more trainable parameters.", "Besides, prefix-tuning (L = 50) still lags behind adapter or UNIPELT (AP) on 3 of the 4 tasks.", "Furthermore, the average performance of prefix-tuning (L = 50) on 8 tasks is even slightly worse than with L = 10, which indicates that increasing prefix length may not be a panacea for all the scenarios.", "A larger L also leads to significant training/inference slowdown due to the costly multi-head attention.", "More broadly, such results suggest that using more trainable parameters does not guarantee better performance.", "On the bright side, prefix-tuning performs well on certain tasks such as natural language inference (RTE and MNLI) with various sizes of training data, which suggests that one should also prefer prefix-tuning in certain cases.", "Analysis of BitFit & LoRA.", "Tuning only the bias terms of the model does not lead to very satisfactory results in our experiments: BitFit never performs the best and generally performs the worst in different data and task setups.", "Therefore, we do not consider BitFit in the following experiments and exclude BitFit as a submodule of UNIPELT.",
"As for LoRA, there are a few setups where LoRA fails to learn effectively as well, such as STS-B and QQP (K ∈ {100, 500}), leading to high variance across runs.", "(Tuning other hyperparameters like learning rate does not appear to alleviate the issue either.)", "Apart from that, LoRA performs quite competitively despite using fewer trainable parameters than methods like adapter, especially when K = 1000, achieving the best or 2nd-best performance on 4 of 8 tasks.", "As LoRA has a scaling factor α that can be seen as a static gating function under our formulation, we further investigate its importance by evaluating LoRA with different α.", "As shown in Fig. 3, LoRA is quite sensitive to the scaling factor and there seems to be no single optimal value that works well across multiple task and data setups.", "Such findings suggest that gating is critical and motivate us to use more fine-grained and dynamic control for UNIPELT.", "Besides, we observe that increasing α consistently results in faster convergence, possibly because the trainable parameters would receive larger gradient updates with a larger α.", "Next, we will turn to our proposed framework UNIPELT, which incorporates multiple existing PELT methods as submodules.", "Low-Resource Performance.", "Overall, UNIPELT (APL) and UNIPELT (AP) consistently achieve the best and second-best average performance on both the development and test sets regardless of the number of training samples.", "The gains are generally 1~4% over the submodule that performs the best (when used individually).", "Such results demonstrate the advantages of our hybrid approach regarding model effectiveness and generalizability.", "At the per-task level, UNIPELT (APL) and UNIPELT (AP) perform the best or second best on 7/6/7 of 8 tasks when trained with 100/500/1,000 samples, and never perform the worst in any setup.", "When comparing the two variants, UNIPELT (APL) outperforms UNIPELT (AP) on 4/6/8 of 8 tasks when trained with 100/500/1,000 samples.", "Such results indicate that UNIPELT is quite robust and performs reliably under different scenarios.", "The improvements of UNIPELT over its submodules are generally larger when having fewer training samples, suggesting that UNIPELT performs especially well in the low-resource regime.", "In particular, on the tasks where other PELT methods fail to learn effectively, such as CoLA and QQP (K = 100), UNIPELT manages to achieve performance better than fine-tuning.",
"UNIPELT vs. Upper Bound.", "In Table 2, we show the comparison of UNIPELT and the upper bound that takes the best performance of its submodules on each task.", "We observe that both UNIPELT (AP) and UNIPELT (APL) perform similarly or even better than their upper bound, which suggests that UNIPELT successfully learns to leverage different submodules and maintains (near) optimal performance under different setups.", "The fact that UNIPELT can outperform the upper bound also hints that a mixture of PELT methods (involving different parts of the PLM) might be inherently more effective than single methods (with a limited scope of the PLM architecture).", "High-Resource Performance.", "In Table 3, we list the performance of different methods when all training samples are used.", "UNIPELT again achieves the best overall performance.", "The gains are not as significant as in the low-resource setting, which is somewhat expected, as existing PELT methods typically perform on par with fine-tuning given abundant training data, and the potential for improvement is not as high.", "That said, the performance of UNIPELT is still the best or 2nd best on all 8 tasks, and generally comparable to the best submodule used individually on each task.", "Besides, simply combining multiple PELT methods without gating does not work well in the high-resource setting: although UNIPELT-NoGate never performs the worst on each task, its average performance is unsatisfactory (-0.89 vs. UNIPELT).", "We benchmark the efficiency of PELT methods and list in Table 4 their number of trainable parameters and training/inference time relative to fine-tuning.", "Parameter Efficiency.", "As the trainable parameters in PELT methods are almost negligible, combining multiple methods does not lead to significant losses in parameter efficiency.", "UNIPELT still has few trainable parameters compared to fine-tuning (0.99%~1.26%).", "The parameters can be further reduced if one uses more parameter-efficient variants (e.g., Karimi Mahabadi et al. (2021a)), which can be easily swapped with the vanilla version used in our current framework.", "Also, note that more trainable parameters do not always lead to better performance, as shown in our experiments and prior studies (He et al., 2021; Pfeiffer et al., 2021).", "Training and Inference Efficiency.", "Due to parameter efficiency, all PELT methods train 30%~50% faster than fine-tuning, and incorporating multiple PELT methods into UNIPELT does not suffer from slower training.", "On the other hand, the inference time of PELT methods is generally longer since they involve more FLOPs.", "UNIPELT has a slightly larger inference overhead (4%~11% compared to its slowest submodule), which we argue is insignificant since larger models that may achieve similar performance gains (e.g., BERT-large) need around 300% inference time (Wolf et al., 2020).",
"Parameter-Efficient Tuning of PLMs.", "As it is increasingly infeasible to train and store full copies of large PLMs for various downstream tasks, how to efficiently tune the PLMs with few trainable parameters becomes critical.", "Existing PELT methods can be largely divided into two categories based on whether new trainable parameters are introduced.", "Specifically, one may either train a subset of the model parameters such as the prediction head (Lee et al., 2019) and bias terms (Ben Zaken et al., 2021), or introduce task-specific parameters to different parts of the PLM such as before multi-head attention (Li and Liang, 2021) or after the feedforward layer (Houlsby et al., 2019).", "As the number of PELT methods keeps increasing, the purpose of UNIPELT is to better understand and leverage the distinctions of various methods instead of proposing yet another method.", "Mixture-of-Experts.", "UNIPELT is also related to approaches that involve a high-capacity network and activate (up-weight) different parts of the network given different inputs.", "One notable example is Mixture-of-Experts (MoE) (Jacobs et al., 1991; Shazeer et al., 2017), which maintains a set of experts (neural networks) and one or more trainable gates that select a combination of the experts specific to each input.", "Despite being conceptually similar, UNIPELT is different from MoE: the submodules in UNIPELT are not combined explicitly by summation like MoE, but in sequential order, and affect each other implicitly.", "Moreover, the experts are diverse in UNIPELT, while they are usually homogeneous or identical in MoE methods.", "In this paper, we present a comprehensive study of representative parameter-efficient language model tuning (PELT) methods and propose a unified framework, which incorporates different PELT methods as submodules and learns to activate the most appropriate submodules for a given task or data setup.", "Our proposed framework consistently outperforms conventional fine-tuning as well as the submodules that it incorporates under different setups, and generally surpasses the upper bound that takes the best performance of each submodule used individually on each task.", "Our findings suggest that a mixture of multiple PELT methods that involve different parts of the PLM may be favorable regarding both model effectiveness and robustness.", "For future work, we will try to better understand the discrepancy of various PELT methods in different scenarios.", "We also plan to investigate a multi-task setting where multiple submodules can be activated and cooperate at the task level.", "We thank Xiang Lisa Li, Hai Ye, Rabeeh Karimi Mahabadi, Junxian He, Yiqing Xie, Yaqing Wang, and Liyuan Liu for helpful discussions and feedback.", "We thank anonymous reviewers for valuable comments and suggestions." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "result", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "objective", "abstain", "abstain", "objective", "other", "other" ]
[ "Opinion entity extraction is a fundamental task in fine-grained opinion mining.", "Related studies generally extract aspects and/or opinion expressions without recognizing the relations between them.", "However, the relations are crucial for downstream tasks, including sentiment classification, opinion summarization, etc.", "In this paper, we explore A spect-O pinion P air E xtraction (AOPE) task, which aims at extracting aspects and opinion expressions in pairs.", "To deal with this task, we propose S ynchronous D ouble-channel R ecurrent N etwork (SDRN) mainly consisting of an opinion entity extraction unit, a relation detection unit, and a synchronization unit.", "The opinion entity extraction unit and the relation detection unit are developed as two channels to extract opinion entities and relations simultaneously.", "Furthermore, within the synchronization unit, we design E ntity S ynchronization M echanism (ESM) and R elation S ynchronization M echanism (RSM) to enhance the mutual benefit on the above two channels.", "To verify the performance of SDRN, we manually build three datasets based on SemEval 2014 and 2015 benchmarks.", "Extensive experiments demonstrate that SDRN achieves state-of-the-art performances.", "Opinion entity extraction, which aims at identifying aspects and/or opinion expressions in review sentences, is an important task in fine-grained opinion mining.", "Recently, there have been considerable studies focused on this task.", "Specifically, Liu et al. (2012), Li and Lam (2017) and Li et al. (2018) explored aspect term extraction, and Fan et al. (2019) extracted opinion phrases with given aspects.", "Meanwhile, many studies dealt with aspect and opinion Corresponding author.", "term co-extraction (Xu et al., 2013; Liu et al., 2015; Wang et al., 2017; Yu et al., 2019; Wang and Pan, 2019; Dai and Song, 2019).", "These studies have shown the importance of opinion entity extraction and achieved great progress.", "However, they neglect to recognize the relations between aspects and opinion expressions.", "While aspect-opinion relation detection is one of the key parts of an opinion mining system (Hu and Liu, 2004; Popescu and Etzioni, 2005; Zhuang et al., 2006), it is neglected or assumed given beforehand, which leaves a significant gap to subsequent opinion mining tasks.", "For instance, as shown in Figure 1, we can obtain the aspect { food } and the opinion expressions { nice-looking, delicious } from opinion entity extraction.", "Although both nice-looking and delicious express positive sentiment, they further describe food from the appearance and taste perspectives, respectively.", "Therefore, only with the relations between aspects and opinion expressions, e.g., the pair (cid:104) food , delicious (cid:105) , can the more fine-grained subsequent tasks be executed, such as pair-level sentiment classification, pair-level opinion clustering, etc.", "To bridge the gap between opinion entity extraction and subsequent tasks, we explore Aspect-Opinion Pair Extraction (AOPE) task, which aims at extracting aspects and opinion expressions along with their relations.", "Specially, AOPE is not only necessary for subsequent tasks, but also beneficial to both opinion entity extraction and relation detection.", "However, the studies on AOPE are very limited.", "Early works (Hu and Liu, 2004; Zhuang et al., 2006) approach aspect-opinion pair extraction in a pipeline manner by dividing it into two isolated tasks.", "Yang and Cardie (2013), Klinger and Cimiano (2013b) and 
Katiyar and Cardie (2016) attempted to extract opinion entities and relations jointly without considering the interaction between opinion entity extraction and relation detection, which limits the performance.", "Therefore, AOPE remains a rather challenging task.", "First, the relational structure of aspects and opinion expressions within a sentence can be complicated, requiring the model to be effective and flexible in detecting relations.", "For example, the relations can be one-to-many, many-to-one, and even embedded or overlapped.", "Second, opinion entity extraction and relation detection are not two independent tasks as in other multitask learning problems but rely on each other, hence posing a key challenge on how to fuse and learn the two subtasks properly.", "Third, how to synchronize opinion entity extraction with relation detection and make them mutually beneficial is another primary challenge.", "To address the aforementioned challenges, we propose the Synchronous Double-channel Recurrent Network (SDRN).", "Specifically, we first utilize Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) to learn context representations.", "Then, the double-channel recurrent network, which consists of an opinion entity extraction unit and a relation detection unit, is constructed to extract aspects, opinion expressions, and relations simultaneously.", "To enable the information interaction between the above two channels, we design a synchronization unit which contains the Entity Synchronization Mechanism (ESM) and the Relation Synchronization Mechanism (RSM).", "Extensive experiments verify that our model achieves state-of-the-art performances.", "In summary, our contributions are three-fold: We explore the AOPE task, which is valuable and critical for downstream tasks but remains under-investigated.", "We propose an end-to-end neural model, SDRN (code available at https://github.com/NKU-IIPLab/SDRN).", "By adopting BERT as the encoding layer, SDRN can learn richer context semantics.", "By designing the double-channel network and two synchronization mechanisms, SDRN could process opinion entity extraction and relation detection jointly and make them mutually beneficial.", "We manually build three datasets based on SemEval 2014 and 2015 benchmarks for the AOPE task.", "Extensive experiments are conducted to verify that our model achieves state-of-the-art performances.", "Aspect-opinion pair extraction is a critical task in fine-grained opinion mining.", "Early studies approach this task in a pipeline manner.", "Hu and Liu (2004) used association mining to identify aspects and extract the adjacent adjectives as opinions.", "Zhuang et al.
(2006) extracted aspects and opinion expressions first, and then mined the relations with dependency relation templates.", "Popescu and Etzioni (2005) proposed an unsupervised model to extract aspects and corresponding opinions from reviews with pre-defined rules.", "Although the above methods achieved great progress, they generally suffered from error propagation.", "To avoid error propagation, recent studies propose joint learning methods.", "Klinger and Cimiano (2013a) adopted an Imperatively Defined Factor graph (IDF) to analyze the inter-dependencies between aspects and opinion expressions.", "Klinger and Cimiano (2013b) presented a joint inference model based on IDF to extract aspect terms, opinion terms, and their relations.", "Yang and Cardie (2013) employed Integer Linear Programming (ILP) to identify opinion-related entities and their associated relations jointly.", "However, these works were generally based on shallow machine learning methods and depended on hand-crafted features.", "To automatically capture features, neural network methods have been applied to various fine-grained opinion mining tasks.", "Xu et al. (2018) used a Convolutional Neural Network (CNN) to extract aspects.", "Wang et al. (2016), Wang et al. (2017), Yu et al. (2019) and Wang and Pan (2019) used deep learning methods to deal with aspect and opinion term co-extraction.", "Li et al. (2018) focused on aspect term extraction and adopted an attention mechanism to exploit the latent relations between aspect and opinion terms.", "Hu et al. (2019) used BERT to extract aspects and corresponding sentiments.", "For AOPE, Katiyar and Cardie (2016) explored LSTM-based models to jointly extract opinion entities and their relations with three optimization methods.", "But this method neglects to learn the interaction between opinion entity extraction and relation detection.", "Given a review sentence S, the Aspect-Opinion Pair Extraction (AOPE) task aims to obtain a collection of aspect-opinion pairs C = [⟨a_m, o_m⟩]_{m=1}^M from S, where a_m and o_m represent the aspect and the opinion expression, respectively.", "To deal with the AOPE task, we propose the Synchronous Double-channel Recurrent Network (SDRN).", "The overall framework of SDRN is illustrated in Figure 2.",
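For concreteness, the expected input and output of AOPE can be illustrated as follows (a hypothetical example consistent with the Figure 1 discussion above, not taken from the datasets):

```python
# Input: one review sentence S.
sentence = "The food is nice-looking and delicious."

# Output: the collection C of aspect-opinion pairs <a_m, o_m>.
pairs = [("food", "nice-looking"), ("food", "delicious")]
```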
"Specifically, we first adopt BERT as the encoding layer to learn the context representations.", "Then, an opinion entity extraction unit and a relation detection unit are constructed as double channels to extract aspects, opinion expressions, and relations simultaneously.", "Furthermore, a synchronization unit is designed to enable information interaction between the double channels.", "To capture high-level representations, we recurrently execute the above units.", "After multiple recurrent steps, we adopt an inference layer to obtain aspect-opinion pairs.", "Given a review sentence S, we first tokenize it using the WordPiece vocabulary (Wu et al., 2016) and add tokens [CLS] and [SEP] to the beginning and the end of the tokenized sentence, respectively.", "As a result, we obtain the input sequence X = {x_1, x_2, ..., x_N} with N tokens for each sentence.", "Inspired by the success of BERT (Devlin et al., 2019), we adopt it as the encoder to learn the contextual semantics.", "For each token x_i, the initial embedding e_i is constructed by summing the corresponding token embedding e_i^w, segment embedding e_i^s, and position embedding e_i^p.", "Then, the embedding sequence E = {e_1, e_2, ..., e_N} is fed into BERT, which consists of stacked Transformer blocks with multiple self-attention heads (Vaswani et al., 2017).", "We take the output of the last Transformer block as the context representation sequence H^s = {h_1^s, h_2^s, ..., h_N^s}.", "The opinion entity extraction unit, which aims at extracting the aspects and the opinion expressions, is developed as a channel of SDRN.", "To deal with this sequence labeling task, we couple a Conditional Random Field (CRF) (Lafferty et al., 2001) upon the encoding layer, which serves as the opinion entity extraction unit.", "Formally, CRF adopts a state score matrix P ∈ R^(N×K) to model the mappings between tokens and labels, and a transition score matrix Q ∈ R^(K×K) to model the relations between adjacent labels, where K denotes the dimension of the label space.", "Following the BIO tagging scheme, we define five labels: BA (beginning of aspect), IA (inside of aspect), BP (beginning of opinion expression), IP (inside of opinion expression), and O (others).", "For a sequence of predicted labels Y^t = {y_1^t, y_2^t, ..., y_N^t} at the t-th recurrent step, we define its score as follows: S(X, Y^t) = Σ_{i=1}^N Q_{y_{i-1}^t, y_i^t} + Σ_{i=1}^N P_{i, y_i^t}^t, (1) with P^t = H_t^o W_p + b_p, (2) where H_t^o = {h_{t,1}^o, h_{t,2}^o, ..., h_{t,N}^o} denotes the input hidden representation sequence at the t-th recurrent step for the opinion entity extraction unit, which is calculated with the context representation sequence H^s and the relation synchronization semantics R_{t-1}.", "The details will be described in Section 3.3.2.", "The matrices W_p ∈ R^(d_o×K) and b_p ∈ R^(N×K) are model parameters, where d_o denotes the dimension of the hidden representation h_{t,i}^o.", "Then, the probability of the predicted sequence Y^t can be calculated as follows: p(Y^t | X) = exp(S(X, Y^t)) / Σ_{Ỹ^t ∈ Y_X^t} exp(S(X, Ỹ^t)), (3) where Y_X^t denotes all possible label sequences.", "During training, we maximize the likelihood probability p(Y | X) of the gold label sequence at the last step.", "During decoding, we use the Viterbi algorithm to find the label sequence with the maximum score.",
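A small illustration of the CRF scoring in Eqs. (1)-(3) follows (an independent sketch rather than the authors' implementation; the BIO label set is the one defined above):

```python
import torch

LABELS = ["BA", "IA", "BP", "IP", "O"]
K = len(LABELS)

def sequence_score(P, Q, y):
    """Eq. (1): sum of transition scores Q[y_{i-1}, y_i] plus state
    scores P[i, y_i] for a label sequence y (start transition omitted)."""
    trans = sum(Q[y[i - 1], y[i]] for i in range(1, len(y)))
    state = sum(P[i, y[i]] for i in range(len(y)))
    return trans + state

N = 6
P = torch.randn(N, K)   # state scores, Eq. (2): P^t = H_t^o W_p + b_p
Q = torch.randn(K, K)   # transition scores between adjacent labels
y = [LABELS.index(l) for l in ["O", "BA", "IA", "O", "BP", "O"]]
print(sequence_score(P, Q, y))
```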
"To extract opinion entities and relations simultaneously, we design a relation detection unit as another channel of SDRN.", "Considering the complicated relations between aspects and opinion expressions, we devise a supervised self-attention mechanism as the relation detection unit to flexibly model token-level relations without the sequential limitation.", "At the t-th recurrent step, we first compute the attention matrix G^t ∈ R^(N×N), whose element g_{i,j}^t represents the degree of correlation between the i-th token and the j-th token, as follows: g_{i,j}^t = exp(σ(h_{t,i}^r, h_{t,j}^r)) / Σ_{k=1}^N exp(σ(h_{t,i}^r, h_{t,k}^r)), (4) σ(h_{t,i}^r, h_{t,j}^r) = tanh(h_{t,i}^r W_r^1 + h_{t,j}^r W_r^2) W_r^3, (5) where σ is a score function, and h_{t,i}^r denotes the input hidden representation of the i-th token for the relation detection unit.", "Note that the hidden representation sequence H_t^r = {h_{t,1}^r, h_{t,2}^r, ..., h_{t,N}^r} is calculated with the context representation sequence H^s and the entity synchronization semantics U_{t-1}.", "The details will be described in Section 3.3.1.", "The matrices W_r^1 ∈ R^(d_r×d_r), W_r^2 ∈ R^(d_r×d_r), and W_r^3 ∈ R^(d_r×1) are model parameters, where d_r is the dimension of the hidden representation h_{t,i}^r.", "At the last step T, we further introduce supervision information into the calculation of the attention matrix G^T by maximizing the likelihood probability as follows: p(Z | X) = Π_{i=1}^N Π_{j=1}^N p(z_{i,j} | x_i, x_j), (6) where the standard relation matrix Z ∈ R^(N×N) consists of elements z_{i,j}, and the relation probability can be calculated as follows: p(z_{i,j} | x_i, x_j) = g_{i,j}^T if z_{i,j} = 1, and 1 - g_{i,j}^T if z_{i,j} = 0, (7) where z_{i,j} = 1 denotes the fact that there is a relation between the i-th token and the j-th token, and vice versa.", "With this supervision information, the attention can be guided to capture the correlations between the tokens more effectively.",
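The additive-style scoring of Eq. (5) followed by the row-wise softmax of Eq. (4) can be sketched as follows (our reading of the formulas, with d_r = 250 as in the reported hyper-parameters; not the released code):

```python
import torch
import torch.nn as nn

class RelationAttention(nn.Module):
    """Token-token attention of Eqs. (4)-(5): score(i, j) =
    tanh(h_i W1 + h_j W2) W3, normalized over j with a softmax."""
    def __init__(self, d_r=250):
        super().__init__()
        self.w1 = nn.Linear(d_r, d_r, bias=False)
        self.w2 = nn.Linear(d_r, d_r, bias=False)
        self.w3 = nn.Linear(d_r, 1, bias=False)

    def forward(self, h_r):                       # h_r: (N, d_r)
        scores = self.w3(torch.tanh(
            self.w1(h_r).unsqueeze(1) + self.w2(h_r).unsqueeze(0)
        )).squeeze(-1)                            # (N, N) score matrix
        return torch.softmax(scores, dim=-1)      # G^t: each row sums to 1

g = RelationAttention()(torch.randn(6, 250))
print(g.shape, g[0].sum())   # torch.Size([6, 6]) and a row-sum of 1
```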
"Since the above two channels are interdependent, it is important to synchronize their information and make them mutually beneficial.", "To this end, we design the Entity Synchronization Mechanism (ESM) and the Relation Synchronization Mechanism (RSM) to update the hidden representation sequences H_t^o and H_t^r by exchanging the high-level information.", "Considering that opinion entities are generally phrases, both opinion entity semantics and token-level interactions are crucial in detecting relations.", "For instance, given an aspect 'hot dog' and an opinion expression 'tasty', there is no relation between 'hot' and 'tasty' when only token-level interaction is considered, but it is easy to detect the relation if we utilize the semantics of the aspect 'hot dog'.", "Accordingly, we design ESM to capture the corresponding entity semantics for each token and integrate these semantics into the hidden representation sequence H_{t+1}^r.", "Specifically, based on the predicted label sequence Y^t and its probability obtained from the opinion entity extraction unit, each entity semantics u_{t,i} of the i-th token at the t-th recurrent step can be calculated as follows: u_{t,i} = Σ_{j=1}^N ω(B_{i,j}^t) h_j^s, (8) ω(B_{i,j}^t) = B_{i,j}^t / Σ_{k=1}^N B_{i,k}^t, (9) where B_{i,j}^t is the label probability of the j-th token if the i-th token and the j-th token belong to the same entity; otherwise, B_{i,j}^t is zero.", "And ω(·) is a normalization function.", "To integrate both the context representation h_i^s and the entity semantics u_{t,i}, we calculate the hidden representation h_{t+1,i}^r as follows: h_{t+1,i}^r = φ(u_{t,i} W_r^4 + h_i^s W_r^5), (10) where W_r^4 ∈ R^(d_s×d_r) and W_r^5 ∈ R^(d_s×d_r) are model parameters, d_s is the dimension of the context representation, and φ is the activation function, which can be the tanh or sigmoid function.", "Note that we use a zero matrix to initialize the entity semantics sequence U_0 = {u_{0,1}, u_{0,2}, ..., u_{0,N}}.", "Since the relations between opinion entities can provide clues for opinion entity extraction, it is important to encode the relation semantics.", "For example, if 'overrated' is used to modify 'pizza', this relation could provide guidance to extract the aspect 'pizza' and the opinion expression 'overrated'.", "Thus, we design RSM to capture the semantics which reflect the relations and update the hidden representation sequence H_{t+1}^o.", "Concretely, at the t-th recurrent step, we can calculate the relation semantics r_{t,i} of the i-th token with the correlated degree g_{i,j}^t from the relation detection unit: r_{t,i} = Σ_{j=1}^N ω(τ(g_{i,j}^t)) h_j^s, (11) τ(g_{i,j}^t) = g_{i,j}^t if g_{i,j}^t ≥ θ, and 0 if g_{i,j}^t < θ, (12) where ω(·) is the same normalization function as in Eq. (9).", "To avoid noise, we utilize τ(·) to filter correlated scores below the given threshold θ.", "Then, we combine the relation semantics r_{t,i} and the context representation h_i^s to obtain the hidden representation h_{t+1,i}^o: h_{t+1,i}^o = φ(r_{t,i} W_o^1 + h_i^s W_o^2), (13) where W_o^1 ∈ R^(d_s×d_o) and W_o^2 ∈ R^(d_s×d_o) are model parameters.", "Similar to ESM, the initial relation semantics sequence R_0 = {r_{0,1}, r_{0,2}, ..., r_{0,N}} is set to zero.", "Particularly, the integration methods used in ESM and RSM also make the proposed SDRN easy to optimize, similar to shortcut connections (He et al., 2016).", "To synchronously learn the proposed two channels, we fuse the loss functions from the two channels.", "For the opinion entity extraction unit, given the gold label sequence Y, we minimize the negative log-likelihood loss function at the last step as follows: L_E = log Σ_{Ỹ ∈ Y_X^T} exp(S(X, Ỹ)) - S(X, Y). (14)", "For the relation detection unit, we convert the gold annotation to a one-hot matrix, where 0 denotes no relation and 1 represents the existence of a relation between two tokens.", "Then, we minimize the cross-entropy loss between the predicted distribution p(z_{i,j} | x_i, x_j) at the last step and the gold distribution p*(z_{i,j} | x_i, x_j) as follows: L_R = -Σ_{i=1}^N Σ_{j=1}^N p*(z_{i,j} | x_i, x_j) log[p(z_{i,j} | x_i, x_j)]. (15)", "Then, the two parts are combined to construct the loss objective of the entire model: L(Θ) = L_E + L_R. (16)", "The optimization problem in Eq. (16) can be solved by using any gradient descent method.", "In this paper, we adopt the BERTAdam method.", "Because SDRN synchronously processes opinion entity extraction and relation detection, an inference layer is introduced to generate aspect-opinion pairs based on the results of the two channels.", "With the label sequence Y^T predicted by the opinion entity extraction unit at the last recurrent step, we can obtain the aspect set A = {a_1, a_2, ..., a_{l_A}} with l_A aspects and the opinion set O = {o_1, o_2, ..., o_{l_O}} with l_O opinion expressions.", "Then, the relations between aspects and opinion expressions can be calculated according to the weight matrix G^T from the relation detection unit.", "For instance, given an aspect a = {x_{i_aS}, ..., x_{i_aE}} and an opinion expression o = {x_{i_oS}, ..., x_{i_oE}}, the correlated degree γ between them can be calculated as follows: γ = (1/2) [ (1/|a|) Σ_{k=i_aS}^{i_aE} Σ_{l=i_oS}^{i_oE} g_{k,l} + (1/|o|) Σ_{l=i_oS}^{i_oE} Σ_{k=i_aS}^{i_aE} g_{l,k} ], (17) where |a| and |o| denote the lengths of the aspect and the opinion expression.", "The pair ⟨a, o⟩ is extracted only if γ is higher than a given threshold δ.",
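A sketch of the pair scoring and thresholding in Eq. (17) follows. This is an illustration of our reading; γ and δ name quantities whose original symbols were lost in extraction, and δ = 0.5 follows the reported hyper-parameters.

```python
def pair_score(G, a_span, o_span):
    """Eq. (17): average attention mass between an aspect span and an
    opinion span, symmetrized over both attention directions."""
    (a_s, a_e), (o_s, o_e) = a_span, o_span        # inclusive token indices
    a_to_o = sum(G[k][l] for k in range(a_s, a_e + 1)
                 for l in range(o_s, o_e + 1)) / (a_e - a_s + 1)
    o_to_a = sum(G[l][k] for l in range(o_s, o_e + 1)
                 for k in range(a_s, a_e + 1)) / (o_e - o_s + 1)
    return 0.5 * (a_to_o + o_to_a)

def extract_pairs(G, aspects, opinions, delta=0.5):
    """Keep every aspect-opinion pair whose correlated degree exceeds delta."""
    return [(a, o) for a in aspects for o in opinions
            if pair_score(G, a, o) > delta]

G = [[0.1, 0.9], [0.8, 0.2]]          # toy 2-token attention matrix G^T
print(extract_pairs(G, aspects=[(0, 0)], opinions=[(1, 1)]))
```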
= (cid:8) x i aS , ..., x i aE (cid:9) and an opinion expression o = (cid:8) x i oS , ..., x i oE (cid:9) , the correlated degree between them can be calculated as follows: = 1 2 1 | a | i aE (cid:88) k = i aS i oE (cid:88) l = i oS g k,l + 1 | o | i oE (cid:88) l = i oS i aE (cid:88) k = i aS g l,k , (17) where | a | and | o | denote the length of aspect and opinion expression.", "The pair (cid:104) a, o (cid:105) is extracted only if is higher than a given threshold .", "To evaluate the effectiveness of SDRN, we conduct extensive experiments on five benchmark datasets from SemEval 2014 4 (Pontiki et al., 2014), SemEval 2015 5 (Pontiki et al., 2015), MPQA version 2.0 corpus 6 (Wiebe et al., 2005), and J.D. Power and Associates Sentiment Corpora 7 (JDPA) (Kessler et al., 2010).", "The statistics of these benchmark datasets are shown in Table 1. For SemEval 2014 and 2015 datasets, we manually build relations between aspects and opinion expressions because the original datasets only contain the gold standard annotation for aspects.", "Note that we follow the annotations for opinion expressions provided by Wang et al. (2016) and Wang et al. (2017).", "We adopt the BERT BASE8 model, which consists of 12 Transformer blocks with 12 self-attention heads, as the encoding layer of SDRN.", "The dimensions of both the embeddings and the context representation in BERTBASE are 768.", "To enhance the information interaction between the double channels, we set the recurrent step to 2. During training, we use the BERTAdam optimizer with 0.1 warmup rate.", "The learning rate is set to 2e-5 and 0.001 for fine-tuning BERT and training our model, respectively.", "Meanwhile, we set the batch size to 10 and the dropout rate to 0.5.", "With the cross-validation, other hyper-parameters are set as follows: d o = 250 , d r = 250 , = 0 .", "1 , and = 0 .", "5 .", "4 http://alt.qcri.org/semeval2014/task4/ 5 http://alt.qcri.org/semeval2015/task12/ 6 http://www.cs.pitt.edu/mpqa/ 7 http://verbs.colorado.edu/jdpacorpus/ 8 https://github.com/google-research/bert 4.3 Evaluation We use F 1 -score to evaluate the performance of SDRN.", "We consider a predicted aspect-opinion pair is correct if the gold standard annotations contain a pair the same as the prediction.", "Besides, following Katiyar and Cardie (2016), we report Binary Overlap F 1 -score for MPQA dataset.", "To achieve the comprehensive and comparative analysis of SDRN, we compare it with two kinds of models, including Pipeline methods 9 and Joint methods.", "For Pipeline methods, we first select five advanced extraction models to recognize opinion entities.", "Then, we train the relation detection unit (RD) separated from SDRN with BERT to detect relations.", "The details about RD are described in Section 3.2.2.", "The outputs of the extraction models are fed into the RD model to predict relations and obtain aspect-opinion pairs.", "The details of the five extraction models are described as follows: HAST (Li et al., 2018) exploits two useful clues, namely opinion summary and aspect detection history, to extract the aspects with the help of opinion information.", "Note that HAST can also extract aspects and opinion expressions simultaneously.", "DE-CNN (Xu et al., 2018) is a simple but outstanding CNN model employing two types of pre-trained embeddings, including general-purpose and domain-specific embeddings.", "We trained two DE-CNN models for aspect and opinion expression extraction, respectively.", "IMN (He et al., 2019) is an interactive multitask learning 
"SPAN (Hu et al., 2019) is a span-based extraction framework based on BERT.", "We trained two SPAN models for aspect and opinion expression extraction, respectively.", "Note that the Pipeline models are denoted in the form '{*}+{#}', where '*' is the opinion entity extraction method and '#' is the relation detection method.", "To further verify the performance of SDRN, we also compare it with Joint models: IDF (Klinger and Cimiano, 2013b), CRF+ILP (Yang and Cardie, 2013), and LSTM+SLL+RLL (Katiyar and Cardie, 2016).", "The details can be found in Section 2.", "We demonstrate and analyze the experimental results to answer the following research questions: How does SDRN perform compared with the baselines on the AOPE task?", "Can the performance of the opinion entity extraction subtask be improved by joint learning with relation detection?", "Does the synchronization unit promote information interaction and further enhance the joint learning?", "The comparison results of aspect-opinion pair extraction are shown in Table 2 and Table 3.", "According to the results, SDRN consistently obtains state-of-the-art performance on all five datasets.", "Compared to the best pipeline model, SDRN outperforms SPAN+RD by 2.31%, 1.14% and 3.39% on 14-Res, 14-Lap and 15-Res, respectively.", "This indicates that the joint model can effectively avoid the error propagation caused by pipeline models.", "Furthermore, SPAN+RD outperforms the other baselines, which shows that BERT can capture rich context representations.", "Besides, HAST+RD, IMN+RD and RINANTE+RD, which utilize aspect and opinion term co-extraction models, achieve better performance than DE-CNN+RD.", "This shows that considering latent relations between aspects and opinion expressions during the extraction phase is helpful for relation detection.", "We also compare SDRN with joint models on the JDPA and MPQA datasets, and the results are reported using 10-fold cross-validation.", "According to Table 3, our model brings significant improvements without any hand-crafted features.", "Particularly, for pair extraction, the results of IDF (Joint) are 7.4% and 10.5% inferior to IDF (Pipeline) on the JDPA Camera and JDPA Car datasets.", "This illustrates that joint models may be worse than pipeline models without adequate information interaction between opinion entity extraction and relation detection.", "Although our task aims to identify aspect-opinion pairs, it is interesting to investigate the performance of opinion entity extraction on its own.", "Hence, we compare SDRN with representative aspect and opinion expression extraction methods.", "The results are shown in Table 4.",
"It is clearly shown that SDRN achieves state-of-the-art results on three datasets, which proves that opinion entity extraction can be significantly improved by joint training with relation detection.", "Besides, the aspect and opinion term co-extraction models are generally superior to aspect term extraction models, which demonstrates that jointly extracting aspects and opinion expressions benefits both.", "HAST and SPAN are special cases of aspect term extraction models, because HAST extracts aspects with the help of opinion semantics, and SPAN adopts BERT as the backbone model.", "To investigate the efficacy of the synchronization unit composed of ESM and RSM, we perform an ablation study and list the results in the second block of Table 2.", "Concretely, for 'SDRN w/o ESM', we drop ESM and simply update the relation hidden representation $H^r_t$ via a fully-connected layer.", "Similarly, 'SDRN w/o RSM' drops RSM and adopts a fully-connected layer to update the entity hidden representation $H^o_t$.", "For 'SDRN w/o ESM&RSM', we apply both of the above operations simultaneously.", "Table 4: Experimental results of opinion entity extraction (F1-score, %).", "A and O represent aspect extraction and opinion expression extraction, respectively.", "The marked methods are aspect and opinion term co-extraction models, and the others are aspect term extraction models.", "The results marked with '*' are reproduced by us, and the others are copied from the corresponding papers.", "Note that the improvements over the baselines are significant ($p < 0.05$).", "Compared with the Pipeline models, 'SDRN w/o ESM&RSM' is less competitive, which demonstrates that joint learning alone is not superior to the pipeline manner.", "By utilizing ESM or RSM, the performance is improved, which shows that either ESM or RSM is helpful.", "Specifically, the contribution of ESM is slightly larger than that of RSM.", "Moreover, with the two synchronization mechanisms, SDRN surpasses all the baselines.", "In Figure 3(a), we verify the convergence of SDRN.", "The result shows that our model generally achieves convergence within around 15 epochs.", "Besides, we present the performance of SDRN with different numbers of recurrent steps in Figure 3(b).", "It can be observed that the performance of SDRN increases first and then becomes steady or declines slightly as the number of steps increases.", "For 15-Res, the limited amount of training data may be the cause of the performance decline.", "The best results are generally obtained with two steps on the three datasets, indicating that two recurrent steps are enough for SDRN to exploit the interaction information.", "To verify the relation detection capability of SDRN, we visualize the attention scores in Figure 4 (for example, for the review 'I left the Four Seasons very disappointed.').", "It is shown that SDRN can accurately capture the relations between aspects and opinion expressions, even for complex reviews.", "To clearly analyze the effect of the joint learning and the synchronization unit, some predictions of SDRN, 'SDRN w/o ESM&RSM' and SPAN+RD are listed in Table 5.",
"It can be concluded that SPAN+RD suffers from the problem of error propagation.", "For example, it divides 'selection of food' into 'selection' and 'food' in Review #2, and misses 'laid-back' in Review #3.", "In the pipeline manner, it is impossible to obtain a correct pair once an entity is extracted incorrectly in the first step.", "Due to the lack of information interaction, 'SDRN w/o ESM&RSM' generally suffers from relation detection errors when the relations are complex.", "For example, it extracts the erroneous pair (receiver, superlatives) in Review #1, and fails to detect the relation between 'decor' and 'laid-back' in Review #3.", "In contrast, our model can effectively avoid the above problems.", "In this paper, we explored the Aspect-Opinion Pair Extraction (AOPE) task and proposed the Synchronous Double-channel Recurrent Network (SDRN).", "Specifically, the opinion entity extraction unit and the relation detection unit are designed to extract aspects, opinion expressions, and their relations simultaneously.", "The two units update themselves in a recurrent manner, forming two channels.", "Meanwhile, the synchronization unit is devised to integrate high-level interaction information and enable mutual benefit between opinion entity extraction and relation detection.", "Extensive experiments showed that our model achieves state-of-the-art performance.", "This research is supported by the National Natural Science Foundation of China under grant No. 61976119, the Natural Science Foundation of Tianjin under grant No. 18JCYBJC15800, and the Major Program of Science and Technology of Tianjin under grant No. 18ZXZNGX00310." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "result", "objective", "objective", "objective", "objective", "method", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "other", "method", "method", "other", "other", "other", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "other", "abstain", "method", "method", "abstain", "abstain", "method", "other", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "other" ]
[ "Abstractive summarisation is notoriously hard to evaluate since standard word-overlap-based metrics are biased towards specific words in the human reference.", "We introduce a new evaluation metric which abstracts away from the word-level and instead is based on fact-level content weighting, i.e. relating the facts of the document to the facts of the summary.", "We follow the assumption that a good summary will reflect all relevant facts, i.e. the ones present in the ground truth (human-generated reference summary).", "We confirm this hypothesis by showing that our weightings are highly correlated to human perception and compare favourably to the recent manual highlight-based metric of Hardy et al. (2019).", "Text summarisation compresses long textual documents into short summaries while retaining the most important information from the source.", "In contrast to extractive summarisation, which directly copies the most relevant fragments, abstractive summarization retains the most important facts and expresses them via paraphrasing, aggregating and even inferring new facts.", "Recent advances in neural decoders led to a number of single-document summarisation systems that exhibit some level of abstraction in their outputs, usually in the simplest form of paraphrasing (See et al. (2017); Narayan et al. (2018); Liu and Lapata (2019), inter alia ).", "Evaluating abstractive summarisation remains an open challenge (Schluter, 2017; Kryscinski et al., 2019): First, decoders are amenable to pathoge-niessuch as hallucination and/or omission of important information, which are hard to capture using existing evaluation metrics (Cao et al., 2018; Rohrbach et al., 2018; Dusek et al., 2020).", "Second, most datasets used for abstractive summarisation only contain a single reference summary, e.g. (Narayan et al., 2018; Volske et al., 2017), which most existing automatic metrics evaluate against, e.g. ROUGE using exact n-gram overlap (Lin, 2004), and thus tend to downvote paraphrases.", "We propose a new evaluation metric based on content weighting, where we abstract away from the particular surface form of the target summary, but represent it as facts using Semantic Role Labelling (SRL).", "In this way, we aim to better capture the semantic correctness of a summary, i.e. be more sensitive to hallucinations and omissions.", "1 In particular, we weight the facts present in the source document according to the facts selected by a human-written summary.", "This alignment is conducted using contextual, rather than token-level, embeddings, e.g., BERT (Devlin et al., 2019).", "For evaluation, we measure whether an automatically generated summary is able to capture the same facts as the target.", "We also show that the computed weights correlate well with human perception.", "Our code is available at https://github.", "com/XinnuoXu/CorrFA_for_Summarizaion .", "The problem of reference bias has been addressed in several ways.", "First, metrics based on token-level or wider context embedding similarities which aim to better capture paraphrases but remain largely word-oriented, e.g. (Sun and Nenkova, 2019; Zhang et al., 2019; Zhao et al., 2019; Clark et al., 2019).", "Goodrich et al. 
"An alternative is manual evaluation against the source document.", "This entails selecting content either using domain experts, e.g., the PYRAMID method (Nenkova and Passonneau, 2004) or factoids (Teufel and van Halteren, 2004), or via crowdsourcing (Shapira et al., 2019; Hardy et al., 2019).", "(Note that we do not make any claims about fluency, which we assume is less of a problem for neural text generation.)", "Figure 1: List of SRL propositions and the corresponding tree MR with two facts for the sentence 'The queen has tweeted her thanks to people who sent her 90th birthday messages on social media.' FACT1-tweet: [ARG0: the queen] has [V: tweeted] [ARG1: her thanks] [ARG2: to people who sent her 90th birthday messages on social media]; FACT2-send: the queen has tweeted her thanks to [ARG0: people] [R-ARG0: who] [V: sent] [ARG1: her 90th birthday messages] [ARGM-LOC: on social media].", "However, evaluation based on a small human-labelled test set is noisy, time-consuming, and costly.", "Xenouleas et al. (2019) propose a referenceless metric, which only checks properties of the summary, not its relation to the original document.", "Sun and Nenkova (2019) compare average token and sentence ELMo embeddings against the document and claim good (system-level) correlations.", "Another option to avoid reference bias is question-based evaluation, either elicited manually (Clarke and Lapata, 2010; Narayan et al., 2018) or automatically (Scialom et al., 2019).", "However, it requires reference summaries as a basis for generating questions, thus checking the summary contents only indirectly.", "We represent facts in a sentence by adapting SRL (Palmer et al., 2005), which roughly captures 'who did what to whom' in terms of predicates and their arguments.", "Given a list of parsed propositions for a sentence, each predicate-argument structure is considered as one separate fact, where the predicate stands for the event and its arguments are mapped to actors, recipients, time, place, etc. (see Fig. 1).", "(We use the SRL implementation of He et al. (2018) found in https://allennlp.org, with 86.49 test F1 on the Ontonotes 5.0 dataset.)", "Following a simple observation that arguments can function as separate predicates themselves, we construct a hierarchical tree structure for the whole sentence.", "We create the tree meaning representation (MR) from the list of facts by choosing the fact with the largest coverage as the root and recursively building sub-trees by replacing arguments with their corresponding sub-facts (ARG2 in FACT1 is replaced by FACT2 in Fig. 1).", "(We avoid using sentence-level MRs such as AMR (Banarescu et al., 2013), since the current state-of-the-art performance of parsers is far behind that on the simpler SRL task.)", "We compute argument and fact weights by measuring the similarity of facts/arguments in the original document and the target summary, based on their BERT word embeddings (for content words only) and their distance in the tree MR.",
"We denote tokens of a document $D$ and its summary $S$ as $t^D = \{t^D_1, t^D_2, \ldots, t^D_n\}$ and $t^S = \{t^S_1, t^S_2, \ldots, t^S_m\}$.", "To get their corresponding contextual embeddings $e^D_k$ and $e^S_k$, we concatenate the two texts, feed them into a pre-trained BERT model (Devlin et al., 2019), and take the contextualized embedding output from its last Transformer layer.", "(By concatenating, the information in each text can be embedded in each other through self-attention; this is useful since the summary sometimes contains additional and/or common-sense knowledge not captured in the document.)", "Argument-based weighting: We first represent the summary and the document as two sequences of leaf arguments $\{A^D_1, A^D_2, \ldots, A^D_N\}$ and $\{A^S_1, A^S_2, \ldots, A^S_M\}$, respectively, and weight the $i$-th leaf argument in the document as $w^a_i = \mathrm{avg}_{j=1 \ldots M}\, \mathrm{cosdist}(E^D_i, E^S_j)$ (1), i.e. the average embedding cosine distance to all arguments in the summary.", "(For example, in Fig. 1, ARG0, V, and ARG1 in FACT1, and all the arguments in FACT2, are leaf arguments in the sentence, whereas ARG2 in FACT1 is not.)", "Argument embeddings $E^D_i$ and $E^S_j$ are average embeddings of content-word tokens belonging to the arguments: $E^{\ast}_i = \mathrm{avg}_{k \in A^{\ast}_i,\, k \notin stops}\, e^{\ast}_k$ (2), where $\ast \in \{D, S\}$ and $stops$ denotes a list of stopwords.", "(For example, in Fig. 1, 'her' and 'thanks' are two tokens directly attached to the argument ARG1 of FACT1; thus, the embedding for ARG1 of FACT1 is the average embedding of these two tokens.)", "Fact-based weighting: We can represent the summary and the document as two sequences of facts $\{F^D_1, F^D_2, \ldots, F^D_{N'}\}$ and $\{F^S_1, F^S_2, \ldots, F^S_{M'}\}$, and weight the $i$-th fact in the document by its average distance to facts in the summary: $w^f_i = \mathrm{avg}_{j=1 \ldots M'}\, d^f_{ij}$ (3).", "The fact-level distance $d^f_{ij}$ is defined on top of argument weighting: $d^f_{ij} = \mathrm{avg}_{A^D_l \in F^D_i,\, A^S_k \in F^S_j}\, \alpha_{il}\, \alpha_{jk}\, \lfloor \mathrm{cosdist}(E^D_l, E^S_k) \rfloor_{> \lambda}$ (4).", "It is computed as the average cosine distance over embeddings of all leaf arguments in the subtrees of fact $F^D_i$ in the document and fact $F^S_j$ in the summary, which is (1) filtered by a threshold $\lambda$ to discard argument pairs with weak semantic relation and (2) weighted by MR tree distances of arguments to facts: $\alpha_{il} = 1 / \mathrm{treedist}(F_i, A_l)$.", "(In this work, we set the threshold to 0.6; e.g., in Fig. 1, treedist(FACT1, ARG1: her thanks) = 1, treedist(FACT1, ARG0: people) = 2, and treedist(FACT2, ARG0: people) = 1.)", "We now use these weights to introduce two metrics: Corr-F (fact-level) and Corr-A (argument-level).", "Let $w^f_{gold}$ and $w^f_{cand}$ denote the fact-level content weights calculated using the procedure from Section 3 based on human-reference and system-generated summaries, respectively.", "Similarly, $w^a_{gold}$ and $w^a_{cand}$ denote the argument-level weights.", "Corr-F is then the Pearson Correlation Coefficient (PCC) between $w^f_{gold}$ and $w^f_{cand}$.", "Corr-A is the PCC between $w^a_{gold}$ and $w^a_{cand}$.", "In other words, Corr-F and Corr-A indicate whether the generated summary focuses on the informative main points in the document (i.e. the same points as the reference summary), on two different levels of granularity.",
"We validate our Corr-F and Corr-A metrics by collecting human judgements.", "In the following, we (1) collect content highlights from human judges using the Amazon Mechanical Turk platform (using the interface from https://github.com/sheffieldnlp/highres) and calculate manual content weighting based on them, (2) calculate correlations of the manual content weights with our automatic content weights, and (3) compare our metrics against the existing reference-based ROUGE (Lin, 2004) and BERTScore (Zhang et al., 2019), as well as the referenceless manual HROUGE score (Hardy et al., 2019).", "(Note that Corr-F/A are calculated with content weighting with respect to the reference; therefore, strictly speaking, Corr-F/A are different from all existing metrics but still share some properties with them.)", "(We show the correlation between Corr-F/A and existing metrics in terms of relative system ranking, rather than a head-to-head metrics comparison.)", "We use the extreme summarisation dataset (XSum; Narayan et al., 2018), which consists of BBC articles and accompanying single-sentence summaries, i.e. sub-headlines of the original articles, professionally written by the authors of the articles.", "Due to the abstractive nature of the summaries, factoid content selection on the phrase level is required beyond sentence-level extraction or token-level matching, making this dataset a popular test bed for abstractive summarisation.", "We use the outputs of three recent abstractive summarisation systems as evaluation targets for our metrics:", "(i) the Pointer-Generator model (PTGEN; See et al., 2017);", "(ii) the Topic-aware Convolutional Sequence-to-Sequence model (TCONVS2S; Narayan et al., 2018); and", "(iii) the abstractive summarisation model using pretrained BERT encoders (BERTSUMABS; Liu and Lapata, 2019).", "(For the first two, we use candidate summaries provided by the authors; for the third, we generated summaries by training a model with the code and data offered by the authors.)", "Manual Content Highlighting: By extending the framework of Hardy et al. (2019), we collect manual content highlights on the fact and argument levels, where we present human judges with the source document and the gold summary, with one fact/argument typeset in bold.", "The judges are required to select phrases or sentences in the document that support the bolded fact/argument (see Figures 4-9 in Appendix B).", "In both cases, judges are allowed to select parts of the text at any granularity.", "We limit the number of allowed continuous chunks and the maximum number of words to encourage highlights at the fact/argument level (we allow 4 chunks of max. 50 words total for fact-level and 5 chunks of max. 20 words for argument-level annotation).", "We employ 3 judges per document in both cases.", "We use the same 50 articles and gold summaries sampled from the XSum test set as Hardy et al. (2019).", "Manual Content Weighting Calculation: Argument Level: Given a document $D$ and a summary $S$, we define the weight of each token $t^D_k$ with respect to a summary argument $A^S_j$ as $w^t_{kj} = \mathrm{NumH}(t^D_k, A^S_j) / \mathrm{NumA}(A^S_j)$ (5), where $\mathrm{NumH}(t^D_k, A^S_j)$ denotes the number of times token $t_k$ was selected and $\mathrm{NumA}(A^S_j)$ is the total number of annotators who were shown $A^S_j$ bolded.", "We use the token weights to compute manual argument-level weights $w^a_{man}$ (parallel to Eq. 1): $w^a_{man,i} = \mathrm{avg}_{j=1 \ldots M}\, \mathrm{avg}_{t^D_k \in A^D_i}\, w^t_{kj}$ (6).",
"Fact Level: By adapting Eq. (5), we calculate a weight $w^t_{ki}$ for each token in document $D$ w.r.t. the bolded fact $F^S_i$ in the summary $S$.", "The weight $w^f_{ij}$ between fact $F^D_i$ in the document and $F^S_j$ in its summary is calculated using Eq. (6).", "We use Eq. (3) to get the manual fact content weighting $w^f_{man}$.", "Correlation: We evaluate how the automatic content weightings $w^a_{gold}$ and $w^f_{gold}$ correlate with the manual content weightings $w^a_{man}$ and $w^f_{man}$.", "Using the Pearson Correlation Coefficient directly over the content weights (PCC-W), we evaluate the correlation between the content weights assigned by human judges and the automatically calculated weights, $\mathrm{PCC}(w_{gold}, w_{man})$.", "As a more extreme form of weighting, we compute the correlation between content selected (i.e. ignoring computed weights) by human judges and the automatic mechanism (PCC-S); we set the value to 1 if the weight is over 0, meaning the fact/argument is selected.", "While the content-weighting correlations are just moderate, the content-selection correlations are strong, especially the fact-based ones (Table 1).", "In other words, the automatic method attends to facts human judges consider important, but weighs them differently.", "System-level Agreement: We check system-level agreement on the Corr-F and Corr-A metrics when using automatic vs. manual content weighting (Table 2): We compute fact/argument-level content weights $w_{cand}$ for each system (cf. Section 4).", "We then calculate Corr-F and Corr-A of $w_{cand}$ against both $w_{man}$ (manual weighting) and $w_{gold}$ (automatic weighting) on the 50 articles with human annotation introduced in Section 5.1.", "The Corr-F metric shows the same system-level ordering for both manual and automatic content weighting.", "Furthermore, both manual and automatic content weighting agree that TCONVS2S and PTGEN achieve similar performance but are strongly outperformed by BERTSUMABS.", "Corr-F/A vs. referenceless metrics: The HROUGE score (Hardy et al., 2019) is a content-weighting-based referenceless evaluation metric.", "Unlike our approach, it operates on the token level and is entirely based on manual annotation.", "Table 2: System-level scores for manual and automatic content weighting on 50 human-annotated documents; manual content weighting ($w_{cand}$ vs. $w_{man}$): TCONVS2S Corr-F 0.2274 / Corr-A 0.2464, PTGEN 0.2180 / 0.2433, BERTSUMABS 0.2508 / 0.2662; automatic content weighting ($w_{cand}$ vs. $w_{gold}$): TCONVS2S 0.6203 / 0.6280, PTGEN 0.5822 / 0.5727, BERTSUMABS 0.6714 / 0.6533.", "The evaluation results in Table 3 show that Corr-F/A's ranking is identical to HROUGE's unigram and bigram precision, with Corr-F also assigning similar proportions.", "(We computed HROUGE for BERTSUMABS using https://github.com/sheffieldnlp/highres.)", "Corr-F/A vs. reference-based metrics: ROUGE (Lin, 2004) and BERTScore (Zhang et al., 2019) are both reference-based metrics, which compute a similarity score for each token in the candidate sentence with each token in the reference sentence.",
"However, instead of the exact matches used in ROUGE, BERTScore computes token similarity using contextual embeddings.", "Comparison to ROUGE and BERTScore on the full XSum test set (see Table 4) shows full agreement on system ordering for both metrics.", "We now provide examples demonstrating the strengths and weaknesses of Corr-F/A by analysing system outputs where BERTScore and Corr-F/A produce different orderings.", "Strengths: (1) Corr-F/A are more sensitive to content-level hallucination than BERTScore.", "Summaries with facts/arguments never mentioned in the original document get much lower Corr-F/A scores than summaries with content that appears in the document verbatim or as a paraphrase.", "Example 1 in Table 5 shows Corr-F/A penalizing the incorrect fact 'to become the next president' generated by BERTSUMABS, while giving higher scores to TCONVS2S, which paraphrased 'abdicate' with 'step down'.", "(2) Corr-F/A better identify paraphrases, especially those containing extra content mentioned in the document but not in the ground-truth summary.", "Example 2 in Table 5 shows that Corr-F/A do not penalize BERTSUMABS for generating the argument 'after only eight games in charge', which is mentioned in the document.", "Weaknesses: (1) Corr-F is weaker in identifying token-level hallucination, as in Example 3 in Table 5.", "Corr-F gives a higher score to the TCONVS2S output with one hallucinated token, 'robotic'.", "However, Corr-A's more fine-grained approach works slightly better in this case.", "(2) Corr-F/A tend to under-score summaries containing content mentioned in the ground truth but only touched on briefly in the document.", "In Example 4 in Table 5, Corr-F/A score the output of TCONVS2S lower, even though it correctly captures 'an academy for children with mental health', which is mentioned only once in the document.", "In sum, Corr-F/A are less dependent on the reference summary since they also consider the source document, and thus have less of a reference bias than BERTScore.", "In addition, Corr-F/A help to identify ungrounded facts, i.e. content-level hallucinations, which is important for identifying misinformation in automated news reporting.",
"We also re-score system outputs with Corr-F/A calculated using a lower-performing SRL tool (He et al., 2017), whose F1 on the Ontonotes 5.0 dataset is 81.6% (4.89 points below the tool used in our main experiments).", "The results are shown as Corr-F/A(L) in Table 4 and show full agreement with Corr-F/A in terms of system ordering.", "However, the better-performing original SRL system widens the margin between systems.", "We present an automatic evaluation framework for abstractive summarisation, which is low-cost and robust, as it neither relies on expert annotators nor is susceptible to crowdsourcing noise.", "Using fact representations, we are able to capture content in the summary that is semantically similar but distant in surface form, aligning it with arbitrarily far-apart parts of the input document, which makes our metric directly interpretable.", "Our metric is more sensitive to perturbations of the facts in the target summary, which resemble common hallucination phenomena of neural decoders (see Figures 2-3 in Appendix A for examples).", "In the future, we intend to investigate different meaning representation formalisms, such as AMR (Banarescu et al., 2013) and Dynamic Syntax (Kempson et al., 2001), and to extend to other datasets (e.g. multiple-reference summarisation) and tasks (e.g. response generation in dialogue).", "This research received funding from the EPSRC project MaDrIgAL (EP/N017536/1) and Charles University project PRIMUS/19/SCI/10.", "We would like to acknowledge the AWS Cloud Credits for Research programme." ]
[ "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "method", "result", "other", "abstain", "other", "other", "method", "other", "method", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "other", "abstain", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "other", "other", "other" ]
[ "With the development of biomedical language understanding benchmarks, Artificial Intelligence applications are widely used in the medical field.", "However, most benchmarks are limited to English, which makes it challenging to replicate many of the successes in English for other languages.", "To facilitate research in this direction, we collect real-world biomedical data and present the first Chinese Biomedical Language Understanding Evaluation (CBLUE) benchmark: a collection of natural language understanding tasks including named entity recognition, information extraction, clinical diagnosis normalization, and an associated online platform for model evaluation, comparison, and analysis.", "To establish evaluation on these tasks, we report empirical results with the current 11 pre-trained Chinese models, and experimental results show that state-of-the-art neural models perform far worse than the human ceiling 1 .", "Our benchmark is released at https://tianchi.aliyun.", "com/dataset/dataDetail?dataId=95414&lang=en-us .", "Artificial intelligence is gradually changing the landscape of healthcare, and biomedical research (Yu et al., 2018).", "With the fast advancement of biomedical datasets, biomedical natural language processing (BioNLP) has facilitated a broad range Equal contribution and shared co-first authorship.", "of applications such as biomedical text mining, which leverages textual data in Electronic Health Records (EHRs).", "A key driving force behind such improvements and rapid iterations of models is the use of general evaluation datasets and benchmarks (Gijsbers et al., 2019).", "Pioneer benchmarks, such as BLURB (Gu et al., 2020), PubMedQA (Jin et al., 2019), and others, have provided us with the opportunity to conduct research on biomedical language understanding and developing real-world applications.", "Unfortunately, most of these benchmarks are developed in English, which makes the development of the associated machine intelligence Anglo-centric.", "Meanwhile, other languages, such as Chinese, have unique linguistic characteristics and categories that need to be considered.", "Even though Chinese speakers account for a quarter of the world population, there have been no existing Chinese biomedical language understanding evaluation benchmarks.", "To address this issue and facilitate natural language processing studies in Chinese, we take the first step in introducing a comprehensive C hinese B iomedical L anguage U nderstanding E valuation ( CBLUE ) benchmark with eight biomedical language understanding tasks.", "These tasks include named entity recognition, information extraction, clinical diagnosis normalization, short text classification, question answering (in transfer learning setting), intent classification, semantic similarity, and so on.", "We evaluate several pre-trained Chinese language models on CBLUE and report their performance.", "The current models still perform by far worse than the standard of single-human perfor-7888 mance, leaving room for future improvements.", "We also conduct a comprehensive analysis using case studies to indicate the challenges and linguistic differences in Chinese biomedical language understanding.", "We intend to develop a universal GLUElike open platform for the Chinese BioNLP community, and this work helps accelerate research in that direction.", "Overall, the main contributions of this study are as follows: We propose the first Chinese biomedical language understanding benchmark, an open-ended, community-driven project with 
"The proposed benchmark serves as a platform for the Chinese BioNLP community and encourages new dataset contributions.", "We report a systematic evaluation of 11 Chinese pre-trained language models to understand the challenges posed by these tasks.", "We release the source code of the baselines as a toolkit for future research purposes.", "Several benchmarks have been developed to evaluate general language understanding over the past few years.", "GLUE (Wang et al., 2019b) is one of the first frameworks developed as a formal challenge affording straightforward comparison between task-agnostic transfer learning techniques.", "SuperGLUE (Wang et al., 2019a), styled after GLUE, introduces a new set of more difficult language understanding datasets.", "Other similarly motivated benchmarks include DecaNLP (McCann et al., 2018), which recasts a set of target tasks into a general question-answering format and prohibits task-specific parameters, and SentEval (Conneau and Kiela, 2018), which explicitly evaluates fixed-size sentence embeddings.", "Non-English benchmarks include RussianSuperGLUE (Shavrina et al., 2020) and CLUE (Xu et al., 2020), which is a community-driven benchmark with nine Chinese natural language understanding tasks.", "These benchmarks in the general domain provide a north-star goal for researchers and are part of the reason we can confidently say we have made great strides in our field.", "For BioNLP, many datasets and benchmarks have been proposed (Wang et al., 2020; Li et al., 2016; Wu et al., 2019) which promote biomedical language understanding (Beltagy et al., 2019; Lewis et al., 2020; Lee et al., 2020).", "Tsatsaronis et al. (2015) propose biomedical language understanding datasets as well as a competition on large-scale biomedical semantic indexing and question answering.", "Jin et al. (2019) propose PubMedQA, a novel biomedical question answering dataset collected from PubMed abstracts.", "Pappas et al. (2018) propose BioRead, which is a publicly available cloze-style biomedical machine reading comprehension (MRC) dataset.", "Gu et al. (2020) create a leaderboard featuring the Biomedical Language Understanding & Reasoning Benchmark (BLURB).",
"Unlike a general-domain corpus, the annotation of a biomedical corpus needs expert intervention and is labor-intensive and time-consuming.", "Moreover, most of the benchmarks are based on English; ignoring other languages means that potentially valuable information, which can be helpful for generalization, may be lost.", "In this study, we focus on Chinese to fill this gap and aim to develop the first Chinese biomedical language understanding benchmark.", "Note that Chinese biomedical text is linguistically different from English and has its own domain characteristics, necessitating a BioNLP evaluation benchmark designed explicitly for Chinese.", "CBLUE consists of 8 biomedical language understanding tasks.", "The task descriptions and statistics of CBLUE are shown in Table 1.", "Unlike CLUE (Xu et al., 2020), as shown in Table 2, CBLUE has more diverse data sources (whose annotation is expensive) and a richer task setting, and is thus more challenging for NLP models.", "We introduce the design principles of CBLUE as follows: 1) Diverse tasks: CBLUE contains a wide range of token-level, sequence-level, and sequence-pair tasks.", "2) Variety of differently distributed data: CBLUE collects data from various sources, including clinical trials, EHRs, medical forums, textbooks, and search engine logs with a real-world distribution.", "3) Quality control in long-term maintenance: We asked domain experts (doctors from Class A tertiary hospitals) to annotate the datasets and carefully review the data to ensure data quality.", "CMeEE: For this task, the dataset was first released in CHIP2020 (Hongying et al., 2020).", "Given a pre-defined schema, the task is to identify and extract entities from the given sentence and classify them into nine categories: disease, clinical manifestations, drugs, medical equipment, medical procedures, body, medical examinations, microorganisms, and department.", "CMeIE: For this task, the dataset was also released in CHIP2020 (Guan et al., 2020).", "The goal of the task is to identify both entities and relations in a sentence following the schema constraints.", "There are 53 relations defined in the dataset, including 10 synonymous sub-relationships and 43 other sub-relationships.", "CHIP-CDN: For this task, the dataset is for standardizing the terms from the final diagnoses of Chinese electronic medical records.", "Given the original phrase, the task is to normalize it to standard terminology based on the International Classification of Diseases (ICD-10) standard for Beijing Clinical Edition v601.", "CHIP-CTC: For this task, the dataset is for classifying clinical trial eligibility criteria, which are fundamental guidelines of clinical trials defined to identify whether a subject meets a clinical trial or not (Zong et al., 2021).", "All text data are collected from the website of the Chinese Clinical Trial Registry (ChiCTR, http://chictr.org.cn/), and a total of 44 categories are defined.", "The task is similar to text classification; although it is not a new task, studies and corpora for Chinese clinical trial criteria are still limited, and we hope to promote future research for social benefit.", "CHIP-STS: For this task, the dataset is for sentence similarity in the non-i.i.d. (non-independent and identically distributed) setting.",
"Specifically, the task aims to evaluate the generalization ability between disease types on Chinese disease question-and-answer data.", "Given question pairs related to 5 different diseases (the disease types in the training and testing sets are different), the task is to determine whether the semantics of the two sentences are similar.", "KUAKE-QIC: For this task, the dataset is for intent classification.", "Given search engine queries, the task is to classify each of them into one of the 11 medical intent categories defined in KUAKE-QIC.", "These include diagnosis, etiology analysis, treatment plan, medical advice, test result analysis, and others.", "KUAKE-QTR: For this task, the dataset is used to estimate the relevance between a query and the title of a document.", "Given a query (e.g., 'Symptoms of vitamin B deficiency'), the task aims to find the relevant title (e.g., 'The main manifestations of vitamin B deficiency').", "KUAKE-QQR: For this task, the dataset is used to evaluate the relevance of the content expressed in two queries.", "Similar to KUAKE-QTR, the task aims to estimate query-query relevance, which is an essential and challenging task in real-world search engines.", "Since machine learning models are mostly data-driven, data plays a critical role, most often in the form of a static dataset (Gebru et al., 2018).", "We collect data for the different tasks from diverse sources, including clinical trials, EHRs, medical books, and search logs from real-world search engines.", "As biomedical data may contain private information such as the patient's name, age, and gender, all collected datasets are anonymized and reviewed by the IRB committee of each data provider to preserve privacy.", "We introduce the data collection details below.", "Clinical trial eligibility criteria text is collected from ChiCTR, a non-profit organization that provides information about clinical trial registration for public research use.", "In each trial registry file, the eligibility criteria text is organized as paragraphs under the inclusion criteria and the exclusion criteria.", "Some meaningless texts are excluded, and the remaining texts are annotated to generate the CHIP-CTC dataset.", "We obtain the final diagnoses of the medical records from several Class A tertiary hospitals and sample a few diagnosis items from different medical departments to construct the CHIP-CDN dataset for research purposes.", "The diagnosis items are randomly sampled from the items which are not covered by the common medical synonym dictionary.", "Due to the COVID-19 pandemic, online consultation via the Internet has become more and more popular.", "To promote data diversity, we select online questions asked by patients to build the CHIP-STS dataset.", "Note that most of the questions are chief complaints.", "To ensure the authority and practicability of the corpus, we also select the medical textbooks Pediatrics (Wang et al., 2018), Clinical Pediatrics (Shen and Gui, 2013) and Clinical Practice.", "We collect data from these sources to construct the CMeIE and CMeEE datasets.", "We also collect search logs from real-world search engines like the Alibaba QUARK Search Engine.", "First, we filter the search queries in the raw search logs by the medical tag to obtain candidate medical texts.", "Then, we sample the documents for each query with non-zero relevance scores (i.e., to determine if the document is relevant to the query).", "Specifically, we divide all the documents into three categories, namely high, middle, and tail documents, and then uniformly sample the data to guarantee diversity.",
"We leverage the data from the search logs to construct the KUAKE-QIC, KUAKE-QTR, and KUAKE-QQR datasets.", "Each sample is annotated by three to five domain experts, and the annotation with the majority of votes is taken to estimate human performance.", "During the annotation phase, we add control questions to prevent dishonest behaviors by the domain experts.", "Consequently, we reject any annotations made by domain experts who fail in the training phase and do not adopt the results of those who achieved low performance on the control tasks.", "We maintain strict and high criteria for approval and review at least 10 random samples from each worker to decide whether to approve or reject all their HITs.", "We also calculate the average inter-rater agreement between annotators using Fleiss' Kappa scores (Fleiss, 1971), finding that five out of six annotations show almost perfect agreement ($\kappa = 0.9$).", "Utility-preserving Anonymization: Biomedical data may be considered a breach of the privacy of individuals because they usually contain sensitive information.", "Thus, we conduct utility-preserving anonymization following Lee et al. (2017) to anonymize the data before releasing the benchmark.", "Real-world Distribution: To promote the generalization of models, all the data in our CBLUE benchmark follow the real-world distribution without up-/down-sampling.", "As shown in Figure 1(a), our data inevitably follow a long-tail distribution, in accordance with Zipf's law.", "However, the long-tail distribution has no significant effect on performance.", "Further, some datasets, such as CMeIE, have a label hierarchy with both coarse-grained and fine-grained relation labels, as shown in Figure 1(b).", "Diverse Task Setting: Our CBLUE benchmark includes eight diverse tasks, covering named entity recognition, relation extraction, and single-sentence/sentence-pair classification.", "Besides the i.i.d. (independent and identically distributed) scenarios, our CBLUE benchmark also contains a specific transfer learning scenario supported by the CHIP-STS dataset, in which the testing set has a different distribution from the training set.",
"We provide a leaderboard for users to submit their own results on CBLUE.", "The evaluation system will give final scores for each task when users submit their prediction results.", "The platform offers 60 free GPU hours from Aliyun (https://tianchi.aliyun.com/notebook-ai/) to help researchers develop and train their models.", "Our CBLUE benchmark was released online on April 1, 2021.", "Up to now, more than 900 researchers have applied for the dataset, and over 300 teams have submitted their model predictions to our platform, including medical institutions (Peking Union Medical College Hospital, etc.), universities (Tsinghua University, Zhejiang University, etc.), and AI companies (Baidu, Huawei, etc.).", "We will continue to maintain the benchmark by adding new tasks.", "To make it easier to use the CBLUE benchmark, we also offer a toolkit implemented in PyTorch (Paszke et al., 2019) for reproducibility.", "Our toolkit supports mainstream pre-trained models and a wide range of target tasks.", "(Table 3 reports the scores of each model on CMeEE, CMeIE, CDN, CTC, STS, QIC, QTR and QQR, along with the average.)", "Baselines: We conduct experiments with baselines based on different Chinese pre-trained language models.", "We add an additional output layer (e.g., an MLP) for each CBLUE task and fine-tune the pre-trained models.", "Models: We evaluate CBLUE on the following publicly available Chinese pre-trained models: BERT-base (Devlin et al., 2018).", "We use the base model with 12 layers, a hidden size of 768, 12 heads, and 110 million parameters.", "BERT-wwm-ext-base (Cui et al., 2019).", "A Chinese pre-trained BERT model with whole word masking.", "RoBERTa-large (Liu et al., 2019).", "Compared with BERT, RoBERTa removes the next sentence prediction objective and dynamically changes the masking pattern applied to the training data.", "RoBERTa-wwm-ext-base/large.", "RoBERTa-wwm-ext is an efficient pre-trained model which integrates the advantages of RoBERTa and BERT-wwm.", "ALBERT-tiny/xxlarge (Lan et al., 2019).", "ALBERT is a pre-trained model with two objectives: Masked Language Modeling (MLM) and Sentence Order Prediction (SOP).", "ZEN (Diao et al., 2019).", "A BERT-based Chinese text encoder enhanced by N-gram representations, where different combinations of characters are considered during training.", "MacBERT-base/large (Cui et al., 2020).", "MacBERT is an improved BERT with a novel MLM-as-correction pre-training task.", "PCL-MedBERT (https://code.ihub.org.cn/projects/1775).", "A pre-trained medical language model proposed by the Peng Cheng Laboratory.", "We implement all baselines with PyTorch (Paszke et al., 2019).", "All the training details can be found in the appendix.", "We report the results of our baseline models on the CBLUE benchmark in Table 3.", "We notice that larger pre-trained models obtain better performance.", "Since Chinese medical text is full of terminologies, carefully designed masking strategies may be helpful for representation learning.", "However, we observe that models which use whole word masking do not always yield better performance than others on some tasks, such as CTC, QIC, QTR, and QQR, indicating that the tasks in our benchmark are challenging and more sophisticated technologies should be developed.", "Further, we find that ALBERT-tiny achieves performance comparable to the base models on the CDN, STS, QTR, and QQR tasks, illustrating that smaller models may also perform well on specific tasks.",
specific tasks.", "We think this is caused by the different distribution between pretraining corpus and Chinese medical text; thus, large PTLMs may not obtain satisfactory performance.", "Finally, we notice that PCL-MedBERT, which tends to be state-of-the-art in Chinese biomedical text processing tasks, and does not perform as well as we expected.", "This further demonstrates the difficulty 7 https://code.ihub.org.cn/projects/ 1775 7893 CMeEE CMeIE CDN CTC STS QIC QTR QQR Trainedannotation annotator 1 69.0 62.0 60.0 73.0 94.0 87.0 75.0 80.0 annotator 2 62.0 65.0 69.0 75.0 93.0 91.0 62.0 88.0 annotator 3 69.0 67.0 62.0 80.0 88.0 83.0 71.0 90.0 avg 66.7 64.7 63.7 76.0 91.7 87.0 69.3 86.0 majority 67.0 66.0 65.0 78.0 93.0 88.0 71.0 89.0 best model 62.4 55.9 59.3 70.9 85.6 85.5 62.9 84.7 Table 4: Human performance of two-stage evaluation scores with the best-performed model.", "avg refers to the mean score from the three annotators.", "majority indicates the performance taken from the majority vote of amateur humans.", "Bold text denotes the best result among human and model prediction.", "For all of the tasks in CBLUE, we ask human amateur annotators with no medical experience to label instances from the testing set and compute the annotators' majority vote against the gold label annotated by specialists.", "Similar to SuperGLUE (Wang et al., 2019a), we first need to train the annotators before they work on the testing data.", "Annotators are asked to annotate some data from the development set; then, their annotations are validated against the gold standard.", "Annotators need to correct their annotation mistakes repeatedly so that they can master the specific tasks.", "Finally, they annotate instances from the testing data, and these annotations are used to compute the final human scores.", "The results are shown in Table 4 and the last row of Table 3.", "In all tasks, humans have better performance.", "We choose two datasets: CMeEE and KUAKE-QIC, a sequence labeling and classification task,", "respectively, to conduct case studies.", "As shown in Figure 2, we report the statistics of the proportion of various types of error cases 8 .", "For CMeEE, we notice that entity overlap 9 , ambiguity 10 , need domain knowledge 11 , annotation error 12 are major reasons that result in the prediction failure.", "Furthermore, there exist many instances with entity overlap , which may lead to confusion for the named entity recognition task.", "While in the analysis for KUAKE-QIC, almost half of bad cases are due to multiple triggers 13 and colloquialism .", "Colloquialism 14 is natural in search queries, which means that some descriptions of the Chinese medical text are too simplified, colloquial, or inaccurate.", "We show some cases on CMeEE in Table 5.", "In the second row, we notice that given the instance of 8 See definitions of errors in the appendix. 9 There exist multiple overlapping entities in the instance. 10 The instance has a similar context but different meaning, which mislead the prediction. 11 There exist biomedical terminologies in the instance which require domain knowledge to understand. 12 The annotated label is wrong. 13 There exist multiple indicative words which mislead the prediction. 14 The instance is quite different from written language (e.g., with many abbreviations) 7894 Sentence Word Label RO MB B 12% 19% Ite Pro Pro The results of blood biochemical analysis show that vitamin B lack rate is about 12% to 19%. 
blood biochemical analysis Ite Pro Pro Bod O Bod The rash can be reduced by the host producing specific anti-toxin antibodies. anti-toxin antibodies Bod O Bod 1. , , Sym, Sym, Sym O Sym, Sym, Sym According to the structure and function of genetic material, genetic diseases are divided into five categories: 1. Chromosomal diseases refer to abnormal chromosome number or chromosome structure abnormalities, including deletions, translocations, inversions... deletions, translocations, inversions Sym, Sym, Sym O Sym, Sym, Sym Table 5: Case studies in CMeEE. We evaluate roberta-wwm-ext and PCL-MedBERT on 3 sampled sentences, with their gold labels and model predictions. Ite (medical examination items), Pro (medical procedure), Bod (body), and Sym (clinical symptoms) are labeled for medical named words. O means that the model fails to extract the entity from sentences. RO=roberta-wwm-ext, MB=PCL-MedBERT. Query Model Gold BERT BERT-ext MedBERT Does it matter if the ratio of lymphocytes is high and the ratio of neutrophils is low? Diagnosis Diagnosis Test results analysis Test results analysis Consultation: When do children usually get chickenpox? Other Other Other Diseasedescription 160 40 The systolic blood pressure of the elderly is 160, and the diastolic blood pressure is only more than 40. What is the reason? How to treat? Diagnosis Diagnosis Diagnosis Treatment Table 6: Case studies in KUAKE-QIC. We evaluate the performance of baselines with 3 sampled instances. The correlation between Query and Title is divided into 3 levels (0-2), which means poorly related or unrelated ', related ' and strongly related '. BERT = BERT-base, BERT-ext = BERT-wwm-ext-base, MedBERT = PCL-MedBERT. ( Rash can be reduced by the host producing specificanti-toxin antibodies. ), ROBERTA and PCL-MedBERT obtain different predictions.", "The reason is that there exist medical terms such as ( anti-toxin antibodies ).", "ROBERTA can not identify those tokens correctly, but PCL-MedBERT, pre-trained on the medical corpus, can successfully make it.", "Moreover, PCL-MedBERT can extract entities , , ( eletions, translocations, inversions ) from the long sentences, which is challenging for other models.", "We further show some cases on KUAKE-QIC in Table", "6. In the first case, we notice that both BERT and BERT-ext fail to obtain the intent label of the query ? ( Does it matter if the ratio of lymphocytes is high and the ratio of neutrophils is low? 
"Since 'ratio of lymphocytes' and 'ratio of neutrophils' are biomedical terms, a general pre-trained language model has to leverage domain knowledge to understand those phrases.", "As shown in Table 5 and Table 6, compared with other languages, the Chinese language is very colloquial, even in medical texts.", "Furthermore, polysemy is prevalent in the Chinese language.", "The meaning of a word changes according to its tone, which usually causes confusion and difficulties for machine reading.", "In summary, we conclude that tasks in CBLUE are not easy to solve, since the Chinese language has unique characteristics, and more robust models should be developed.", "In this paper, we present a Chinese Biomedical Language Understanding Evaluation (CBLUE) benchmark.", "We evaluate 11 current language representation models on CBLUE and analyze their results.", "The results illustrate the limited ability of state-of-the-art models to handle some of the more challenging tasks.", "In contrast to English benchmarks such as GLUE/SuperGLUE and BLURB, where model performance already matches human performance, we observe that this is far from the case for Chinese biomedical language understanding.", "We want to express gratitude to the anonymous reviewers for their hard work and kind comments.", "This work is funded by the Special Project of New Generation Artificial Intelligence of the Ministry of Science and Technology of China (2021ZD0113402), National Natural Science Foundations of China (61876052 and U1813215), National Natural Science Foundation of Guangdong, China (2019A1515011158), Strategic Emerging Industry Development Special Fund of Shenzhen (20200821174109001), Pilot Project in 5G + Health Application of the Ministry of Industry and Information Technology & National Health Commission (5G + Luohu Hospital Group: an Attempt to New Health Management Styles of Residents), Zhengzhou collaborative innovation major special project (20XTZX11020), Zhejiang Provincial Natural Science Foundation of China (No. LGG22F030011), Ningbo Natural Science Foundation (2021J190), and Yongjiang Talent Introduction Programme (2021A-156-G).", "We collected all the data with authorization from the organizations that own the data and signed the corresponding agreements.", "We release the benchmark under the CC BY-NC 4.0 license.", "All collected datasets are anonymized and reviewed by the IRB committee of each data provider to preserve privacy.", "Since we collect data following the real-world distribution, there may exist popularity bias that cannot be ignored." ]
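To make the evaluation protocol concrete, the following is a minimal sketch of how the majority-vote human scores in Table 4 could be computed for a classification-style task such as KUAKE-QIC. The function name and list-based interface are hypothetical, and tasks scored with span-level F1 (e.g. CMeEE) would need a task-specific scorer in place of plain accuracy.

```python
from collections import Counter

def majority_vote_score(annotations, gold_labels):
    """Accuracy of the per-instance majority vote of several amateur
    annotators against the specialist gold labels.

    annotations: one label list per annotator, aligned by instance.
    gold_labels: one specialist label per instance.
    """
    correct = 0
    for idx, gold in enumerate(gold_labels):
        votes = Counter(ann[idx] for ann in annotations)
        majority_label, _ = votes.most_common(1)[0]  # ties broken arbitrarily
        correct += int(majority_label == gold)
    return 100.0 * correct / len(gold_labels)

# Toy example: three annotators, four instances.
anns = [["Diagnosis", "Other", "Treatment", "Other"],
        ["Diagnosis", "Other", "Diagnosis", "Other"],
        ["Treatment", "Other", "Treatment", "Consultation"]]
gold = ["Diagnosis", "Other", "Treatment", "Other"]
print(majority_vote_score(anns, gold))  # 100.0
```

The per-annotator rows of Table 4 would use the same scorer with a single annotator's labels in place of the vote.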
[ "abstain", "abstain", "objective", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "method", "objective", "objective", "objective", "method", "objective", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "objective", "other", "other", "method", "method", "abstain", "method" ]
[ "We study how masking and predicting tokens in an unsupervised fashion can give rise to linguistic structures and downstream performance gains.", "Recent theories have suggested that pretrained language models acquire useful inductive biases through masks that implicitly act as cloze reductions.", "While appealing, we show that the success of the random masking strategy used in practice cannot be explained by such cloze-like masks alone.", "We construct cloze-like masks using task-specific lexicons for three different classification datasets and show that the majority of pretrained performance gains come from generic masks that are not associated with the lexicon.", "To explain the empirical success of these generic masks, we demonstrate a correspondence between the masked language model (MLM) objective and existing methods for learning statistical dependencies in graphical models.", "Using this, we derive a method for extracting these learned statistical dependencies in MLMs and show that these dependencies encode useful inductive biases in the form of syntactic structures.", "In an unsupervised parsing evaluation, simply forming a minimum spanning tree on the implied statistical dependence structure outperforms a classic method for unsupervised parsing (58.74 vs. 55.91 UUAS).", "Pretrained masked language models (Devlin et al., 2019; Liu et al., 2019b) have benefitted a wide range of natural language processing (NLP) tasks (Liu, 2019; Wadden et al., 2019; Zhu et al., 2020).", "Despite recent progress in understanding what useful information is captured by MLMs (Liu et al., 2019a; Hewitt and Manning, 2019), it remains a mystery why task-agnostic masking of words can capture linguistic structures and transfer to downstream tasks.", "One popular justification of MLMs relies on viewing masking as a form of cloze reduction.", "Cloze reductions reformulate an NLP task into a prompt question and a blank and elicit answers by filling in the blank (Figure 1).", "When tested by cloze reductions pretrained MLMs and left-to-right language models (LMs) have been shown to possess abundant factual knowledge (Petroni et al., 2019) and display impressive few-shot ability (Brown et al., 2020).", "This success has inspired recent hypotheses that some word masks are cloze-like and provide indirect supervision to downstream tasks (Saunshi et al., 2020; Lee et al., 2020).", "For example, a sentiment classification task (Pang et al., 2002) can be reformulated into filling in like or hate in the cloze I [MASK] this movie .", "Such cloze-like masks provide a clear way in which an MLM can implicitly learn to perform sentiment classification.", "While this hypothesis is appealing, MLMs in practice are trained with uniform masking that does not contain the special structure required by cloze-like masks most of the time.", "For example, predicting a generic word this in the cloze I like [MASK] movie would not offer task-specific supervision.", "We quantify the importance of cloze-like and generic masks by explicitly creating cloze-like masks using task-specific lexicons and comparing models pretrained on these masks.", "These experiments suggest that although cloze-like masks can be helpful, the success of uniform masking cannot be explained via cloze-like masks alone.", "In fact, we demonstrate that uniform masking performs as well as a negative control where we explicitly remove cloze-like masks from the mask distribution.", "To address this mismatch between theory and practice, we offer a new hypothesis of how 
"We propose a conceptual model for MLMs by drawing a correspondence between masking and graphical model neighborhood selection (Meinshausen and Bühlmann, 2006).", "Using this, we show that MLM objectives are designed to recover statistical dependencies in the presence of latent variables and propose an estimator that can recover these learned dependencies from MLMs.", "We hypothesize that statistical dependencies in the MLM objective capture useful linguistic dependencies and demonstrate this by using recovered statistical dependencies to perform unsupervised parsing, outperforming an actual unsupervised parsing baseline (58.74 vs. 55.91 UUAS; Klein and Manning, 2004).", "We release our implementation on GitHub.", "Theories inspired by Cloze Reductions.", "Cloze reductions are fill-in-the-blank tests that reformulate an NLP task into an LM problem.", "Existing work demonstrates that such reductions can be highly effective for zero/few-shot prediction (Radford et al., 2019; Brown et al., 2020) as well as relation extraction (Petroni et al., 2019; Jiang et al., 2020).", "These fill-in-the-blank tasks provide a clear way by which LMs can obtain supervision about downstream tasks, and recent work demonstrates how such implicit supervision can lead to useful representations (Saunshi et al., 2020).", "More general arguments by Lee et al. (2020) show these theories hold across a range of self-supervised settings.", "While these theories provide compelling arguments for the value of pre-training with cloze tasks, they do not provide a clear reason why uniformly random masks such as those used in BERT provide such strong gains.", "In our work, we quantify this gap using lexicon-based cloze-like masks and show that cloze-like masks alone are unlikely to account for the complete success of MLM, since generic, non-cloze masks are responsible for a substantial part of the empirical performance of MLMs.", "Theories for vector representations.", "Our goal of understanding how masking can lead to useful inductive biases and linguistic structures is closely related to that of papers studying the theory of word embedding representations (Mikolov et al., 2013; Pennington et al., 2014; Arora et al., 2015).", "Existing work has drawn a correspondence between word embeddings and low-rank factorization of a pointwise mutual information (PMI) matrix (Levy and Goldberg, 2014), and others have shown that PMI is highly correlated with human semantic similarity judgements (Hashimoto et al., 2016).", "While existing theories for word embeddings cannot be applied to MLMs, we draw inspiration from them and derive an analogous set of results.", "Our work shows a correspondence between MLM objectives and graphical model learning through conditional mutual information, as well as evidence that the conditional independence structure learned by MLMs is closely related to syntactic structure.", "Probing Pretrained Representations.", "Recent work has applied probing methods (Belinkov and Glass, 2019) to analyze what information is captured in pretrained representations.", "This line of work shows that pretrained representations encode a diverse range of knowledge (Peters et al., 2018; Tenney et al., 2019; Liu et al., 2019a; Hewitt and Manning, 2019; Wu et al., 2020).", "While probing provides intriguing evidence of linguistic structures encoded by MLMs, it does not address the goal of this work, which is how the pretraining objective encourages MLMs to extract such structures.", 
structures.", "Masked Language Modeling asks the model to predict a token given its surrounding context.", "Formally, consider an input sequence X of L tokens (cid:104) x 1 , . . . , x L (cid:105) where each variable takes a value from a vocabulary V .", "Let X D be the data generating distribution of X .", "Let x i be the i th token in X , and let X \\ i denote the sequence after replacing the i th token with a special [MASK] token.", "In other words, X \\ i := (cid:104) x 1 , . . . , x i 1 , [MASK] , x i +1 , . . . , x L (cid:105) .", "Similarly, define X \\{ i,j } as replacing both x i and x j with [MASK] .", "MLM determines what tokens are masked by a mask distribution i M .", "The goal of MLM is to learn a probabilistic model p positive beautiful movie Modified Input: Cloze-like Mask: [MASK] beautiful movie Generic Mask: positive [MASK] beautiful Figure 2: In our case study, we append the true label to each input and create ideal cloze-like masks.", "In BERT pretraining, each input token is masked with a fixed, uniform probability, which is a hyper-parameter to be chosen.", "We refer to this strategy as uniform masking .", "Finetuning is the canonical method for using pretrained MLMs.", "Consider a prediction task where y Y is the target variable, e.g. , the sentiment label of a review.", "Finetuning uses gradient descent to modify the pretrained parameters and learn a new set of parameters to minimize L finetune = EX D (cid:48) ,y p ( y | X ) log p , ( y | X ) , where p ( y | x ) is the ground-truth distribution and D (cid:48) is the data distribution of the downstream task.", "Our goals.", "We will study how the mask distribution M affects downstream performance.", "We define perfect cloze reductions as some partition of the vocabulary V y such that p ( x i V y | X \\ i ) p ( y | X ) .", "For a distribution M such that the masks we draw are perfect cloze-reductions, the MLM objective offers direct supervision to finetuning since LMLM L finetune .", "In contrast to cloze-like masking, in uniform masking we can think of p as implicitly learning a generative model of X (Wang and Cho, 2019).", "Therefore, as M moves away from the ideal distribution and becomes more uniform, we expect p to model more of the full data distribution D instead of focusing on cloze-like supervision for the downstream task.", "This mismatch between theory and practice raises questions about how MLM with uniform masking can learn useful inductive biases.", "When LMLM is not L finetune , what is LMLM learning?", "We analyze LMLM and show that it is similar to a form of conditional mutual information based graphical model structure learning.", "To motivate our subsequent discussions, we perform a controlled study for the case when LMLM", "L finetune and analyze how deviations from the ideal mask distribution affect downstream performance.", "We perform analysis on the Stanford Sentiment Treebank (SST-2; Socher et al., 2013), which requires models to classify short movie reviews into positive or negative sentiment.", "We append the ground-truth label (as the word positive or negative ) to each movie review (Figure 2).", "Masking the last word in each review is, by definition, an ideal mask distribution.", "To study how the deviation from the ideal mask distribution degrades downstream performance, we vary the amount of cloze-like masks during training.", "We do this by masking out the last word for p % of the time and masking out a random word in the movie review for (100 p )% of the time, and choose p { 0 , 20 , 
"Experimental details.", "We split the SST-2 training set into two halves, use one for pretraining, and the other for finetuning.", "For the finetuning data, we do not append the ground-truth label.", "We pretrain small transformers with $\mathcal{L}_{\text{MLM}}$ using different masking strategies and finetune them, along with a baseline that is not pretrained (NOPRETRAIN).", "Further details are in Appendix A.", "Results.", "We observe that while cloze-like masks can lead to successful transfer, even a small modification of the ideal mask distribution deteriorates performance.", "Figure 3 shows the development set accuracy of seven model variants averaged across ten random trials.", "We observe that as p decreases, the performance of CLOZE-p% degrades.", "Notably, CLOZE-80% is already worse than CLOZE-100%, and CLOZE-20% does not outperform NOPRETRAIN by much.", "We notice that CLOZE-0% in fact degrades finetuning performance, potentially because the pretrained model is over-specialized to the language modeling task (Zhang et al., 2020; Tamkin et al., 2020).", "While this is a toy example, we observe similar results for actual MLM models across three tasks (Section 5.1), and this motivates us to look for a framework that explains the success of generic masks in practice.", "Figure 4: Our conceptual framework of MLM: a latent variable $Z$ generates the observed tokens (in the illustration, $x_1$: 'I', $x_2$: 'prefer', $x_3$: 'the', $x_4$: 'morning', $x_5$: 'flight').", "In the previous section, we saw that cloze-like masks do not necessarily explain the empirical success of MLMs with uniform masking strategies.", "Understanding uniform masking seems challenging at first, as uniform-mask MLMs seem to lack task-specific supervision and are distinct from existing unsupervised learning methods such as word embeddings (which rely upon linear dimensionality reduction) and autoencoders (which rely upon denoising).", "However, we show in this section that there is a correspondence between MLM objectives and classic methods for graphical model structure learning.", "As a consequence, we demonstrate that MLMs are implicitly trained to recover statistical dependencies among observed tokens.", "Our starting point is the observation that predicting a single feature ($x_i$) from all others ($X_{\setminus i}$) is the core subroutine in the classic Gaussian graphical model structure learning algorithm of Meinshausen and Bühlmann (2006).", "In this approach, $L$ different Lasso regression models are trained (Tibshirani, 1996), with each model predicting $x_i$ from $X_{\setminus i}$, and the nonzero coefficients of this regression correspond to the conditional dependence structure of the graphical model.", "The MLM objective can be interpreted as a nonlinear extension of this approach, much like a classical algorithm that uses conditional mutual information (MI) estimators to recover a graphical model (Anandkumar et al., 2012).", "Despite the similarity, real-world texts are better viewed as models with latent variables (e.g. topics; Blei et al., 2003), and many dependencies across tokens arise due to latent variables, which makes learning the direct dependencies difficult.", 
"We show that MLMs implicitly recover the latent variables and can capture the direct dependencies while accounting for the effect of latent variables.", "Finally, MLMs are only approximations to the true distribution, and we show that the MLM objective can induce high-quality approximations of conditional MI.", "Analysis setup.", "To better understand MLMs as a way to recover graphical model structures, we show that mask-based models can recover latent variables and the direct dependencies among variables in the Gaussian graphical model setting of Meinshausen and Bühlmann (2006).", "Let $X = [x_1, \ldots, x_L] \in \mathbb{R}^L$ represent an input sequence where each of its coordinates $x_i$ represents a token, and let $Z \in \mathbb{R}^k$ be a latent variable that controls the sequence generation process.", "We assume that all coordinates of $X$ are dependent on the latent variable $Z$, and there are sparse dependencies among the observed variables (Figure 4).", "In other words, we can write $Z \sim \mathcal{N}(0, \Sigma_{ZZ})$ and $X \sim \mathcal{N}(AZ, \Sigma_{XX})$.", "Intuitively, we can imagine that $Z$ represents shared semantic information, e.g. a topic, and $\Sigma_{XX}$ represents the syntactic dependencies.", "In this Gaussian graphical model, the MLM is analogous to regressing each coordinate of $X$ on all other coordinates, which we refer to as masked regression.", "MLM representations can recover latent variables.", "We now study the behavior of masked regression through the representation $x_{\text{mask},i}$ that is obtained by applying masked regression on the $i$-th coordinate of $X$ and using the predicted values.", "Our result shows that masked regression is similar to the two-step process of first recovering the latent variable $Z$ from $X_{\setminus i}$ and then predicting $x_i$ from $Z$.", "Let $\Sigma_{XX, \setminus i, i} \in \mathbb{R}^{d-1}$ be the vector formed by dropping the $i$-th row and taking the $i$-th column of $\Sigma_{XX}$, and let $\beta_{\text{2SLS},i}$ be the linear map resulting from the two-stage regression $X_{\setminus i} \to Z \to x_i$.", "In other words, masked regression implicitly recovers the subspace that we would get if we first explicitly recovered the latent variables ($\beta_{\text{2SLS},i}$), with an error term that scales with the off-diagonal terms in $\Sigma_{XX}$.", "The proof is presented in Appendix C.", "To give additional context for this result, let us consider the behavior of a different representation learning algorithm: PCA.", "It is well known that PCA can recover the latent variables as long as $\Sigma_{ZZ}$ dominates the covariance $\mathrm{Cov}(X)$.", "We state this result in terms of $X_{\text{PCA}}$, the observed data projected onto the first $k$ components of PCA.", 
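A small simulation makes this analysis setup concrete. The numpy sketch below, under assumed values for L, k, and a near-diagonal Sigma_XX, performs masked regression of one coordinate on the rest and checks how well the prediction correlates with the latent signal (AZ)_i.

```python
import numpy as np

rng = np.random.default_rng(0)
L, k, n = 12, 3, 20000                        # sequence length, latent dim, samples
A = rng.normal(size=(L, k))
Z = rng.normal(size=(n, k))                   # Z ~ N(0, I)
X = Z @ A.T + 0.1 * rng.normal(size=(n, L))   # X ~ N(AZ, 0.01 I): small Sigma_XX

i = 0
X_rest = np.delete(X, i, axis=1)              # X_{\i}: all coordinates but the i-th
beta, *_ = np.linalg.lstsq(X_rest, X[:, i], rcond=None)
x_mask_i = X_rest @ beta                      # masked-regression representation

latent_signal = Z @ A[i]                      # the latent part (AZ)_i
print(np.corrcoef(x_mask_i, latent_signal)[0, 1])  # near 1 when Sigma_XX is small
```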
"Proposition 2.", "Let $\lambda_k$ be the $k$-th eigenvalue of $A \Sigma_{ZZ} A^\top$, let $\lambda_{XX,k+1}$ be the $(k+1)$-th eigenvalue of $\Sigma_{XX}$, and let $V$ be the first $k$ eigenvectors of $\mathrm{Cov}(X)$.", "Assuming $\lambda_k > \lambda_{XX,k+1}$, we have $\mathbb{E}_X \lVert AZ - X_{\text{PCA}} \rVert_2 \le \frac{2 \lVert \Sigma_{XX} \rVert_{\text{op}}}{\lambda_k - \lambda_{XX,k+1}} \big( \lVert AZ \rVert_2 + \sqrt{\mathrm{tr}(\Sigma_{XX})} \big) + \lVert AA^\top \rVert_{\text{op}} \sqrt{\mathrm{tr}(\Sigma_{XX})}$, where $\lVert \cdot \rVert_{\text{op}}$ is the operator norm and $\mathrm{tr}(\cdot)$ is the trace.", "This shows that whenever $\Sigma_{XX}$ is sufficiently small and $\lambda_k$ is large (i.e., the covariance is dominated by $Z$), PCA recovers the latent information in $Z$.", "The proof is based on the Davis-Kahan theorem (Stewart and Sun, 1990) and is presented in Appendix C.", "Comparing the bound for PCA and that for masked regression, both have errors that scale with $\Sigma_{XX}$, but the key difference is that the error term for masked regression does not scale with the per-coordinate noise ($\mathrm{diag}(\Sigma_{XX})$) and thus can be thought of as focusing exclusively on interactions within $X$.", "Analyzing this more carefully, we find that $\Sigma_{XX, \setminus i, i}$ corresponds to the statistical dependencies between $x_i$ and $X_{\setminus i}$, which we might hope captures useful, task-agnostic structures such as syntactic dependencies.", "MLM log-probabilities can recover direct dependencies.", "Another effect of latent variables is that many tokens have indirect dependencies through the latent variables, which poses a challenge to recovering the direct dependencies among tokens.", "We now show that MLMs can account for the effect of latent variables.", "In the case where there are no latent variables, we can identify the direct dependencies via conditional MI (Anandkumar et al., 2012), because any $x_i$ and $x_j$ that are disconnected in the graphical model will have zero conditional MI, i.e., $I(x_i; x_j \mid X_{\setminus \{i,j\}}) = 0$.", "One valuable aspect of MLM is that we can identify direct dependencies even in the presence of latent variables.", "If we naively measure statistical dependency by mutual information, the coordinates of $X$ would appear dependent on each other because they are all connected with $Z$.", "However, the MLM objective resolves this issue by conditioning on $X_{\setminus \{i,j\}}$.", "We show that latent variables (such as topics) that are easy to predict from $X_{\setminus \{i,j\}}$ can be ignored when considering conditional MI.", "Proposition 3.", "The gap between conditional MI with and without latent variables is bounded by the conditional entropy $H(Z \mid X_{\setminus \{i,j\}})$: $\lvert I(x_i; x_j \mid X_{\setminus \{i,j\}}) - I(x_i; x_j \mid Z, X_{\setminus \{i,j\}}) \rvert \le 2 H(Z \mid X_{\setminus \{i,j\}})$.", "This suggests that when the context $X_{\setminus \{i,j\}}$ captures enough of the latent information, conditional MI can remove the confounding effect of the shared topic $Z$ and extract the direct and sparse dependencies within $X$ (see Appendix C for the proof).", "MLM objective encourages capturing conditional MI.", "We have now shown that conditional MI captures direct dependencies among tokens, even in the presence of latent variables.", "Next, we will show that the MLM objective ensures that an LM with low log-loss accurately captures the conditional MI.", "We now show that learning the MLM objective implies high-quality estimation of conditional MI.", "Denote by $X(i, v)$ the sequence obtained by substituting $x_i$ with a new token $v$: $X(i, v) = \langle x_1, \ldots, x_{i-1}, v, x_{i+1}, \ldots, x_L \rangle$.", 
"Conditional MI is defined as the expected pointwise mutual information (PMI) conditioned on the rest of the tokens: $I_p = \mathbb{E}_{x_i, x_j} \big[ \log p(x_i \mid X_{\setminus i}(j, x_j)) - \log \mathbb{E}_{x_j'} \, p(x_i \mid X_{\setminus i}(j, x_j')) \big]$, where the inner expectation is over $x_j' \sim p(x_j' \mid X_{\setminus \{i,j\}})$ and $I_p$ abbreviates $I_p(x_i; x_j \mid X_{\setminus \{i,j\}})$.", "Our main result is that the log-loss MLM objective directly bounds the gap between the true conditional mutual information under the data distribution and an estimator that uses the log-probabilities from the model.", "More formally, Proposition 4: let $I_{\hat p} = \mathbb{E}_{x_i, x_j} [\log \hat p(x_i \mid X_{\setminus i}(j, x_j)) - \log \mathbb{E}_{x_j'} \, \hat p(x_i \mid X_{\setminus i}(j, x_j'))]$ be the estimator constructed from the model distribution $\hat p$; then $\lvert I_p - I_{\hat p} \rvert \le \mathbb{E}_{x_j} \, D_{\text{kl}} \big( p(x_i \mid X_{\setminus i}(j, x_j)) \,\|\, \hat p(x_i \mid X_{\setminus i}(j, x_j)) \big)$, where $D_{\text{kl}}$ represents the KL-divergence.", "Here, the KL-divergence corresponds to the $\mathcal{L}_{\text{MLM}}$ objective, up to a constant entropy term that depends on $p$.", "We present the proof in Appendix C.", "In other words, the MLM objective implicitly encourages the model to match its implied conditional MI to that of the data.", "We now use this result to create an estimator that extracts the conditional independence structures implied by MLM.", "Our earlier analysis in Proposition 4 suggests that an MLM with low loss has an accurate approximation of conditional mutual information.", "Using this result, we will now propose a procedure that estimates $I_{\hat p}$.", "The definition of $I_{\hat p}$ shows that if we can access samples of $x_i$ and $x_j$ from the true distribution $p$, then we can directly estimate the conditional mutual information by using the log-probabilities from the MLM.", "Unfortunately, we cannot draw new samples of $x_j \mid X_{\setminus \{i,j\}}$, leading us to approximate this distribution using Gibbs sampling on the MLM distribution.", "Our Gibbs sampling procedure is similar to the one proposed in Wang and Cho (2019).", "We start with $X^0 = X_{\setminus \{i,j\}}$.", "For the $t$-th iteration, we draw a sample $x_i^t$ from $\hat p(x_i \mid X^{t-1}_{\setminus i})$ and update $X^t = X^{t-1}(i, x_i^t)$.", "Then, we draw a sample $x_j^t$ from $\hat p(x_j \mid X^t_{\setminus j})$ and set $X^t = X^t(j, x_j^t)$.", "We repeat and use the samples $(x_i^1, x_j^1), \ldots, (x_i^t, x_j^t)$ to compute the expectations for conditional MI.", 
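The following is a rough sketch of this estimation procedure. The `cond_dist` callable is a hypothetical stand-in for an MLM's conditional distribution at a masked position (in practice, a softmax over a pretrained model's logits), and averaging over the sampled x_j values is one plausible way to realize the inner expectation.

```python
import math
import random

def gibbs_conditional_mi(seq, i, j, cond_dist, n_samples=50, burn_in=10):
    """Monte-Carlo sketch of the conditional-MI estimator described above.

    seq: list of tokens; positions i and j are resampled by Gibbs steps.
    cond_dist(tokens, pos) -> {token: prob}, the model's distribution for
    the masked position pos (a hypothetical interface).
    """
    cur = list(seq)
    samples = []
    for t in range(burn_in + n_samples):
        d = cond_dist(cur, i)                               # x_i^t ~ p(x_i | X_{\i})
        cur[i] = random.choices(list(d), weights=list(d.values()))[0]
        d = cond_dist(cur, j)                               # x_j^t ~ p(x_j | X_{\j})
        cur[j] = random.choices(list(d), weights=list(d.values()))[0]
        if t >= burn_in:
            samples.append((cur[i], cur[j]))

    xjs = [xj for _, xj in samples]                         # empirical x_j samples
    total = 0.0
    for xi, xj in samples:
        joint = cond_dist(cur[:j] + [xj] + cur[j + 1:], i)[xi]
        # Approximate the inner expectation by averaging over the sampled x_j values.
        marg = sum(cond_dist(cur[:j] + [v] + cur[j + 1:], i)[xi]
                   for v in xjs) / len(xjs)
        total += math.log(joint) - math.log(marg)
    return total / len(samples)
```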
"This procedure relies upon an additional assumption that samples drawn from the MLM are faithful approximations of the data-generating distribution.", "However, we show empirically that even this approximation is sufficient to test the hypothesis that the conditional independences learned by an MLM capture syntactic dependencies (Section 5.2).", "We now test two predictions from our analyses.", "First, similar to our observation in the case study, we show that cloze-like masks do not explain the success of uniform masks on three real-world datasets.", "Second, our alternative view relating MLM to graphical models suggests that statistical dependencies learned by MLMs may capture linguistic structures useful for downstream tasks.", "We demonstrate this by showing that MLMs' statistical dependencies reflect syntactic dependencies.", "Setup.", "We now demonstrate that real-world tasks and MLMs show a gap between task-specific cloze masks and random masks.", "We compare the MLM with random masking to two different control groups.", "In the positive control (CLOZE), we pretrain with only cloze-like masks, and in the negative control (NOCLOZE), we pretrain by explicitly excluding cloze-like masks.", "If the success of MLM can be mostly explained by implicit cloze reductions, then we should expect CLOZE to have strong downstream performance while NOCLOZE leads to a minimal performance gain.", "We compare pretraining with the uniform masking strategy used in BERT (UNIFORM) to these two control groups.", "If UNIFORM performs worse than the positive control and more similarly to the negative control, then we know that uniform masking does not leverage cloze-like masks effectively.", "Simulating Pretraining.", "Given computational constraints, we cannot retrain BERT from scratch.", "Instead, we approximate the pretraining process by continuing to update BERT with MLM (Gururangan et al., 2020), which we refer to as second-stage pretraining.", "Although this is an approximation to the actual pretraining process, second-stage pretraining shares the same fundamental question as pretraining: how can unsupervised training lead to downstream performance gains?", "We study the effectiveness of different masking strategies by comparing to a BERT model without second-stage pretraining (VANILLA).", "We experiment with three text classification datasets: SST-2 (Socher et al., 2013), Hyperpartisan (Kiesel et al., 2019), and AGNews (Zhang et al., 2015).", "SST-2 classifies movie reviews by binary sentiment; Hyperpartisan is a binary classification task on whether a news article takes an extreme partisan standpoint; and AGNews classifies news articles into four different topics.", "On SST-2 and AGNews, we perform the second-stage pretraining on the training inputs (not using the labels).", "On Hyperpartisan, we use 100k unlabeled news articles that are released with the dataset.", "For SST-2 and AGNews, we study a low-resource setting and set the number of finetuning examples to 20.", "For Hyperpartisan, we use the training set, which has 515 labeled examples.", "All evaluations are performed by fine-tuning a bert-base-uncased model (see Appendix A for full details).", "Approximating Cloze-like Masking.", "We cannot identify the optimal set of cloze-like masks for an arbitrary downstream task, but these three tasks have associated lexicons which we can use to approximate the cloze-like masks.", "For SST-2, we take the sentiment lexicon selected by Hu and Liu (2004); for Hyperpartisan, we take the NRC word-emotion association lexicon (Mohammad and Turney, 2013); and for AGNews, we extract topic words by training a logistic regression classifier and taking the top 1k features to be cloze-like masks.", 
"Results.", "Figure 5 plots the finetuning performance of the different masking strategies (validation accuracy for VANILLA, NOCLOZE, UNIFORM, and CLOZE).", "We observe that UNIFORM outperforms VANILLA, which indicates that second-stage pretraining is extracting useful information and our experimental setup is useful for studying how MLM leads to performance gains.", "As expected, CLOZE achieves the best accuracy, which confirms that cloze-like masks can be helpful and validates our cloze approximations.", "The UNIFORM mask is much closer to NOCLOZE than to CLOZE.", "This suggests that uniform masking does not leverage cloze-like masks well and that cloze reductions alone cannot account for the success of MLM.", "This view is further supported by the observation that NOCLOZE outperforms VANILLA, suggesting that generic masks that are not cloze-like still contain useful inductive biases.", "Our results support our earlier view that there may be an alternative mechanism that allows generic masks that are not cloze-like to benefit downstream learning.", "Next, we will empirically examine BERT's learned conditional independence structure among tokens and show that the statistical dependencies relate to syntactic dependencies.", "Our analysis in Section 4.1 shows that conditional MI (which is optimized by the MLM objective) can extract conditional independences.", "We will show that statistical dependencies estimated by conditional MI are related to syntactic dependencies by using conditional MI for unsupervised parsing.", "Background.", "One might expect that the statistical dependencies among words are correlated with syntactic dependencies.", "Indeed, Futrell et al. (2019) show that heads and dependents in dependency parse trees have high pointwise mutual information (PMI) on average.", "However, previous attempts (Carroll and Charniak, 1992; Paskin, 2002) show that unsupervised parsing approaches based on PMI achieve close-to-random accuracy.", "Our analysis suggests that MLMs extract a more fine-grained notion of statistical dependence (conditional MI) which does not suffer from the existence of latent variables (Proposition 3).", "We now show that the conditional MI captured by MLMs achieves far better performance, on par with classic unsupervised parsing baselines.", "Baselines.", "We compare conditional MI to PMI as well as conditional PMI, an ablation in which we do not take the expectation over possible words.", "For all statistical-dependency-based methods (cond. MI, PMI, and cond. PMI), we compute the pairwise dependence for each word pair in a sentence and construct a minimum spanning tree on the negative values to generate parse trees, as sketched below.", "To contextualize our results, we compare against three simple baselines: RANDOM, which draws a random tree on the input sentence; LINEARCHAIN, which links adjacent words in a sentence; and a classic unsupervised parsing method (Klein and Manning, 2004).", 
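A minimal sketch of the tree construction and scoring: Prim's algorithm over pairwise dependence scores (equivalently, a minimum spanning tree over the negated scores) and UUAS against gold undirected edges. The score matrix here is a toy stand-in for the conditional-MI estimates.

```python
def mst_parse(score):
    """Prim's algorithm on pairwise dependence scores (higher = more
    dependent); returns undirected tree edges as frozensets."""
    L = len(score)
    in_tree, edges = {0}, []
    while len(in_tree) < L:
        best = max(((i, j) for i in in_tree for j in range(L) if j not in in_tree),
                   key=lambda e: score[e[0]][e[1]])
        edges.append(frozenset(best))
        in_tree.add(best[1])
    return edges

def uuas(pred_edges, gold_edges):
    """Undirected unlabeled attachment score: fraction of gold edges recovered."""
    gold = {frozenset(e) for e in gold_edges}
    return len(set(pred_edges) & gold) / len(gold)

# Toy 4-token sentence with a hypothetical dependence-score matrix.
S = [[0, 3, 1, 0.2], [3, 0, 2, 0.1], [1, 2, 0, 2.5], [0.2, 0.1, 2.5, 0]]
pred = mst_parse(S)
print(uuas(pred, [(0, 1), (1, 2), (2, 3)]))  # 1.0
```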
"Experimental Setup.", "We conduct experiments on the English Penn Treebank using the WSJ corpus and convert the annotated constituency parses to the Stanford Dependency Formalism (de Marneffe et al., 2006).", "Following Yang et al. (2020), we evaluate on sentences of length at most 10 in the test split, which contains 389 sentences (Appendix B.1 describes the same experiment on longer sentences, which have similar results).", "We experiment with the bert-base-cased model (more details in Appendix A) and evaluate by the undirected unlabeled attachment score (UUAS).", "Results.", "Table 1 shows a much stronger-than-random association between conditional MI and dependency grammar.", "In fact, the parses extracted from conditional MI have better quality than LINEARCHAIN and the classic method (Klein and Manning, 2004).", "Unlike conditional MI, PMI has only close-to-random performance, which is consistent with prior work.", "We also see that conditional MI outperforms conditional PMI, which is consistent with our theoretical framework suggesting that conditional MI (and not PMI) recovers the graphical model structure.", "We also perform a fine-grained analysis by investigating relations where conditional MI differs from LINEARCHAIN.", "Because the test split is small and conditional MI does not involve any training, we perform this analysis on 5,000 sentences from the training split.", "Table 2 presents the results and shows that conditional MI does not simply recover the linear-chain bias.", "Meanwhile, we also observe a deviation between conditional MI and dependency grammar on relations like number and cc.", "This is reasonable because certain aspects of dependency grammar depend on human conventions that do not necessarily have a consensus (Popel et al., 2013).", "Figure 6 illustrates with an example parse extracted from conditional MI for the sentence 'The above represents a triumph of either apathy or civility.'", "We observe that conditional MI correctly captures dobj and conj.", "Knowing the verb, e.g. 'represents', limits the range of objects that can appear in a sentence, so intuitively we expect a high conditional MI between the direct object and the verb.", 
"Similarly, for phrases like 'A and B', we would expect A and B to be statistically dependent.", "Instead, it links 'or' with 'either', which certainly has statistical dependence.", "This once again suggests that the errors incurred by the conditional PMI method are not simply failures to estimate dependence but natural differences in the definition of dependence.", "We study how MLM with uniform masking can learn useful linguistic structures and inductive biases for downstream tasks.", "Our work demonstrates that a substantial part of the performance gains of MLM pretraining cannot be attributed to task-specific, cloze-like masks.", "Instead, learning with task-agnostic, generic masks encourages the model to capture direct statistical dependencies among tokens, and we show through unsupervised parsing evaluations that these have a close correspondence to syntactic structures.", "Existing work has suggested that statistical and syntactic dependencies are fundamentally different, with unsupervised parsing based on PMI achieving close-to-random performance.", "Our work demonstrates that this is not necessarily the case, and better measures of statistical dependence (such as those learned by MLMs) can serve as implicit supervision for learning syntactic structures.", "Our findings open new space for future work on how syntax can be learned in an emergent way and on how to design masking strategies that further improve dependency learning.", "We thank Rishi Bommasani, Lisa Li, Kawin Ethayarajh, the anonymous reviewers and the Stanford NLP group for their helpful feedback and discussions." ]
[ "method", "abstain", "result", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "other", "abstain", "objective", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "objective", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "abstain", "objective", "objective", "other" ]
[ "Self-attention mechanism has been shown to be an effective approach for capturing global context dependencies in sequence modeling, but it suffers from quadratic complexity in time and memory usage.", "Due to the sparsity of the attention matrix, much computation is redundant.", "Therefore, in this paper, we design an efficient Transformer architecture, named Fourier Sparse Attention for Transformer (FSAT), for fast long-range sequence modeling.", "We provide a brand-new perspective for constructing sparse attention matrix, i.e. making the sparse attention matrix predictable.", "Two core submodules are: (1) A fast Fourier transform based hidden state cross module, which captures and pools L 2 semantic combinations in O ( L log L ) time complexity.", "(2) A sparse attention matrix estimation module, which predicts dominant elements of an attention matrix based on the output of the previous hidden state cross module.", "By reparameterization and gradient truncation, FSAT successfully learned the index of dominant elements.", "The overall complexity about the sequence length is reduced from O ( L 2 ) to O ( L log L ) .", "Extensive experiments (natural language, vision, and math) show that FSAT remarkably outperforms the standard multi-head attention and its variants in various long-sequence tasks with low computational costs, and achieves new state-of-the-art results on the Long Range Arena benchmark.", "Models based on the Transformer architecture (Vaswani et al., 2017) have been firmly established as state of the art approaches across a range of domains like language (Brown et al., 2020; Clark et al., 2020; Devlin et al., 2018), and vision (Carion et al., 2020; Dosovitskiy et al., 2020).", "The Transformer architecture perceiving long-range context heavily relies on the multi-head self-attention mechanism, in which the relevance of every token pairs is computed to decide the attention scores and to-ken's representations are the weighted average of all tokens using the attention scores.", "Despite its effectiveness, self-attention mech-anism's quadratic time and memory complexity about the sequence length is an obstacle to extend Transformer for very long sequences, such as document-level text tasks, high-resolution images, videos, etc.", "Shen et al. (2021, 2018) elaborate the issue of high computational complexity.", "For instance, more than 68GB GPU memory and 1.6T multiply-accumulation operations are required for a 64 64 32 3D feature volume.", "Great efforts have been made to develop Trans-former's variants for long-range sequence modeling tasks.", "Tay et al. 
"(a) Fixed patterns or combinations of patterns (Beltagy et al., 2020; Zaheer et al., 2020), in which the field to be attended is pre-defined by a fixed pattern.", "(b) Learnable patterns (Kitaev et al., 2020; Tay et al., 2020a), in which tokens are sorted or clustered in a data-driven fashion.", "(c) Memory (Ma et al., 2021; Lee et al., 2019), in which special tokens with a global view are introduced to compress the input sequence.", "(d) Low-rank methods (Tay et al., 2021; Wang et al., 2020), which adopt low-rank approximations of the self-attention matrix.", "(e) Kernels (Katharopoulos et al., 2020; Choromanski et al., 2020a,b), which view the attention mechanism through kernelization.", "(f) Recurrence (Rae et al., 2019), which connects multiple segments via a recurrence structure.", "Despite their variety, approximating the quadratic-cost attention matrix by exploiting the sparsity of the attention matrix is the common idea.", "In this paper, we propose predictable sparse attention and name it Fourier Sparse Attention for Transformer (FSAT), because the fast Fourier transform is a key operation in our method.", "FSAT offers a brand-new perspective on efficient Transformers, i.e. learning the sparse structure of an attention matrix in an end-to-end fashion.", "Specifically, we first compute the semantic relevance of token pairs and then use it to predict the indices of the dominant (non-zero) elements of an attention matrix; finally, attention scores are filled in according to the predicted sparse structure.", "In this process, two problems have to be solved: (1) efficiently capturing the semantic relevance of $L^2$ token pairs, where $L$ is the length of an input sequence.", "(2) Learning discrete indices with a gradient descent algorithm.", "To this end, we propose the pooled hidden state cross to efficiently calculate and compress semantic relevance in $O(L \log L)$ time complexity.", "For end-to-end training, we obtain continuous and meaningful gradients for learning discrete indices by reparameterization and gradient truncation.", "Consequently, FSAT is out of the scope of Tay et al. (2020c)'s taxonomy.", 
"It's worth noting that predictable sparse attention is different from the methods of learnable patterns.", "Although these methods use learnable algorithms to sort or cluster tokens, they still exploit fixed (chunked) patterns.", "Instead, FSAT directly predicts the sparse structure of an attention matrix without any pre-defined pattern.", "In order to fit the predicted sparse attention matrix, the key and value vectors in the self-attention mechanism are projected from pooled hidden state cross vectors, which can be viewed as second-order features of the tokens.", "As an extra benefit, the model's expressiveness may increase.", "Therefore, unlike some efficient Transformer variants which approximate the quadratic-cost attention matrix at the expense of accuracy, FSAT not only reduces computational complexity but also improves model accuracy on some tasks.", "On the Long Range Arena benchmark, FSAT outperforms the Transformer and several recent efficient self-attention methods by a large margin.", "To summarize, our contributions are as follows: we propose Fourier Sparse Attention for Transformer (FSAT) to extend the Transformer to long sequences.", "The overall complexity with respect to the sequence length is reduced from $O(L^2)$ to $O(L \log L)$.", "We introduce the pooled hidden state cross to implement FSAT.", "Empirically, extensive experiments (natural language, vision, and math) demonstrate the advantages of our proposed methods, and new state-of-the-art results are achieved on the Long Range Arena benchmark.", "Tay et al. (2020c) have provided a comprehensive overview of existing efficient Transformers.", "Some promising models are compared with our method in the experiments.", "Big Bird (Zaheer et al., 2020) uses random, sliding-window, and global attention to build a hybrid attention pattern.", "Performer (Choromanski et al., 2020a,b) utilizes orthogonal random features to approximate softmax-attention kernels with linear complexity.", "Linformer (Wang et al., 2020) achieves linear complexity by adopting random projections based on the JL lemma to compress the attention length to a fixed length.", "Longformer (Beltagy et al., 2020) combines local windowed attention with task-motivated global attention for long documents.", "Reformer, proposed in Kitaev et al. (2020), clusters similar tokens by locality-sensitive hashing, and dot-product attention is performed inside clusters.", "Feature crossing, which synthesizes crossing combinations of features, is a widely used technique for extending features' predictive ability in machine learning.", "For example, Takahashi et al. (2018) demonstrate a gender identification system leveraging the synergy of both texts and images via the feature cross technique.", "Yu et al. (2018) and Seo et al. (2016) utilize crossed features to design a trilinear attention function.", "Chen et al. (2021) explore how to search for the best feature crosses by submodular optimization.", "More research involving feature crosses focuses on feature selection (Zadeh et al., 2017; Wei et al., 2015; Hoque et al., 2014; Nie et al., 2010; Guyon and Elisseeff, 2003; Kwak and Choi, 2002; Rogati and Yang, 2002; Weston et al., 2000).", "Recently, Fourier transforms in the Transformer have garnered interest.", "Choromanski et al. (2020a,b) propose Performer by approximating softmax attention kernels via orthogonal random Fourier features.", "Tamkin et al. (2020) propose the BERT + Prism model, which uses spectral filters on the activations of neurons to produce multi-scale representations, and obtained positive experimental results.", 
"Figure 1: The process of feature mapping, hidden state cross, and sum-pooling along antidiagonals, corresponding to Formula 2.", "More radically, Lee-Thorp et al. (2021) reform the Transformer by replacing the entire self-attention sub-layer with discrete Fourier transforms along the sequence dimension and the hidden dimension, respectively.", "In this section, we start by explaining the motivation for introducing the pooled hidden state cross, then introduce how to compute the pooled hidden state cross, and finally discuss the way to equip self-attention with the pooled hidden state cross.", "Three desiderata motivate our use of the pooled hidden state cross: (1) long-range semantic dependency and relevance can be captured by hidden state crosses, since the combinations of every token pair are included.", "Capturing semantic relevance is also the basis of the predictable sparse attention proposed in the next section.", "(2) The hidden state cross is a way to extract second-order token features; intuitively, it may generate more expressive feature representations.", "(3) Crossing and pooling hidden states are conducted depth-wisely, so that they can be efficiently implemented via the fast Fourier transform.", "Inspired by the feature cross technique, we propose the concept of the hidden state cross.", "Briefly speaking, the feature cross technique (a.k.a. feature combination) synthesizes a new feature $xy$ by multiplying feature $x$ and feature $y$.", "We extend it to the level of the hidden states of deep learning models.", "Specifically, given the hidden states of a token sequence $x_0, \ldots, x_{L-1}$, we define $c_{ij} = f_1(x_i) \odot f_2(x_j)$ (Formula 1) as the crossed hidden state vector of the $i$-th and the $j$-th tokens, where $f(\cdot)$ is a parameterized nonlinear feature mapping function, subscripts indicate different parameters, and $\odot$ is the Hadamard product.", "We expect that the semantic combination of two tokens can be learned and encoded in the crossed hidden state vector.", "Problems arise when computing the hidden state crosses of $L^2$ token pairs.", "Firstly, the computational complexity is $O(L^2)$ with respect to the sequence length $L$, which is computationally prohibitive for long sequences.", "Secondly, the output would be $L^2$ vectors, which is too large to be attended to in the Transformer model.", "To alleviate these problems, crossed hidden states are sum-pooled in this paper.", "Figure 1 illustrates the computation.", "The pooled hidden state cross $c_k$ represents the sum of the crossed hidden states along the $k$-th antidiagonal, i.e. $c_k = \sum_{i+j=k} c_{ij}$ (Formula 2).", "Therefore, the output vectors are compressed from $L^2$ vectors, i.e. $\{c_{ij}\}_{i,j \in [0, L-1]}$, to $2L-1$ vectors, i.e. $\{c_k\}_{k \in [0, 2L-2]}$.", 
"Formula 2 can be efficiently implemented by the Fast Fourier Transform (FFT).", "Specifically, the hidden states are first non-linearly converted by the feature mapping functions, and then a 1D discrete Fourier transform is applied along the sequence dimension to transform the mapped hidden states into the frequency domain; crossing and pooling are then conducted via multiplication in the frequency domain; finally, by applying the inverse 1D discrete Fourier transform, the pooled hidden state crosses are transformed back from the frequency domain.", "By the Hermitian property, the imaginary part of the output is zero.", "Thus, we can safely keep only the real part of the output and avoid involving complex numbers in the model.", "Formally, $C_0 = \Re(\mathcal{F}^{-1}(\mathcal{F}(f_1(X)) \odot \mathcal{F}(f_2(X))))$ (Formula 3), in which $X \in \mathbb{R}^{L \times D}$ denotes the matrix consisting of the $L$ $D$-dimensional hidden states.", "$\mathcal{F}$ and $\mathcal{F}^{-1}$ are the 1D Fourier transform and the inverse 1D Fourier transform, respectively.", "$\Re(\cdot)$ means keeping the real part of complex numbers.", "$C_0 \in \mathbb{R}^{(2L-1) \times D}$ is the output matrix of pooled hidden state crosses.", "The computational complexity with respect to the sequence length is reduced from $O(L^2)$ to $O(L \log L)$ by the FFT.", "Formula 3 has an output matrix of shape $(2L-1) \times D$, which doubles the sequence length.", "To keep the computational complexity of attention from increasing, the length needs to be reduced.", "As shown in Figure 1, sum-pooling along an antidiagonal produces a symmetric token combination (cross) about a central token.", "Specifically, in even-numbered antidiagonals, the central token is the $\frac{k}{2}$-th token.", "For instance, the token combinations of the 10th antidiagonal include tokens 5-5, 4-6, 3-7, and so on.", "In odd-numbered antidiagonals, the symmetric center lies between the $\lfloor \frac{k}{2} \rfloor$-th token and the $\lceil \frac{k}{2} \rceil$-th token.", "For instance, the 11th antidiagonal includes the token combinations of tokens 5-6, 4-7, 3-8, etc.", "Therefore, we can reduce the length by merging consecutive even-numbered and odd-numbered antidiagonals so that token combinations with nearby symmetric centers are together.", "In the previous example, the token combinations of the 10th antidiagonal and the 11th antidiagonal are summed up.", "Besides, token combinations of two identical tokens are subtracted, e.g. the token combination 5-5.", 
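A minimal PyTorch sketch of Formulas 3 and 4 (before layer normalization): zero-padding to length 2L-1 makes the FFT product compute the exact antidiagonal sums, and using rfft/irfft keeps the computation real-valued, which matches keeping only the real part. Where the zero row is appended to C_1 is not specified in the text and is an assumption here.

```python
import torch
import torch.nn.functional as F

def pooled_hidden_state_cross(x1, x2):
    """x1 = f1(X), x2 = f2(X), both (B, L, D); returns (B, L, D) pre-LayerNorm."""
    B, L, D = x1.shape
    n = 2 * L - 1  # zero-pad so the FFT product yields linear (not circular) sums
    X1 = torch.fft.rfft(x1, n=n, dim=1)
    X2 = torch.fft.rfft(x2, n=n, dim=1)
    # c0[:, k] = sum_{i+j=k} x1[:, i] * x2[:, j]   (Formula 3)
    c0 = torch.fft.irfft(X1 * X2, n=n, dim=1)
    c1 = F.pad(c0[:, 1::2], (0, 0, 0, 1))  # odd antidiagonals, zero row appended
    c2 = c0[:, 0::2]                       # even antidiagonals
    return c1 + c2 - x1 * x2               # Formula 4 before layer normalization
```

A layer normalization applied to the returned tensor completes Formula 4.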
"Formally, $C = \mathrm{LN}(C_1 + C_2 - f_1(X) \odot f_2(X))$ (Formula 4), where $C_1 \in \mathbb{R}^{L \times D}$ (padded with a row of zeros to align its length to $C_2$) and $C_2 \in \mathbb{R}^{L \times D}$ are the odd-numbered and even-numbered rows of matrix $C_0$, respectively; LN denotes layer normalization, which ensures stable training; and $C \in \mathbb{R}^{L \times D}$ is the output.", "In this section, we revise the multi-head self-attention to utilize our pooled hidden state cross.", "The output of an attention layer is calculated as $A(X, C) = (\coprod_{h=1}^{H} \sigma(\frac{Q^h K^{h\top}}{\sqrt{d}}) V^h) W_o$, where $d$ is the dimension of a single head, the superscript $h$ denotes the $h$-th head, $\coprod$ denotes the concatenation operation of the $H$ heads along the last dimension, $\sigma$ is a row-wise scoring function (e.g. softmax), and $W_o \in \mathbb{R}^{Hd \times D}$ is the output projection matrix.", "It's worth noting that the key and value in the attention mechanism are revised: following Formula 6, the queries are projected from $X$, while the keys and values are projected from the pooled hidden state crosses $C$.", "Due to the sparsity of the attention matrix, most of its elements are close to zero; for the sake of simplicity, we call those elements which are much greater than zero dominant elements.", "Based on the pooled hidden state cross, we propose predictable sparse attention, which predicts the dominant elements of the attention matrix, to avoid computing the full attention matrix.", "In this section, we describe the predictable sparse attention by a weighted directed sparse graph, in which the vertexes are the $L$ query/key vectors of the input sequence, its directed edges represent that the head vertex attends to the tail vertex in the attention mechanism, and two weights (i.e. attention score and confidence) are assigned to each edge.", "Figure 2 illustrates a sub-graph with a single key vertex and its in-neighbors.", "The attention matrix is the adjacency matrix of the graph.", "For multi-head attention, each head has a graph computed independently.", "The $i$-th output vector of the proposed predictable sparse attention is defined as $A(X, C)_i = (\coprod_{h=1}^{H} (\sigma(\frac{q_i^h K_{N_i^h}^{h\top}}{\sqrt{d}}) \odot s^h_{i, N_i^h}) V^h_{N_i^h}) W_o$ (Formula 7), where the row vector $q_i^h$ and the matrices $K^h, V^h \in \mathbb{R}^{L \times d}$ are respectively the query, key, and value projected from $X$ or $C$ following Formula 6; $N_i^h$ represents the out-neighbor set of the $i$-th vertex in the $h$-th head's directed graph; when $N_i^h$ is written as a subscript, it means extracting only the matrix rows corresponding to the vertexes in $N_i^h$; and $s^h_{i, N_i^h}$ represents the confidence vector consisting of the confidence scores of the edges pointing from the $i$-th vertex to the vertexes in $N_i^h$ in the $h$-th head's directed graph.", "The challenge of predictable sparse attention is to find out which elements of the attention matrix are dominant, under the condition of not computing the full attention matrix.", "We utilize the pooled hidden state cross because the semantic combination vectors contain information about the relevance of token pairs.", "We introduce attention confidence to help the model learn the sparse structure of an attention matrix.", "Specifically, we define the confidence of the $i$-th query vector $q_i$ attending to the $j$-th key vector $k_j$ as $s_{i \to j} = \phi(i \mid I_j, \sigma^2)$ (Formula 8), in which $\phi$ denotes the probability density function of the Gaussian distribution, $I_j$ is the index of the dominant query vector which attends to the key vector $k_j$ with a dominant attention score (i.e. edge $I_j \to j$ corresponds to a dominant element in the attention matrix), and $\sigma^2$ is a hyper-parameter representing the variance.", 
"We have this definition because of the observation that query vectors far away from the dominant query vector have decreasing probabilities of attending to the key vector.", "The key to making a sparse attention matrix predictable is how to back-propagate gradients through the predicted discrete indices.", "In this paper, the discrete indices are reparameterized.", "A dominant index matrix $\tilde{I} \in \mathbb{R}^{L \times M}$ is predicted based on the pooled hidden state cross $C$: $\tilde{I} = \sigma(C W_I + b_I) \cdot L_{\max}$ (Formula 9), where $W_I \in \mathbb{R}^{D \times M}$ and $b_I \in \mathbb{R}^{M}$ are the learnable weight and bias respectively, $\sigma(\cdot)$ is the sigmoid function, and $L_{\max}$ is the maximum sequence length that the model supports.", "Since there may be multiple dominant query vectors for a key vector, the hyper-parameter $M$ presumes the maximum number of dominant query vectors for a single key vector.", "Given the sparse graph described by an index matrix $I \in \mathbb{N}^{L \times M}$, whose value $I_{jm}$ in the $j$-th row and $m$-th column indicates that there is a directed edge pointing from the $I_{jm}$-th query vector to the $j$-th key vector, the confidence score of each edge can be calculated by evaluating the Gaussian density of Formula 8 at the integer index, i.e. $s_{I_{jm} \to j} = \phi(I_{jm} \mid \tilde{I}_{jm}, \sigma^2)$ (Formula 10), where $\tilde{I}_{jm}$ is the $m$-th predicted dominant index of the $j$-th key vector predicted by Formula 9.", "Therefore, applying the chain rule, the gradient of the confidence scores from a loss function can continue to be propagated through Formula 9 and Formula 10 to the matrix $C$.", "The index matrix $I \in \mathbb{N}^{L \times M}$ decides which edges are considered and which edges are ignored in the sparse graph.", "In this paper, two types of index matrix are adopted: a predicted index matrix $I_p$ (the predictions of Formula 9 rounded to integers) and a random index matrix $I_r \sim \mathcal{U}[0, L-1]$.", "The process of learning the sparse attention matrix can be viewed as a search for the right indices; it is a process of exploring new knowledge and exploiting existing knowledge.", "Therefore, in training, the sparse graph is decided by the union of $I_p$ (exploitation) and $I_r$ (exploration), and, in inference, only $I_p$ is used.", "During back-propagation, the gradient of the attention confidence is truncated into the range $(-\infty, 0]$ for stable convergence.", "The reason is that, for a gradient descent algorithm, positive gradients will decrease the confidence values on edges, which means that such gradients prevent the model from considering these edges in the sparse attention mechanism and tune the model's parameters to change its predicted dominant indices.", "But due to the discreteness of indices, changing the predicted dominant indices to be slightly larger or smaller does not ensure moving closer to the correct dominant indices.", "On the contrary, negative gradients indicate hitting the correct dominant indices, and the model should be tuned using these gradients.", "In terms of computational complexity, the proposed predictable sparse attention has lower computational cost in time and memory usage.", "Specifically, the computational complexity of the pooled hidden state cross includes the feature mapping, $O(LD^2)$, and the fast Fourier transform, $O(LD \log L)$.", "The computational complexity involved in Formula 7 includes computing the attention probabilities, $O(LMD)$; computing the attention confidence, $O(LDM)$; and the matrix multiplications with the value matrix and projection matrices, $O(LMD + LD^2)$.", "The overall computational complexity is $O(LD^2 + LD \log L + LMD)$.", "Since $M$ is always small (e.g. $M = 4$), for long sequences this complexity is much smaller than $O(L^2 D + LD^2)$, which is the complexity of standard multi-head attention.", 
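The following single-head sketch shows one way Formulas 7-10 could fit together, based on our reading of the garbled originals: the integer indices come from rounding the predictions of Formula 9, the edge confidences are (unnormalized) Gaussian densities through which gradients flow, and a dense masked score matrix stands in for the batched scatter/gather operations the paper mentions.

```python
import torch

def predictable_sparse_attention(q, k, v, c, W_I, b_I, L_max, sigma2, M=4):
    """Single-head sketch of predictable sparse attention.

    q: (L, d) queries from X; k, v: (L, d) keys/values projected from the
    pooled crosses; c: (L, D) pooled hidden state crosses.
    """
    L, d = q.shape
    idx_cont = torch.sigmoid(c @ W_I + b_I) * L_max      # Formula 9, shape (L, M)
    idx = idx_cont.round().long().clamp(0, L - 1)        # integer indices I_{jm}
    # Unnormalized Gaussian density of the integer index under the continuous
    # prediction (Formulas 8/10); gradients reach idx_cont and hence c.
    conf = torch.exp(-((idx.float() - idx_cont) ** 2) / (2 * sigma2))

    scores = torch.full((L, L), float('-inf'))
    s = torch.zeros(L, L)
    for j in range(L):            # each key position j ...
        for m in range(M):        # ... keeps only its M predicted dominant queries
            i = int(idx[j, m])
            scores[i, j] = q[i] @ k[j] / d ** 0.5
            s[i, j] = conf[j, m]
    attn = torch.softmax(scores, dim=-1).nan_to_num()    # rows with no edge -> 0
    return (attn * s) @ v                                # core of Formula 7
```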
M = 4), for long sequences this complexity is much smaller than O(L²D + LD²), which is the complexity of standard multi-head attention.", "In terms of memory usage, sparse attention has no need to store the full attention matrix; thus the memory complexity is reduced from O(L²H + LD) to O(LMH + LD).", "We conduct experiments to study the performance of the proposed approach on long sequence modeling tasks.", "As the primary goal, we evaluate the proposed Fourier Sparse Attention for Transformer (FSAT) on multiple tasks requiring long-context perception.", "We test our models on the Long Range Arena (LRA) benchmark (Tay et al., 2020b), since it is specifically designed for evaluating the performance of efficient Transformers on various long sequence tasks, and there are quite a number of baseline models evaluated on this benchmark.", "The LRA benchmark includes five tasks of different kinds and modalities (natural language, vision, and math) in order to simulate meaningful real-world tasks under the long-context scenario.", "ListOps This task requires models to compute the output value of a mathematical expression with a hierarchical structure and operators.", "The sequence lengths are up to 2K.", "Text A byte-level text classification task to probe the model's reasoning ability with compositional, unsegmented characters.", "[Table 2 (excerpt): steps per second and peak memory usage relative to the vanilla Transformer at lengths 1K/2K/3K/4K. Transformer: speed 1.0/1.0/1.0/1.0, memory 1.00/1.00/1.00/1.00; Local Attention: 1.1/1.7/3.2/5.3, 0.49/0.29/0.19/0.14; Linformer: 1.2/1.9/3.7/5.5, 0.44/0.21/0.18/0.1; Reformer: 0.5/0.4/0.7/0.8, 0.56/0.37/0.28/0.24; Sinkhorn Trans.: …]", "Character sequences are truncated or padded to a fixed maximum length of 4K in this task.", "Retrieval A byte-level document retrieval task tests a model's ability to compress long sequences into representations suitable for similarity-based matching.", "Image An image classification task evaluates a model's performance at perceiving 2D spatial relations between input pixels.", "Images are flattened to sequences of length 1K pixels.", "Pathfinder A binary image classification task tests if a model can capture long-range spatial dependencies by judging if two points are connected by a path consisting of dashes in an image with distractor paths.", "32×32 images are flattened to sequences of length 1K pixels.", "We compare our model with a number of promising models, including the vanilla Transformer (Vaswani et al., 2017), a local attention baseline, Sparse Transformer (Child et al., 2019), Longformer (Beltagy et al., 2020), Linformer (Wang et al., 2020), Reformer (Kitaev et al., 2020), Sinkhorn Transformer (Tay et al., 2020a), Synthesizer (Tay et al., 2021), Big Bird (Zaheer et al., 2020), Linear Transformer (Katharopoulos et al., 2020), Performer (Choromanski et al., 2020a,b), and more recent models such as FNet (Lee-Thorp et al., 2021), Nyströmformer (Xiong et al., 2021), and Luna (Ma et al., 2021).", "We run our experiments on the LRA benchmark with configurations based on the open-source codebase of Tay et al. (2020b).", "Specifically, we follow the original data preprocessing and data splits, and keep roughly equivalent model parameters for a fair comparison with the baselines reported in Tay et al. (2020b).", "An exception is that we reproduce the experiments of the Retrieval task with a longer training of 30K steps, because models are not fully converged within 5K training steps.", "Ma et al. (2021); Lee-Thorp et al. (2021); Xiong et al.
(2021) also pointed out the same issue.", "We also re-run the vanilla Transformer using our PyTorch implementation.", "For the proposed FSAT, the default value of the hyper-parameter M is 4, and the variance σ² is empirically set to L_max, which is the maximum sequence length of each task.", "To ensure roughly equivalent model parameters, we reduce the dimension of the FFN layer from 4 times the hidden size to 2 times for FSAT, to offset the increased parameters of the non-linear feature mapping functions (Formula 1).", "In our code, the sparse matrix multiplications involved in the FSAT model are implemented via batch-level scatter operations for better efficiency.", "Median results of 5 runs are reported in the tables.", "Table 1 summarizes the performance of a number of models on the LRA benchmark.", "As we can see, the proposed model clearly outperforms all previously published approaches, achieving new state-of-the-art performance on four of the five datasets and a 5.8% absolute improvement in average performance, which validates the effectiveness of the proposed FSAT model.", "[Figure 3: speed (ms) and peak memory versus sequence length (# tokens) for FSAT and the vanilla Transformer.]", "It is noteworthy that we separately report the average accuracy without the Text task; this is because we find that convolution layers have a significant impact on this task.", "In the experiment, when we adopt a depth-wise separable convolution layer with a kernel size of 5 as the feature mapping function for computing hidden crosses, the accuracy significantly increases from 65.95% to 80.24%.", "Therefore, considering the particularity of the Text task, we report its result separately.", "We suspect that the small data size may be the reason for its particularity.", "The time and memory efficiency of our model and competing approaches are summarized in Table 2.", "Compared with the vanilla Transformer, our FSAT significantly reduces the computational cost, with faster training speed and lower memory usage, which demonstrates that directly predicting the sparse structure of the attention matrix is an effective way to build efficient Transformer architectures.", "The limitation of FSAT is that extra operations (e.g., gathering and slicing) of linear complexity are involved, so FSAT cannot bring its parallelism fully into play.", "Even so, among all compared Transformer variants, FSAT achieves promising results in time and memory efficiency.", "The trend of increasing computational cost is shown in Figure 3.", "This is in line with expectations: FSAT has a linear rate of increase, and its advantage is especially obvious for sequences longer than 4K tokens.", "This demonstrates the potential of the proposed predictable sparse attention for tasks with much longer sequences, e.g.
3D feature volume.", "An ablation study is conducted to verify the necessity of our proposed model components.", "In Table 3, we report FSAT models with the different number of predicted dominant indices.", "The results show ListOps Retrieval Image Avg.", "that for each key vector in the attention mechanism about 4 predicted dominant query vectors are enough for the model to produce high accuracy.", "We also remove the sparse attention module, and test the architecture of only integrating the pooled hidden state cross into the attention mechanism, corresponding to Formula 5.", "We call this architecture Fourier Attention for Transformer (FAT).", "It can be seen from the results of FAT, better results can be obtained with the pooled hidden state cross in some tasks, which supports our hypothesis that the 2-order token feature may generate more expressive feature representations.", "It is noteworthy that without the gradient truncation, or only using random indices I r or I p , the performance significantly drops.", "Besides, the depth-wise convolution based feature mapping performs worse than a fully-connected structure base feature mapping.", "In this paper, we proposed a new efficient Transformer architecture, FSAT, which directly predicts the sparse structure of attention matrix to avoid computing the quadratic-cost full self-attention.", "The proposed approach has the advantages in long-range sequence modeling tasks meanwhile reaching a balance between time, memory, and accuracy.", "We showed the effectiveness of the proposed method in modeling long sequence context using the Long Range Arena benchmark.", "Experimental results showed that state-of-the-art performance is achieved by our proposed method." ]
[ "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "abstain", "objective", "objective" ]
[ "Collaborative filtering (CF) is a core technique for recommender systems.", "Traditional CF approaches exploit user-item relations (e.g., clicks, likes, and views) only and hence they suffer from the data sparsity issue.", "Items are usually associated with unstructured text such as article abstracts and product reviews.", "We develop a Personalized Neural Embedding (PNE) framework to exploit both interactions and words seamlessly.", "We learn such embeddings of users, items, and words jointly, and predict user preferences on items based on these learned representations.", "PNE estimates the probability that a user will like an item by two termsbehavior factors and semantic factors.", "On two real-world datasets, PNE shows better performance than four state-of-the-art baselines in terms of three metrics.", "We also show that PNE learns meaningful word embeddings by visualization.", "Recommender systems are widely used in e-commerce platforms, such as to help consumers buy products at Amazon, watch videos on Youtube, and read articles on Google News.", "They are useful to alleviate the information overload and improve user satisfaction.", "Given history records of consumers such as the product transactions and movie watching, collaborative filtering (CF) is among the most effective approaches based on the simple intuition that if users rated items similarly in the past then they are likely to rate items similarly in the future (Sarwar et al., 2001).", "History records include both implicit (e.g., purchase and clicks) and explicit (e.g., likes/dislikes and ratings) feedback which can be represented as a user-item interaction matrix.", "Typically, observed user-item interactions are incomplete with a large portion remaining not recorded.", "The goal of recommendation is to predict user preferences on these missing interactions.", "This setting requires to complete the partial observed rating matrix.", "Matrix Factorization (MF) techniques which can learn latent factors for users and items are the main cornerstone for CF (Mnih and Salakhutdinov, 2008; Koren, 2008; Koren et al., 2009).", "It is effective and flexible to integrate with additional data sources (Hu et al., 2015).", "Recently, neural networks like Multilayer Perceptron (MLP) are used to learn an interaction function from data with the power of learning highly nonlinear relationships between users and items (Dziugaite and Roy, 2015; Cheng et al., 2016; He et al., 2017; Hu et al., 2018b).", "MF and neural CF exploit user-item behavior interactions only and hence they both suffer from the data sparsity and cold-start issues.", "Items are usually associated with unstructured text, like news articles and product reviews.", "These additional sources are essential for recommendation beyond user-item interactions since they contain independent and diverse information.", "Hence, they provide an opportunity to alleviate the data sparsity issue (Ganu et al., 2009; Hu et al., 2018a).", "For application domains like recommending research papers and news articles, the unstructured text associated with the item is its text content (Wang and Blei, 2011; Wang et al., 2015; Bansal et al., 2016).", "For some domains like recommending products, the unstructured text associated with the item is its user reviews which justify the rating behavior (McAuley and Leskovec, 2013; He and McAuley, 2016).", "These methods adopt topic modelling techniques and neural networks to exploit the item content leading to performance improvement.", "A typical way of 
exploiting text content is to first extract a feature vector for each document by averaging the word embeddings in the document, and then to learn a text factor corresponding to this feature vector (Hu and Dai, 2017).", "These embeddings are pre-trained on a large corpus such as Wikipedia.", "This approach separates the extraction of text features from the learning of user-item interactions.", "These two processes cannot benefit from each other, and errors in the earlier step may propagate to the successive steps.", "Another way is to learn a topic vector using topic modelling (Wang and Blei, 2011; McAuley and Leskovec, 2013; Bao et al., 2014) by aligning behavior factors and topic factors with a link function such as softmax or offset.", "Recently, neural networks have been used to learn a representation from the text using autoencoders (Wang et al., 2015; Zhang et al., 2016), recurrent networks (Bansal et al., 2016), and convolutional networks (Zheng et al., 2017; Catherine and Cohen, 2017).", "These methods treat different words in the document as equally important and do not match word semantics with the specific user.", "Instead, we learn personalized word embeddings with the guidance of user-item interactions.", "That is, the importance of words is learned to match user preferences.", "The attention mechanism can be used to learn these importance weights.", "Memory Networks (MemNet) have been used in recommendation to model item content (Hu et al., 2018c; Huang et al., 2017), capture user neighborhoods (Ebesu et al., 2018), and learn latent relationships (Tay et al., 2018).", "We follow this thread and adapt a MemNet to match word semantics with user preferences.", "In this paper, we propose a novel neural framework to exploit relational interactions and text content seamlessly.", "The proposed Personalized Neural Embedding (PNE) model fuses semantic representations learnt from unstructured text with behavior representations learnt from user-item interactions jointly for effective estimation of user preferences on items.", "PNE estimates the preference probability by two kinds of factors.", "The behavior factor captures the personalized preference of a user for an item, learned from behavior interactions.", "The semantic factor captures the high-level representation attentively extracted from the unstructured text by matching word semantics with user preferences.", "To model the behavior factor, we adopt a neural CF approach, which learns the nonlinear user-item interaction relationships using a neural network (CFNet).", "To model the semantic factor, we adopt a memory network to match word semantics with the specific user via the attention mechanism inherent in the memory module (MemNet), determining which words are highly relevant to the user preferences.", "PNE integrates relational interactions with unstructured text by bridging neural CF and memory networks.", "PNE can also learn meaningful word embeddings.", "We present PNE to jointly learn representations of users, items, and words.", "PNE seamlessly captures nonlinear user-item interaction relationships and matches word semantics with user preferences.", "Denote the set of users by U and the set of items by I.", "We use a rating matrix Y ∈ R^{|U|×|I|} to describe user-item interactions, where each entry y_ui ∈ {0, 1} is 1 (observed entries) if user u has an interaction with item i and 0 (unobserved entries) otherwise.", "Usually the interaction matrix is very sparse, since a user u ∈ U only consumed a very small subset of all
items.", "For the task of item recommendation, each user is only interested in identifying topK items (typically K is small e.g. tens or hundreds).", "Items are ranked by their predicted scores: y ui = f ( u, i | ) , (1) where f is an interaction function and denotes model parameters.", "PNE consists of a CF network (CFNet) to learn a nonlinear interaction function and of a memory network (MemNet) to match word semantics with user preferences.", "The information flow in PNE goes from the input ( u, i ) to the output y ui through the following five modules.", "1. Input: ( u, i ) ( (cid:126)e u ,(cid:126)e i ) This module encodes user-item interaction indices.", "We adopt the one-hot encoding.", "It takes user u and item i , and maps them into one-hot encodings (cid:126)e u { 0 , 1 } |U| and (cid:126)e i { 0 , 1 } |I| where only the element corresponding to that user/item index is 1 and all others are 0.", "2. Embedding: ( (cid:126)e u ,(cid:126)e i ) x ui This module firstly embeds one-hot encodings into continuous representations x u = PT (cid:126)e u and x i = QT (cid:126)e i by embedding matrices P R |U| d and Q R |I| d respectively, where d is the latent dimension.", "It then concatenates them as x ui = [ x u , x i ] to be the input of following CFNet and MemNet modules.", "3. CFNet: x ui z behavior ui This module is a CF approach to exploit user-item interactions.", "It takes continuous representations from the embedding Dataset #user #item #rating #word #density avg.", "module and then transforms to a final behavior factor representation: z behavior ui = ReLU ( W x ui + b ) , (2) where ReLU ( x ) = max(0 , x ) is an activation function, and W and b are connection weights and biases.", "4. MemNet: x ui z semantic ui This module is to model the item content with the guidance of user-item interaction.", "The item content is modelled by memories.", "It takes representations from both the embedding module and the review text d ui associated with the corresponding user-item ( u, i ) into a final semantic factor representation: z semantic ui = (cid:88) j : w j d ui Softmax ( a u,ij ) c j , (3) where the external memory slot c j is an embedding vector for word w j by mapping it with an external memory matrix C .", "The attentive weight a u,ij encodes the relevance of user u to word w j by content-based addressing: a u,ij = x Tui m u,ij , (4) where memory m u,ij is concatenated from internal memory slots { m uj , m ij } which are mapped from word w j by internal memory matrices A u for user attention and A i for item attention.", "5. 
Output: z_ui → ŷ_ui. This module predicts the recommendation score ŷ_ui for a given user-item pair based on the representations of both the behavior factor and the semantic factor from CFNet and MemNet respectively: z_ui = [z_ui^behavior, z_ui^semantic].", "The output is the probability that the input pair is a positive interaction.", "This is achieved by a logistic layer: ŷ_ui = 1 / (1 + exp(−h^T z_ui)), (5) where h is a model parameter.", "We adopt the binary cross-entropy loss:", "L(Θ) = −Σ_{(u,i) ∈ S} [y_ui log ŷ_ui + (1 − y_ui) log(1 − ŷ_ui)],", "where S = Y⁺ ∪ Y⁻ is the union of observed interactions and randomly sampled negative examples.", "The model parameters are Θ = {P, Q, W, b, A, h}, where we use a single word embedding matrix A, sharing all memory matrices A_u, A_i, and C, in order to reduce model complexity.", "The objective function can be optimized by stochastic gradient descent.", "In this section, we evaluate PNE on two datasets against five baselines in terms of three metrics.", "We evaluate on two real-world datasets.", "These are the public Amazon products dataset (McAuley and Leskovec, 2013) and a proprietary Cheetah Mobile news dataset (Hu et al., 2018c; Liu et al., 2018) (see Table 1).", "We preprocess the data following the strategy in (Wang and Blei, 2011).", "The size of the word vocabulary is 8,000.", "We adopt leave-one-out evaluation (Hu et al., 2018b) and use three ranking metrics: hit ratio (HR), normalized discounted cumulative gain (NDCG), and mean reciprocal rank (MRR).", "We compare with five baselines (see Table 2).", "BPR (Rendle et al., 2009) is a latent factor model based on matrix factorization.", "HFT (McAuley and Leskovec, 2013) adopts topic distributions to learn latent factors from text reviews.", "TBPR (Hu and Dai, 2017) extends BPR by integrating text content via word embedding features.", "Word embeddings used in TBPR are pre-trained by GloVe (Pennington et al., 2014).", "MLP (He et al., 2017) is a neural CF approach.", "Note that the CFNet of PNE is an MLP with only one hidden layer.", "LCMR (Hu et al., 2018c) is a deep model for CF with unstructured text.", "Note that the MemNet of PNE is the same as the local MemNet of LCMR with only a one-hop hidden layer.", "Our method is implemented in TensorFlow (Abadi et al., 2016).", "Parameters are randomly initialized from a Gaussian distribution and optimized with Adam (Kingma and Ba, 2015).", "The learning rate is 0.001, the batch size is 128, and the ratio of negative sampling is 1.",
"Results on two datasets are shown in Table 3 and Table 4, respectively.", "We have some observations.", "First, PNE outperforms the neural CF method MLP on both datasets in terms of all three ranking metrics.", "On the Amazon dataset, PNE obtains large relative improvements of 12.3% HR@10, 7.7% NDCG@10, and 6.2% MRR@5.", "On the Cheetah Mobile dataset, PNE obtains large relative improvements of 5.0% HR@5, 4.2% NDCG@5, and 3.9% MRR@5.", "Since the CFNet component of PNE is a neural CF method (with only one hidden layer), the results show the benefit of exploiting unstructured text to alleviate the data sparsity issue faced by CF methods (BPR and MLP).", "Figure 1: Dimension of embedding.", "Second, PNE outperforms the traditional hybrid methods HFT and TBPR on both datasets in terms of all three ranking metrics.", "On the Amazon dataset, PNE obtains significantly large relative improvements of 55.0% HR@5, 28.9% NDCG@5, and 20.4% MRR@5.", "On the Cheetah Mobile dataset, PNE still obtains reasonably large relative improvements of 17.5% HR@10, 1.8% NDCG@10, and 1.9% MRR@10.", "Compared with traditional hybrid methods, which integrate the text content using topic modelling or word embeddings, the results show the benefit of integrating text information through memory networks (and exploiting the interaction data through neural CF).", "Last, PNE outperforms the neural hybrid method LCMR by a large margin on the Amazon dataset, with relative improvements of 16.2% HR@5, 9.6% NDCG, and 7.4% MRR@5.", "PNE obtains reasonable improvements on the Cheetah Mobile dataset, with relative improvements of 3.1% HR@5, 2.8% NDCG, and 2.7% MRR.", "The design of the CFNet of PNE is more reasonable than that of the centralized memory module of LCMR, which is equivalent to using a softmax activation between two hidden layers.", "Figure 3: Visualization of word embeddings.", "The results show the effectiveness of the fusing strategy in PNE, which exploits unstructured text via MemNet and the interaction data via CFNet.", "We first evaluate the effects of the dimensionality of the embedding space.", "The x-axis in Figure 1 is the user/item dimension; the dimensionality of the input to CFNet is double that, since we adopt concatenation.", "It clearly indicates that the embedding should not be too small due to the possibility of information loss and the limits of expressiveness.", "A dimension of 75 (and hence d = 150) is a good tradeoff between recommendation performance and computational burden.", "We next show optimization curves of performance and loss (averaged over all examples) against iterations on the Cheetah Mobile dataset in Figure 2.", "The model learns quickly in the first 20 iterations and improves slowly until 50, while training losses continue to go down and validation losses stabilize.", "An average epoch of PNE takes 68.1s; as a reference, it is 34.5s for MLP using one NVIDIA TITAN Xp GPU.", "We visualize the learned word embeddings A.", "We show that we can learn meaningful semantics for word embeddings, such that words cluster when they have related semantics.", "We give an example showing the neighbors of the word drug in 3D space by projecting the high-dimensional word vectors using TensorFlow, as shown in Figure 3.",
"The top nearest neighbors of drug are: shot, shoots, gang, murder, killing, rape, stabbed, truck, school, police, teenage.", "We can see that they are highly semantically related.", "We may also infer from the Cheetah News corpus that school teenagers have close relationships to the drug issue.", "This should raise a concern for society, and it shows the societal impact of natural language processing (Hovy and Spruit, 2016).", "Try our trained word embeddings.", "We showed that relational interactions can be effectively integrated with unstructured text under a neural embedding model.", "Our method attentively focuses on relevant words to match user preferences with user and item attentions (semantic factor) and captures nonlinear relationships between users and items (behavior factor).", "Experiments show better performance than five baselines on two real-world datasets in terms of three ranking metrics.", "We learn meaningful word embeddings and rethink the societal impact of language processing technology.", "The work is supported by HK CERG projects 16211214/16209715/16244616, NSFC 61673202, and HKPFS PF15-16701." ]
[ "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "method", "abstain", "objective", "other" ]
[ "We present data and methods that enable a supervised learning approach to Open Information Extraction (Open IE).", "Central to the approach is a novel formulation of Open IE as a sequence tagging problem, addressing challenges such as encoding multiple extractions for a predicate.", "We also develop a bi-LSTM transducer, extending recent deep Semantic Role Labeling models to extract Open IE tuples and provide confidence scores for tuning their precision-recall tradeoff.", "Furthermore, we show that the recently released Question-Answer Meaning Representation dataset can be automatically converted into an Open IE corpus which significantly increases the amount of available training data.", "Our supervised model, made publicly available, 1 outperforms the state-of-the-art in Open IE on benchmark datasets.", "Open Information Extraction (Open IE) systems extract tuples of natural language expressions that represent the basic propositions asserted by a sentence (see Figure 1).", "They have been used for a wide variety of tasks, such as textual entailment (Berant et al., 2011), question answering (Fader et al., 2014), and knowledge base population (Angeli et al., 2015).", "However, perhaps due to limited data, existing methods use semi-supervised approaches (Banko et al., 2007; Wu and Weld, 2010), or rule-based algorithms (Fader et al., 2011; Mausam et al., 2012; Del Corro and Gemulla, 2013).", "In this paper, we present new data and methods for Open IE, showing that supervised learning can greatly improve performance.", "Work performed while at Bar-Ilan University.", "1 Our code and models are made publicly available at https://github.com/gabrielStanovsky/ supervised-oie Mercury filling, particularly prevalent in the USA, was banned in the EU, partly because it causes antibiotic resistance.", "(mercury filling; particularly prevalent ; in the USA) (mercury filling; causes ; antibiotic resistance) (mercury filling; was banned ; in the EU; partly because it causes antibiotic resistance) Figure 1 : Open IE extractions from an example sentence.", "Each proposition is composed of a tuple with a single predicate position (in bold), and an ordered list of arguments, separated by semicolons.", "We build on recent work that studies other natural-language driven representations of predicate argument structure, which can be annotated by non-experts.", "Recently, Stanovsky and Dagan (2016) created the first labeled corpus for evaluation of Open IE by an automatic translation from question-answer driven semantic role labeling (QA-SRL) annotations (He et al., 2015).", "We extend these techniques and apply them to the QAMR corpus (Michael et al., 2018), an open variant of QA-SRL that covers a wider range of predicate-argument structures (Section 5).", "The combined dataset is the first corpus that is large and diverse enough to train an accurate extractor.", "To train on this data, we formulate Open IE as a sequence labeling problem.", "We introduce a novel approach that can extract multiple, overlapping tuples for each sentence (Section 3), extending recent deep BIO taggers used for semantic role labeling (Zhou and Xu, 2015; He et al., 2017).", "We also introduce a method to calculate extraction confidence, allowing us to effectively trade off precision and recall (Section 4).", "Experiments demonstrate that our approach out-885 performs state-of-the-art Open IE systems on several benchmarks (Section 6), including three that were collected independently of our work (Xu et al., 2013; de Sa Mesquita et 
al., 2013; Schneider et al., 2017).", "This shows that for Open IE, careful data curation and model design can push the state of the art using supervised learning.", "In this section we survey existing Open IE systems, against which we compare our system, and available data for the task, which we will use for training and testing our model.", "Open IE's original goal (Banko et al., 2007) was to extend traditional (closed) information extraction, such that all of the propositions asserted by a given input sentence are extracted (see Figure 1 for examples).", "The broadness of this definition, along with the lack of a standard benchmark dataset for the task, prompted the development of various Open IE systems tackling different facets of the task.", "While most Open IE systems aim to extract the common case of verbal binary propositions (i.e., subject-verb-object tuples), some systems specialize in other syntactic constructions, including noun-mediated relations (Yahya et al., 2014; Pal and Mausam, 2016), n-ary relations (Akbik and Löser, 2012), or nested propositions (Bhutani et al., 2016).", "Many different modeling approaches have also been developed for Open IE.", "Some of the early systems made use of distant supervision (Banko et al., 2007; Wu and Weld, 2010), while the current best systems use rule-based techniques to extract predicate-argument structures as a post-processing step over an intermediate representation.", "ReVerb (Fader et al., 2011) extracts Open IE propositions from part-of-speech tags; OLLIE (Mausam et al., 2012), ClausIE (Del Corro and Gemulla, 2013) and PropS (Stanovsky et al., 2016) post-process dependency trees; and Open IE4 extracts tuples from Semantic Role Labeling (SRL) structures.", "These systems typically associate a confidence metric with each extraction, which allows end applications to trade off precision and recall.", "Recent work addressed the lack of labeled reference Open IE datasets for comparatively evaluating extractors.", "Stanovsky and Dagan (2016) created a large Open IE corpus (OIE2016) for verbal predicates by automatic conversion from QA-SRL (He et al., 2015), a variant of traditional SRL that labels arguments of verbs with simple, template-based natural language questions.", "Schneider et al.
(2017) aggregated datasets annotated independently in previous Open IE efforts (WEB and NYT (de Sa Mesquita et al., 2013), PENN (Xu et al., 2013), and OIE2016) into a common benchmarking suite.", "In addition to these, we create and make available a new Open IE training corpus, All Words Open IE (AW-OIE), derived from Question-Answer Meaning Representation (QAMR) (Michael et al., 2018), a recent extension of the QA-SRL paradigm to free-form questions over a wide range of predicate types (see Section 5).", "Table 1 presents more details on these datasets.", "In this work, we choose to model an Open IE proposition as a tuple consisting of a single predicate operating over a non-empty set of arguments, where the predicate and the arguments are contiguous spans from the sentence.", "As with traditional (binary) Open IE, every tuple should be asserted by the sentence and the order of the tuple elements should be such that it would be naturally interpretable when reading from left to right (for example, see the third tuple in Figure 1).", "As we show in the following sections, this formulation intuitively lends itself to BIO tagging, while being expressive enough to capture a wide range of propositions.", "Formally, given an input sentence S = (w_1, . . . , w_n), a tuple consists of (x_1, . . . , x_m), where each x_i is a contiguous subspan of S.", "Table 1: Datasets used in this work, following Schneider et al. (2017).", "AW-OIE (All Words Open IE) was created in the course of this work; see Section 5 for details.", "Table 2: Example sentences and respective Open IE extractions.", "The first line in each example presents the input pair (S, p), where S is the input sentence and the predicate head p is denoted with an underline.", "Below the inputs we present the corresponding Open IE extractions.", "The corresponding encodings are presented below the dashed lines, where subscripts indicate the associated BIO label.", "The examples demonstrate: (a) the encoding of a multi-word predicate, (b) several arguments collapsed into the same A0 argument position, and (c) an argument position deviating from the sentence ordering.", "One of the x_i is distinguished as the predicate (marked in bold in Figure 1), while the other spans are considered its arguments.", "Following this definition, we reformulate Open IE as a sequence labeling task, using a custom BIO (Beginning, Inside, Outside) scheme (Ramshaw and Marcus, 1995; Sang and Veenstra, 1999) adapted from recent deep SRL models (He et al., 2017).", "In our formulation, the set of Open IE tuples for a sentence S is grouped by predicate head-word p, as shown in Table 2.",
For instance, example", "(b) lists two tuples for the predicate head born, which is underlined in the sentence.", "Grouping tuples this way allows us to run the model once for each predicate head, and accumulate the predictions across predicates to produce the final set of extractions.", "Open IE tuples deviate from SRL predicate-argument structures in two major respects.", "First, while SRL generally deals with single-word predicates, Open IE uses multi-word predicates that often incorporate modals and embedded predicates.", "For example, the first tuple in the table includes the embedded predicate claimed that he won .", "Second, Open IE generates multiple extractions from a single predicate in certain syntactic constructions (e.g., apposition, co-ordination or corefer-ence).", "For instance, example", "(b) repeats the predicate was born in for the two components of the 3 Beginning, Inside, Outside apposition Barack Obama, a former U.S. president .", "To model these unique challenges, we introduce a custom BIO tagging scheme, shown in Table 2 below the dashed lines.", "Predicates are encoded using the P label type, while arguments are represented using Ai labels, where i represents the argument's position within the extracted Open IE tuple.", "While softer than SRL's predicate-specific argument roles (e.g., ARG0), these argument positions also capture semantic information because they are arranged such that the tuple can be naturally read as a standalone statement, regardless of the complications of the source text's syntax (such as reorderings and long-distance dependen-cies).", "For instance, in the last example in Table 2, the order of the arguments in the Open IE tuple deviates from the ordering in the original sentence due to a relative clause construction (headed by the word Brexit ).", "Finally, multiple extractions per predicate are encoded by assigning the same argument index to all arguments appearing in that position across all of the predicate's extractions.", "For example, note that the A0 argument label appears twice for the apposition in example", "(b).", "To reconstruct the extractions from the BIO labels, we produce an extraction for every possible way of choosing one argument for each index.", "Figure 2 : RNN model architecture.", "Orange circles represent current word features: embedding for word and part of speech.", "Yellow circles represent predicate features, duplicated and concatenated to all other word features.", "Figure 3 : Word label distribution in the training set.", "Our model, named RnnOIE, is a bi-LSTM transducer, inspired by the state of the art deep learning approach for SRL suggested by Zhou and Xu (2015) and He et al. (2017).", "The architecture is shown in Figure 2. 
"Given an input instance of the form (S, p), where S is the input sentence and p is the word index of the predicate's syntactic head, we extract a feature vector feat for every word w_i ∈ S: feat(w_i, p) = emb(w_i) ∘ emb(pos(w_i)) ∘ emb(w_p) ∘ emb(pos(w_p)). Here, emb(w) is a d-dimensional word embedding, emb(pos(w)) is a 5-dimensional embedding of w's part of speech, and ∘ denotes concatenation.", "We duplicate the predicate head's features on all words to allow the model to more directly access this information as it makes predicate-specific word label predictions.", "The features are fed into a bi-directional deep LSTM transducer (Graves, 2012), which computes contextualized output embeddings.", "The outputs are used in softmaxes for each word, producing independent probability distributions over possible BIO tags.", "The model is trained with gold predicate heads, using a per-word maximum likelihood objective.", "Figure 3 depicts the overall word label distribution within the training set.", "The large percentage of O labels demonstrates Open IE's tendency to shorten arguments, compared to SRL, which considers full syntactic constituents as arguments.", "Inference At inference time, we first identify all verbs and nominal predicates in the sentence as candidate predicate heads.", "We use a part-of-speech (POS) tagger to identify verbs, and CatVar's subcategorization frames (Habash and Dorr, 2003) for nominalizations, identifying nouns which share the same frame with a verbal equivalent (e.g., acquisition with acquire).", "We then generate an input instance for each candidate predicate head.", "For each instance, we tag each word with its most likely BIO label under the model, and reconstruct Open IE tuples from the resulting sequence according to the method described in Section 3, with the exception that we ignore malformed spans (i.e., if an A0-I label is not preceded by A0-I or A0-B, we treat it as O).", "Assigning extraction confidence It is beneficial for an Open IE system to associate a confidence value with each predicted extraction to allow for tuning its precision-recall tradeoff.", "Our model does not directly produce confidence values for extractions, but it does assign probabilities to each BIO label that it predicts.", "We experimented with several heuristics to combine these predictions into an extraction-level confidence metric.", "The best performance on the development set was achieved by multiplying the probabilities of the B and I labels participating in the extraction.", "This metric prefers shorter extractions, which correlates well with the requirements of Open IE (Bhutani et al., 2016).", "We also tried taking the maximum or minimum observed single word-label probability.", "Table 3: Comparison of QA-SRL, QAMR, and desired Open IE annotations for an example sentence, adapted from the QAMR corpus.", "Hyperparameters were tuned on the OIE2016 development set.", "The bi-LSTM transducer has 3 layers, and each LSTM cell uses 128 hidden units and a rectified linear (ReLU) activation function (Nair and Hinton, 2010).", "The model was trained for 100 epochs in mini-batches of 50 samples, with 10% word-level dropout.", "The word embeddings were initialized using the 300-dimensional pre-trained GloVe embeddings (Pennington et al., 2014) and were kept fixed during training.", "The part-of-speech embeddings were randomly initialized and updated during training.", "Finally, we use the averaged perceptron part-of-speech tagger (as implemented in spaCy
) to predict parts of speech for input features and verb predicate identification.", "This section describes our approach for automatically extracting Open IE tuples from QAMR (Michael et al., 2018), a recent extension of QA-SRL.", "While QA-SRL uses question templates centered on verbs, QAMR annotates free-form questions over arbitrary predicate types.", "The QAMR corpus consists of annotations over 5,000 sentences.", "By extending the OIE2016 training set with extractions from QAMR, we more than triple the available amount of training data.", "Question-Answer Meaning Representation, or QAMR (Michael et al., 2018), was recently proposed as an extension of QA-SRL.", "Like QA-SRL, QAMR represents predicate-argument structure with a set of question-answer pairs about a sentence, where each answer is a span from the sentence.", "However, while QA-SRL restricts questions to fit into a particular verb-centric template, QAMR is more general, allowing any natural language question that begins with a wh-word and contains at least one word from the sentence.", "This allows QAMR to express richer, more complex relations.", "Consider, for example, the first two entries for QAMR in Table 3. The first explicates the implicit relation made of from the noun compound mercury filling, and the second identifies the adjectival predicate prevalent.", "Neither of these can be represented in QA-SRL.", "While QAMR's broader scope presents an opportunity to vastly increase the number and coverage of annotated Open IE tuples, it also poses additional challenges for the extraction algorithm.", "The free-form nature of QAMR questions means that some are over-expressive for Open IE, while in many other cases it is less obvious how to extract a predicate and a list of arguments from a question-answer pair.", "Figure 4: A QAMR (top) to Open IE (bottom) conversion example.", "The BIO labels for our encoding of the Open IE tuple appear below the text.", "The root of the question's dependency tree is the predicate, while its syntactic constituents are the arguments.", "The answer appears as the first argument of the Open IE tuple due to the passive construction.", "Over-expressiveness The QAMR formalism allows many constructions that diverge from Open IE extractions, which generally are drawn verbatim from the source text.", "For example, the predicate made is introduced in the QAMR for the sentence in Table 3, despite not appearing in the sentence.", "To circumvent this issue, we filter out questions which: (1) introduce new content words, (2) have more than one wh-word, (3) do not start with who, what, when or where, or (4) ask what did X do?, delegating the predicate to the answer.", "Detecting predicates and arguments While a QA-SRL question has a designated predicate and a single argument as the answer, in QAMR, the predicate can appear anywhere in the question and its arguments are spread between the question and answer.", "For example, extracting an Open IE tuple for the predicate banned in Table 3 requires decoupling the predicate and its arguments in the EU and partly because it causes antibiotic resistance.", "Our solution to this problem is illustrated in Figure 4.",
"We first run each question through a syntactic dependency parser.", "We then identify the predicate as the head of the question's dependency tree, extended to include all dependents with an auxiliary relation (e.g., aux, neg, or prt).", "The predicted arguments are the predicate's constituent argument subtrees, while the answer to the question replaces the subtree headed by the wh-word.", "Finally, we employ similar heuristics to those used in converting verbal QA-SRL to Open IE to find the correct argument position for the answer (Stanovsky and Dagan, 2016).", "We do not count inflected forms of verbs from the sentence, such as caused in the last entry of the table, as new words.", "QAMR Open IE tuples: (The treaty of Brussels; was signed; on 17 March 1948; by Belgium, the Netherlands, Luxembourg, France, and the UK) (The treaty of Brussels; is the precursor to; the NATO agreement) (The scope of publishing; has expanded to include; websites, blogs, and the like.) Table 4: Tuples from the All Words Open IE Corpus, exemplifying n-ary extractions (top example), non-verbal predicates (middle), and multi-word predicates (bottom).", "For example, the passive construction in Figure 4 implies that the answer should be placed in the first argument position, while the existence of a prepositional object in, e.g., What did he put on the table?", "signals that the answer should be placed in the second argument position.", "As described by Michael et al. (2018), QAMR annotations were gathered via crowdsourcing in a two-stage pipeline over Wikipedia and Wikinews text.", "We use the training partition of the QAMR dataset, which consists of 51,063 QA pairs over 3,938 sentences.", "Our filtering and conversion from the QAMR corpus yields 12,952 Open IE tuples (2.5 times the size of OIE2016's training corpus), composed of 7,470 (58%) verbal predicates, 4,952 (38%) nominal predicates, and 530 (4%) adjectival predicates.", "See Table 4 for example tuples, taken from the converted corpus.", "Examining the results, we found that they are not accurate enough to constitute a gold test corpus, partly because some relations were missed by the annotators of QAMR and partly because of noise introduced in the automatic extraction process.", "Instead, we use this corpus to extend the training partition of OIE2016.", "In the following section, we show its usefulness in significantly improving the precision and recall of our Open IE model.", "We evaluate the performance of our model on the four test sets discussed in Section 2.", "Metrics We evaluate each system according to three metrics.", "First, as is typical for Open IE, we compute a precision-recall (PR) curve by evaluating the systems' performance at different extraction confidence thresholds.", "This curve is useful for downstream applications, which can set the threshold according to their specific needs (i.e., recall-oriented versus precision-oriented).", "Second, we compute the area under the PR curve (AUC) as a scalar measurement of the overall system performance.", "Finally, for each system, we report a single F1 score using a confidence threshold optimized on the development set.", "This can serve as a preset threshold for out-of-the-box use.", "Matching function Similar to other cases in NLP, we would like to allow some variability in the predicted tuples.", "For example, for the sentence The sheriff standing against the wall spoke in a very soft voice we would want to treat both (The sheriff; spoke; in a soft voice) and (The sheriff standing
against the wall; spoke; in a very soft voice) as acceptable extractions.", "To that end, we follow He et al. (2015), who judge an argument as correct if and only if it includes the syntactic head of the gold argument (and similarly for predicates).", "For OIE2016, we use the available Penn Treebank gold syntactic trees (Marcus et al., 1993), while for the other test sets, we use predicted trees instead.", "While this metric may sometimes be too lenient, it does allow a more balanced and fair comparison between systems which can make different, but equally valid, span boundary decisions.", "Baselines We compare our model (RnnOIE) against the top-performing systems of those evaluated most recently in Stanovsky and Dagan (2016) and in Schneider et al. (2017): Open IE4, ClausIE (Del Corro and Gemulla, 2013), and PropS (Stanovsky et al., 2016).", "Table 5 reports the AUC and F1 scores of all of the systems on the 4 test sets.", "In addition, the PR curves for the two largest test sets (OIE2016 and WEB) are depicted in Figures 5a and 5b.", "We report results for two versions of our model: one trained on the OIE2016 training set containing only verbal predicates (RnnOIE-verb), and another on the extended training set that includes the automatic conversion of QAMR outlined in Section 5 (RnnOIE-aw).", "Overall, RnnOIE-aw outperforms the other systems across the datasets.", "On the larger test sets (OIE2016 and WEB) it provides the best performance in terms of AUC and F1, with a superior precision-recall curve.", "On each of the smaller test sets, it performs best on one metric and competitively on the other.", "Furthermore, on all of the test sets, extending the training set significantly improves our model's performance, showing that it benefits from the additional data and types of predicates available in the QAMR dataset.", "While this is most notable in the test sets which include nominalizations (WEB, NYT, and PENN), it also improves the performance on OIE2016, which is composed solely of verb predicates.", "In our analysis, we find that RnnOIE generalizes to unseen predicates, produces more and shorter arguments on average than are in the gold extractions, and, like all of the systems we tested, struggles with nominal predicates.", "Unseen predicates We split the propositions in the gold and predicted OIE2016 test set into two partitions, seen and unseen, based on whether the predicate head's lemma appears in the training set.", "The unseen part contains 145 unique predicate lemmas in 148 extractions, making up 24% of the 590 unique predicate lemmas and 7% of the 1993 total extractions in the test set.", "We then evaluated RnnOIE-aw on each part separately.", "The resulting PR curves (Figure 5c) depict overall good performance also on the unseen part, competitive with previous Open IE systems.", "https://github.com/dair-iitd/OpenIE-standalone", "Table 5: Performance of the OIE extractors on our test sets, reporting AUC and F1 (P, R) for each of OIE2016, WEB, NYT, and PENN. OIE2016: ClausIE .38, .59 (.49, .74); PropS .34, .56 (.64, .49); Open IE4 .42, .60 (.64, .56); RnnOIE-verb .45, .59 (.57, .62); RnnOIE-aw .48, .62 (.61, .64). WEB: ClausIE .40, .45 (.39, .53); PropS .45, .59 (.44, .89); Open IE4 .45, .56 (.63, .50); RnnOIE-verb .23, .46 (.38, .58); RnnOIE-aw .47, .67 (.83, .56). NYT: ClausIE .23, .30 (.24, .39); PropS .22, .37 (.25, .77); Open IE4 .24, .38 (.26, .74); RnnOIE-verb .09, .25 (.20, .33); RnnOIE-aw .25, .35 (.24, .67). PENN: ClausIE .28, .34 (.24, .61); PropS .28, .39 (.26, .81); Open IE4 .28, .43 (.37, .50); RnnOIE-verb .21, .38 (.35, .40); RnnOIE-aw .26, .44 (.31, .75).", "Each system is tested in terms of Area Under the PR Curve (AUC),
and F1 (precision and recall in parentheses).", "Figure 5: Precision-recall curves of the different OIE systems on OIE2016 (5a), WEB (5b), and seen vs. unseen predicates in RnnOIE-aw on OIE2016 (5c).", "See details in Section 6.", "Table 6: Output statistics of the different systems on OIE2016, versus the gold data.", "Argument length and number In Table 6 we compare statistics on the outputs of the Open IE systems on OIE2016 and the gold data.", "The best-performing systems, RnnOIE and Open IE4, tend to produce more arguments, and each argument tends to be shorter on average, in comparison to the other systems and the gold data.", "Runtime analysis We compare the speed of the Open IE systems on a batch of 3200 sentences from OIE2016, running on a Xeon 2.3GHz CPU.", "The results are presented in Table 8.", "We find that our system fares well, processing only 12% fewer sentences per second than the fastest system, Open IE 4.0.", "Further, while these numbers are reported on CPU for the sake of fair comparison, running our neural model on a GPU (NVIDIA GeForce GTX 1080 Ti) boosts speed by a factor of more than 10 (149.25 sentences per second, on average).", "Error analysis All of the systems still have low recall across all tested corpora.", "We examined a random sample of 100 recall errors shared by all of the extractors across the tested datasets and found several common error types, shown in Table 7.", "Notably, noun and nominalized predicates still pose a challenge, appearing in 51% of the recall errors (whereas they make up 24% of all extractions).", "19% of the examined errors required some form of sentence-level inference, such as determining event factuality or pronoun resolution.", "PropS and ClausIE's relatively slow performance is in part due to their hard-coded use of the Stanford parser, which took on average 0.2 seconds per sentence.", "Using a faster parser (e.g., spaCy) may improve this performance.", "Table 7: Analysis of frequently-occurring recall errors for all tested systems on a random sample of 100 sentences.", "For each phenomenon we list the percentage of sentences in which it occurs (possibly overlapping with other phenomena), and a prototypical example, taken from the WEB corpus.", "Table 8: Runtime analysis, measured in sentences per second, of the different systems on 3200 sentences from the OIE2016 corpus on a Xeon 2.3GHz CPU (top) and on an NVIDIA GeForce GTX 1080 Ti (bottom).", "Baselines were only run on CPU as they are currently not optimized for GPU.", "14% of the errors involved long sentences with over 40 words (where the average word count per sentence is 29.4).", "We present a supervised model for Open IE, formulating it as a sequence tagging problem and applying a bi-LSTM transducer to produce a state-of-the-art Open IE system.", "Along the way, we address several task-specific challenges, including the BIO encoding of predicates with multiple extractions and confidence estimation in our sequence tagging model.", "To train the system, we leverage a recently published large-scale corpus for Open IE (Stanovsky and Dagan, 2016), and further extend it using a novel conversion of the QAMR corpus (Michael et al., 2018), which covers a wider range of predicates.", "In addition to these contributions, this work shows that Open IE can greatly benefit from future research into the QA-SRL paradigm.", "For example, Open IE would directly benefit from an automatic QA-SRL extractor, while a more exhaustive or extensive annotation of QAMR would improve Open IE's performance on a wider range of predicates.", "This
work was supported in part by grants from the MAGNET program of the Israeli Office of the Chief Scientist (OCS); the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1); the Israel Science Foundation (grant No. 1157/16); the US NSF (IIS-1252835, IIS-1562364); and an Allen Distinguished Investigator Award." ]
[ "method", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "objective", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "other" ]
[ "Neural sequence-to-sequence models are currently the dominant approach in several natural language processing tasks, but require large parallel corpora.", "We present a sequence-to-sequence-to-sequence autoencoder ( SEQ 3 ), consisting of two chained encoder-decoder pairs, with words used as a sequence of discrete latent variables.", "We apply the proposed model to unsupervised abstractive sentence compression, where the first and last sequences are the input and reconstructed sentences, respectively, while the middle sequence is the compressed sentence.", "Constraining the length of the latent word sequences forces the model to distill important information from the input.", "A pretrained language model, acting as a prior over the latent sequences, encourages the compressed sentences to be human-readable.", "Continuous relaxations enable us to sample from categorical distributions, allowing gradient-based optimization, unlike alternatives that rely on reinforcement learning.", "The proposed model does not require parallel text-summary pairs, achieving promising results in unsupervised sentence compression on benchmark datasets.", "Neural sequence-to-sequence models ( SEQ 2 SEQ ) perform impressively well in several natural language processing tasks, such as machine translation (Sutskever et al., 2014; Bahdanau et al., 2015) or syntactic constituency parsing (Vinyals et al., 2015).", "However, they require massive parallel training datasets (Koehn and Knowles, 2017).", "Consequently there has been extensive work on utilizing non-parallel corpora to boost the performance of SEQ 2 SEQ models (Sennrich et al., 2016; Gulcehre et al., 2015), mostly in neural machine translation where models that require absolutely no parallel corpora have also been pro1 , 2 , , 1 , 2 , , 1 , 2 , , Compressor (encoder-decoder) Reconstructor ( encoder-decoder ) Reconstruction Loss LM Prior Loss Topic Loss Figure 1: Overview of the proposed SEQ 3 autoencoder.", "posed (Artetxe et al., 2018; Lample et al., 2018b).", "Unsupervised (or semi-supervised) SEQ 2 SEQ models have also been proposed for summarization tasks with no (or small) parallel text-summary sets, including unsupervised sentence compression.", "Current models, however, barely reach lead-N baselines (Fevry and Phang, 2018; Wang and Lee, 2018), and/or are non-differentiable (Wang and Lee, 2018; Miao and Blunsom, 2016), thus relying on reinforcement learning, which is unstable and inefficient.", "By contrast, we propose a sequence-to-sequence-to-sequence autoencoder, dubbed SEQ 3 , that can be trained end-to-end via gradient-based optimization.", "SEQ 3 employs differentiable approximations for sampling from categorical distributions (Maddison et al., 2017; Jang et al., 2017), which have been shown to outperform reinforcement learning ( Havrylov and Titov, 2017).", "Therefore it is a generic framework which can be easily extended to other tasks, e.g., machine translation and semantic parsing via task-specific losses.", "In this work, as a first step, we apply SEQ 3 to unsupervised abstractive sentence compression.", "SEQ 3 ( x 2) comprises two attentional encoder-decoder (Bahdanau et al., 2015) pairs (Fig. 
1): a compressor C and a reconstructor R.", "C (§2.1) receives an input text $x = x_1, \ldots, x_N$ of $N$ words, and generates a summary $y = y_1, \ldots, y_M$ of $M$ words ($M < N$), $y$ being a latent variable.", "R and C communicate only through the discrete words of the summary $y$ (§2.2).", "R (§2.3) produces a sequence $\hat{x} = \hat{x}_1, \ldots, \hat{x}_N$ of $N$ words from $y$, trying to minimize a reconstruction loss $L_R(x, \hat{x})$ (§2.5).", "Figure 2: More detailed illustration of SEQ3.", "A pretrained language model acts as a prior on $y$, introducing an additional loss $L_P(x, y)$ that encourages SEQ3 to produce human-readable summaries.", "A third loss $L_T(x, y)$ rewards summaries $y$ with similar topic-indicating words as $x$.", "Experiments (§3) on the Gigaword sentence compression dataset (Rush et al., 2015) and the DUC-2003 and DUC-2004 shared tasks (Over et al., 2007) produce promising results.", "Our contributions are: (1) a fully differentiable sequence-to-sequence-to-sequence (SEQ3) autoencoder that can be trained without parallel data via gradient optimization; (2) an application of SEQ3 to unsupervised abstractive sentence compression, with additional task-specific loss functions; (3) state-of-the-art performance in unsupervised abstractive sentence compression.", "This work is a step towards exploring the potential of SEQ3 in other tasks, such as machine translation.", "The bottom left part of Fig. 2 illustrates the internals of the compressor C.", "An embedding layer projects the source sequence $x$ to the word embeddings $e^s = e^s_1, \ldots, e^s_N$, which are then encoded by a bidirectional RNN, producing $h^s = h^s_1, \ldots, h^s_N$.", "Each $h^s_t$ is the concatenation of the corresponding left-to-right and right-to-left states (outputs in LSTMs) of the biRNN: $h^s_t = [\overrightarrow{\mathrm{RNN}}^s(e^s_t, \overrightarrow{h}^s_{t-1}); \overleftarrow{\mathrm{RNN}}^s(e^s_t, \overleftarrow{h}^s_{t+1})]$.", "To generate the summary $y$, we employ the attentional RNN decoder of Luong et al. 
(2015), with their global attention and input feeding.", "Concretely, at each timestep ($t \in \{1, \ldots, M\}$) we compute a probability distribution $a_i$ over all the states $h^s_1, \ldots, h^s_N$ of the source encoder, conditioned on the current state $h^c_t$ of the compressor's decoder, to produce a context vector $c_t$.", "The matrix $W_a$ is learned.", "We obtain a probability distribution for $y_t$ over the vocabulary $V$ by combining $c_t$ and the current state $h^c_t$ of the decoder: $o^c_t = \tanh(W_o[c_t; h^c_t] + b_o)$ (1); $u^c_t = W_v o^c_t + b_v$ (2); $p(y_t \mid y_{<t}, x) = \mathrm{softmax}(u^c_t)$ (3), where $W_o, b_o, W_v, b_v$ are learned.", "$c_t$ is also used when updating the state $h^c_t$ of the decoder, along with the embedding $e^c_t$ of $y_t$ and a countdown argument $M - t$ (scaled by a learnable $w_d$) indicating the number of remaining words of the summary (Fevry and Phang, 2018; Kikuchi et al., 2016).", "For each input $x = x_1, \ldots, x_N$, we obtain a target length $M$ for the summary $y = y_1, \ldots, y_M$ by sampling (and rounding) from a uniform distribution $U(\alpha N, \beta N)$; $\alpha, \beta$ are hyper-parameters ($\alpha < \beta < 1$); we set $M = 5$ if the sampled $M$ is smaller.", "Sampling $M$, instead of using a static compression ratio, allows us to train a model capable of producing summaries with varying (e.g., user-specified) compression ratios.", "Controlling the output length in encoder-decoder architectures has been explored in machine translation (Kikuchi et al., 2016) and summarization (Fan et al., 2018).", "To generate the summary, we need to sample its words $y_t$ from the categorical distributions $p(y_t \mid y_{<t}, x)$, which is a non-differentiable process.", "Soft-Argmax Instead of sampling $y_t$, a simple workaround during training is to pass as input to the next timestep of C's decoder and to the corresponding timestep of R's encoder a weighted sum of all the vocabulary's ($V$) word embeddings, using a peaked softmax function (Goyal et al., 2017): $\bar{e}^c_t = \sum_i^{|V|} e(w_i)\,\mathrm{softmax}(u^c_t / \tau)_i$ (5), where $u^c_t$ is the unnormalized score of Eq. 2 (i.e., the logit) of each word $w_i$ and $\tau \in (0, \infty)$ is the temperature.", "As $\tau \to 0$, most of the probability mass in Eq. 5 goes to the most probable word, hence the operation approaches the arg max.", "Gumbel-Softmax We still want to be able to perform sampling, though, as it has the benefit of adding stochasticity and facilitating exploration of the parameter space.", "Hence, we use the Gumbel-Softmax (GS) reparametrization trick (Maddison et al., 2017; Jang et al., 2017) as a low-variance approximation of sampling from categorical distributions.", "Sampling a specific word $y_t$ from the softmax (Eq. 3) is equivalent to adding (element-wise) to the logits an independent noise sample $\xi$ from the Gumbel distribution (footnote 1) and taking the arg max: $y_t \sim \mathrm{softmax}(u^c_t) \Leftrightarrow y_t = \arg\max(u^c_t + \xi)$ (6).", "Therefore, using the GS trick, Eq. 5 becomes: $\tilde{e}^c_t = \sum_i^{|V|} e(w_i)\,\mathrm{softmax}((u^c_t + \xi)/\tau)_i$ (7).", "Straight-Through Both relaxations lead to mixtures of embeddings, which do not correspond to actual words.", "Even though this enables the compressor to communicate with the reconstructor using continuous values, thus fully utilizing the available embedding space, ultimately our aim is to constrain them to communicate using only natural language.", "In addition, an unwanted discrepancy is created between training (continuous embeddings) and test time 
(discrete embeddings).", "We alleviate these problems with the Straight-Through estimator ( ST ) (Bengio et al., 2013).", "Specifically, in the forward pass of training we dis-cretize ~ e ct by using the arg max (Eq. 6), whereas in the backward pass we compute the gradients using the GS (Eq. 7).", "This is a biased estimator due 1 (cid:24) i = (cid:0) log( (cid:0) log( x i )) ; x i (cid:24) U (0 ; 1) to the mismatch between the forward and backward passes, but works well in practice.", "ST GS reportedly outperforms scheduled sampling (Goyal et al., 2017) and converges faster than reinforcement learning (Havrylov and Titov, 2017).", "The reconstructor (upper right of Fig.", "2) works like the compressor, but its encoder operates on the embeddings e c 1 ; : : : ; e cM of the words y 1 ; : : : ; y M of the summary (exact embeddings of the sampled words y t in the forward pass, approximate differentiable embeddings in the backward pass).", "We initialize the hidden state of each decoder using a transformation of the concatenation [ (cid:0)! h sN ; (cid:0) h s 1 ] of the last hidden states (from the two directions) of its bidirectional encoder and a length vector, following Mallinson et al. (2018).", "The length vector for the decoder of the compressor C consists of the target summary length M , scaled by a learnable parameter w v , and the compression ratio MN .", "Reconstruction Loss LR ( x ; ^ x ) is the (negative) log-likelihood assigned by the (decoder of) R to the input (correctly reconstructed) words x = x 1 ; : : : ; x N , where p R is the distribution of R .", "We do not expect LR ( x ; ^ x ) to decrease to zero, as there is information loss through the compression.", "However, we expect it to drive the compressor to produce such sentences that will increase the likelihood of the target words in the reconstruction.", "LM Prior Loss To ensure that the summaries y are readable, we pretrain an RNN language model (see Appendix) on the source texts of the full training set.", "We compute the Kullback-Leibler divergence DKL between the probability distributions of the (decoder of) the compressor ( p ( y t j y <t ; x ) , Eq.", "3) and the language model ( p LM ( y t j y <t ; x ) ).", "Similar priors have been used in sentence compression (Miao and Blunsom, 2016) and agent communication (Havrylov and Titov, 2017).", "Topic Loss Words with high TF-IDF scores are indicative of the topic of a text (Ramos et al., 2003; Erkan and Radev, 2004).", "To encourage the compressor to preserve in the summary y the topic-indicating words of the input x , we compute the TF-IDF -weighted average v x of the word embeddings of x and the average v y of the word embeddings of y and use their cosine distance as an additional loss LT = 1 (cid:0) cos( v x ; v y ) .", "N si MM", "Length Penalty A fourth loss LL (not shown in Fig.", "1) helps the (decoder of the) compressor to predict the end-of-sequence ( EOS ) token at the target summary length M .", "LL is the cross-entropy between the distributions p ( y t j y <t ; x ) (Eq.", "3) of the compressor at t = M + 1 and onward, with the one-hot distribution of the EOS token.", "Parameter Sharing We tie the weights of layers encoding similar information, to reduce the number of trainable parameters.", "First, we use a shared embedding layer for the encoders and decoders, initialized with 100-dimensional GloVe embeddings (Pennington et al., 2014).", "Additionally, we tie the shared embedding layer with the output layers of both decoders (Press and Wolf, 2017; Inan et al., 
2017).", "Finally, we tie the encoders of the compressor and reconstructor (see Appendix).", "OOVs Out-of-vocabulary words are handled as in Fevry and Phang (2018) (see Appendix).", "Datasets We train SEQ 3 on the Gigaword sentence compression dataset (Rush et al., 2015).", "2 It consists of pairs, each containing the first sentence of a news article ( x ) and the article's headline ( y ), a total of 3.8M/189k/1951 train/dev/test pairs.", "We also test (without retraining) SEQ 3 on DUC -2003 and DUC -2004 shared tasks (Over et al., 2007), containing 624/500 news articles each, paired with 4 reference summaries capped at 75 bytes.", "2 github.com/harvardnlp/sent-summary", "sentences (sources) of the training pairs from Gigaword to train SEQ 3 ; our model is never exposed to target headlines (summaries) during training or evaluation, i.e., it is completely unsupervised.", "Our code is publicly available.", "3 We compare SEQ 3 to other unsupervised sentence compression models.", "We note that the extractive model of Miao and Blunsom (2016) relies on a pre-trained attention model using at least 500K parallel sentences, which is crucial to mitigate the inefficiency of sampling-based variational inference and REINFORCE .", "Therefore it is not comparable, as it is semi-supervised.", "The results of the extractive model of Fevry and Phang (2018) are also not comparable, as they were obtained on a different, not publicly available test set.", "We note, however, that they report that their system performs worse than the LEAD -8 baseline in ROUGE -2 and ROUGE-L on Gigaword.", "The only directly comparable unsupervised model is the abstractive Pretrained Generator' of Wang and Lee (2018).", "The version of Adversarial REINFORCE ' that Wang and Lee (2018) consider unsupervised is actually weakly supervised, since its discriminator was exposed to the summaries of the same sources the rest of the model was trained on.", "As baselines, we use LEAD -8 for Gigaword, which simply selects the first 8 words of the source, and PREFIX for DUC , which includes the first 75 bytes of the source article.", "We also compare to supervised abstractive sentence compression methods (Tables 1-3).", "Following previous work, we report the average F1 of ROUGE 1, ROUGE -2, ROUGE-L (Lin, 2004).", "We implemented SEQ 3 with LSTM s (see Appendix) and during inference we perform greedy-sampling.", "Results Table 1 reports the Gigaword results.", "SEQ 3 outperforms the unsupervised Pretrained Generator across all metrics by a large margin.", "It also surpasses LEAD -8.", "If we remove the LM prior, performance drops, esp. 
in ROUGE-2 and ROUGE-L.", "This makes sense, since the pretrained LM rewards correct word order.", "We also tried removing the topic loss, but the model failed to converge and results were extremely poor (Table 1).", "Topic loss acts as a bootstrap mechanism, biasing the compressor to generate words that maintain the topic of the input text.", "This greatly reduces variance due to sampling in early stages of training, alleviating the need to pretrain individual components, unlike works that rely on reinforcement learning (Miao and Blunsom, 2016; Wang and Lee, 2018).", "Table 1: Average results on the (English) Gigaword dataset for abstractive sentence compression methods (R-1/R-2/R-L). Supervised (3.8M pairs): ABS (Rush et al., 2015) 29.55/11.32/26.42; SEASS (Zhou et al., 2017) 36.15/17.54/33.63; words-lvt5k-1sent (Nallapati et al., 2016) 36.4/17.7/33.71. Weakly supervised (3.8M): Adversarial REINFORCE (Wang and Lee, 2018) 28.11/9.97/25.41. Unsupervised (0): LEAD-8 (baseline) 21.86/7.66/20.45; Pretrained Generator (Wang and Lee, 2018) 21.26/5.60/18.89; SEQ3 (full) 25.39/8.21/22.68; SEQ3 w/o LM prior loss 24.48/6.68/21.79; SEQ3 w/o topic loss 3.89/0.1/3.75.", "Overall, both losses work in synergy, with the topic loss driving what and the LM prior loss driving how words should be included in the summary.", "SEQ3 behaves similarly on DUC-2003 and DUC-2004 (Tables 2-3), although it was trained on Gigaword.", "In DUC-2003, however, it does not surpass the PREFIX baseline.", "Finally, Fig. 3 illustrates three randomly sampled outputs of SEQ3 on Gigaword.", "In the first one, SEQ3 copies several words, especially from the beginning of the input (hence the high ROUGE-L), exhibiting extractive capabilities, though still being adequately abstractive (bold words denote paraphrases).", "In the second one, SEQ3 showcases its true abstractive power by paraphrasing and compressing multi-word expressions to single content words more heavily, still without losing the overall meaning.", "In the last example, SEQ3 progressively becomes ungrammatical, though interestingly it retains some content words from the input.", "input: the american sailors who thwarted somali pirates flew home to the u.s. on wednesday but without their captain , who was still aboard a navy destroyer after being rescued from the hijackers .", "gold: us sailors who thwarted pirate hijackers fly home | SEQ3: the american sailors who foiled somali pirates flew home after crew hijacked .", "input: the central election commission -lrb- cec -rrb- on monday decided that taiwan will hold another election of national assembly members on may # .", "gold: national <unk> election scheduled for may | SEQ3: the central election commission -lrb- cec UNK announced elections .", "input: dave bassett resigned as manager of struggling english premier league side nottingham forest on saturday after they were knocked out of the f.a. 
cup in the third round , according to local reports on saturday .", "gold: forest manager bassett quits . | SEQ3: dave bassett resigned as manager of struggling english premier league side UNK forest on knocked round press", "Figure 3: Good/bad example summaries on Gigaword.", "We hypothesize that since the reconstructor is autoregressive, i.e., each word is conditioned on the previous one, errors occurring early in the generated sequence have cascading effects.", "This inevitably encourages the compressor to select the first words of the input.", "A possible workaround might be to modify SEQ3 so that the first encoder-decoder pair would turn the inputs into longer sequences, and the second encoder-decoder would compress them, trying to reconstruct the original inputs.", "In future work, we plan to explore the potential of SEQ3 in other tasks, such as unsupervised machine translation (Lample et al., 2018a; Artetxe et al., 2018) and caption generation (Xu et al., 2015).", "We would like to thank Ryan McDonald for helpful discussions and feedback.", "This work has been partially supported by computational time granted from the Greek Research & Technology Network (GR-NET) in the National HPC facility ARIS.", "We thank NVIDIA for donating a TitanX GPU." ]
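The core trick that makes SEQ3 end-to-end differentiable is the Gumbel-Softmax relaxation with the Straight-Through estimator (Eqs. 5-7 above). Below is a minimal sketch of that mechanism, not the authors' code; shapes, the temperature value, and the clamping constant are illustrative assumptions.

```python
# Straight-Through Gumbel-Softmax: discrete words in the forward pass,
# differentiable soft samples in the backward pass.
import torch
import torch.nn.functional as F

def gumbel_noise(shape):
    # xi_i = -log(-log(x_i)), x_i ~ U(0, 1)   (footnote 1 above)
    u = torch.rand(shape).clamp(1e-9, 1 - 1e-9)
    return -torch.log(-torch.log(u))

def st_gumbel_softmax_embedding(logits, embedding, tau=0.5):
    """logits: (batch, |V|) unnormalized scores u_t^c over the vocabulary;
    embedding: nn.Embedding holding e(w_i); returns (batch, emb_dim)."""
    y_soft = F.softmax((logits + gumbel_noise(logits.shape)) / tau, dim=-1)  # Eq. 7
    # Forward: hard one-hot via arg max (Eq. 6); backward: gradients of y_soft.
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
    y = y_hard + (y_soft - y_soft.detach())
    return y @ embedding.weight  # collapses to a single word embedding in forward

# Toy usage: sample one summary word embedding for a batch of 2.
emb = torch.nn.Embedding(100, 16)
logits = torch.randn(2, 100, requires_grad=True)
e_t = st_gumbel_softmax_embedding(logits, emb)
e_t.sum().backward()  # gradients flow to the logits despite the arg max
print(e_t.shape, logits.grad.norm())
```

The `y_hard + (y_soft - y_soft.detach())` line is the Straight-Through estimator: its value equals the one-hot vector, while its gradient is that of the soft relaxation, which is exactly the forward/backward mismatch the paper describes as biased but effective in practice.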
[ "abstain", "method", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "We propose neural models to generate text from formal meaning representations based on Discourse Representation Structures (DRSs).", "DRSs are document-level representations which encode rich semantic detail pertaining to rhetorical relations, presupposition, and co-reference within and across sentences.", "We formalize the task of neural DRS-to-text generation and provide modeling solutions for the problems of condition ordering and variable naming which render generation from DRSs non-trivial.", "Our generator relies on a novel sibling treeLSTM model which is able to accurately represent DRS structures and is more generally suited to trees with wide branches.", "We achieve competitive performance (59.48 BLEU) on the GMB benchmark against several strong baselines.", "It is not uncommon for text generation systems to produce natural language output from intermediate semantic representations (Yao et al., 2012; Takase et al., 2016).", "The literature presents several examples of generating text from logical forms underlying various grammar formalisms (Wang, 1980; Shieber et al., 1990; Carroll and Oepen, 2005; White et al., 2007), typed lambda calculus (Lu and Ng, 2011), Abstract Meaning Representations (AMR; Flanigan et al. 2016; Konstas et al. 2017; Song et al. 2018; Beck et al. 2018; Damonte and Cohen 2019; Ribeiro et al. 2019; Zhu et al. 2019; Cai and Lam 2020; Wang et al. 2020), Discourse Representation Theory (DRT; Basile and Bos 2011; Basile 2015), and Minimal Recursion Semantics (MRS; Horvat et al. 2015; Hajdik et al. 2019).", "In this work, we propose neural models to generate high-quality text from semantic representations based on Discourse Representation Structures (DRSs).", "DRSs are the basic meaning-carrying units in Discourse Representation Theory (DRT; Kamp b 5 b 6 : x 1 , b 1 : e 1 , b 1 : x 2 , b 2 : t 1 b 1 b 6 : Pred( x 1 , male.n.02) b 1 : Pred( e 1 , play.v.03) b 1 : Agent( e 1 , x 1 ) b 1 : Theme( e 1 , x 2 ) b 1 : Pred( x 2 , piano.n.01) b 2 : Pred( t 1 , now.n.01) b 1 : temp after( e 1 , t 1 ) b 4 b 4 : (cid:5) : b 3 : x 3 , b 3 : e 2 b 3 b 3 : Named( x 3 , tom) b 3 : Pred( e 2 , stop.v.05) b 3 : Agent( e 2 , x 3 ) b 1 : Pred( x 1 , male.n.02) b 3 : Patient( e 2 , x 1 ) b 3 : temp before( e 2 , e 1 ) CONTRAST( b 1 , b 4 ) The man is going to play the piano.", "1981; Kamp and Reyle 1993; Asher and Lascarides 2003), a formal semantic theory designed to handle a variety of linguistic phenomena, including anaphora, presuppositions (Van der Sandt, 1992; Venhuizen et al., 2018), and temporal expressions within and across sentences.", "DRSs are scoped meaning representations, they capture the semantics of negation, modals, and quantification.", "Figure 1 displays in box format the meaning representation for a discourse consisting of two sentences.", "The outermost box is a segmented DRS expressing the rhetorical relation CONTRAST between box b 1 representing the first sentence and box b 4 representing the second sentence.", "Boxes b 1 and b 2 are DRSs, the top layers contain variables (e.g., x 1 , x 2 ) indicating discourse referents and the bottom layers contain conditions (e.g., Named( x 3 , tom)) representing information about discourse referents.", "Variables and conditions have pointers (denoted by b in the figure) pointing to the boxes where they should be interpreted.", "1 Predicates are disambiguated to their Wordnet (Fellbaum, 1998) senses (e.g., male.n.02 and play.v.03).", "Although there has been considerable activity recently in developing models which 
analyze text in the style of DRT (van Noord et al., 2018, 2019; Liu et al., 2019a, 2018; Fancellu et al., 2019), attempts to generate text from DRSs have been few and far between (however, see Basile 2015 and Narayan and Gardent 2014 for notable exceptions).", "(Footnote 1: In Figure 1, b 6 is a presuppositional box for the interpretation of 'the man' in the context of the two-sentence discourse.)", "This is primarily due to two properties of DRS-based semantic representations which render generation from them challenging.", "Firstly, DRS conditions are unordered, representing a set (rather than a list).", "A hypothetical generator would have to produce the same output text for any DRSs which convey the same meaning but appear different due to their conditions having a different order (see Figures 1 and 2a, which are otherwise identical but the order of conditions in boxes b 1 and b 4 varies).", "The second challenge concerns variables and their prominent status in DRSs.", "Variables identify objects in discourse (such as entities and predicates), and are commonly used to model semantic phenomena including coreference, control constructions, and scope.", "In Figure 1, variables x, e, s, t, p, and b denote entities, events, states, time, propositions and boxes, respectively.", "Variable names themselves are arbitrary and meaningless, posing a challenge for learning.", "Our generator must verbalize different variable names to the same surface form.", "The meaning representations in Figures 1 and 2b are identical and both correspond to the same discourse, except that the variables have been given different names ( b 5 in Figure 1 has been named b 1 in Figure 2b, b 1 is now b 2 , x 1 is x 2 , e 1 is e 9 , and so on).", "These two problems are further compounded by the way DRSs are displayed, in a box-like format which is intuitive and easy to read but not convenient for modeling purposes.", "As a result, DRSs are often post-processed in a format that can be handled more easily by modern neural network models.", "For example, DRS variables and conditions are converted to clauses (van Noord et al., 2018) or DRSs are modified to trees where each box is a subtree and conditions within the box correspond to children of the subtree (Liu et al., 2019a, 2018).", "In this paper we propose novel solutions to condition ordering and variable naming.", "We argue that even though DRS conditions appear unordered, they have a latent order due to biases in the way the training data is created.", "To give a concrete example, the Groningen Meaning Bank (GMB; Bos et al. 
2017) provides the largest collection to date of English texts annotated with DRSs.", "These annotations were generated with the aid of a CCG parser (Clark and Curran, 2007); atomic DRS conditions were associated with CCG supertags and then semantically combined following the syntactic CCG derivations.", "Even annotators creating DRSs manually would be prone to follow a canonical order (e.g., listing named entities first, then verbal predicates and their thematic roles, and finally temporal conditions).", "We propose a graph-based model which learns to recover the latent order of conditions without explicitly enumerating all possible orders, which can be prohibitive.", "We also handle variable names with a method which rewrites arbitrary indices to relative ones, which are in turn determined by the order of conditions.", "Following previous work, we convert DRSs to a more amenable format.", "Specifically, we consider Discourse Representation Tree Structures (DRTSs; Liu et al. 2019b) as the semantic representation input to our document generation task, and generate a sequence of words autoregressively.", "We adopt an encoder-decoder framework with a treeLSTM (Tai et al., 2015) encoder and a standard LSTM (Hochreiter and Schmidhuber, 1997) decoder.", "Problematically, DRS trees are wide and the number of children for a given node can be as many as 180.", "It therefore becomes memory-consuming and sparse to assign a forget gate for each child as in the case of the conventional (N-ary) treeLSTM (Tai et al., 2015).", "We propose a variant which we call Sibling treeLSTM that replaces the N forget gates with a parent gate and a sibling gate.", "As a result, it reduces memory usage from $O(N)$ to $O(2)$, and is more suitable for modeling wide and flat trees.", "Our contributions can be summarized as follows: (1) we formalize the task of neural DRS-to-text generation; (2) we provide solutions for the problems of condition ordering and variable naming, which render generation from DRS-based meaning representations non-trivial; and (3) we propose a novel sibling treeLSTM model that can also be used more generally to model wide tree structures.", "We make our code and datasets publicly available (footnote 3: https://github.com/LeonCrashCode/Discourse-Representation-Tree-Structure/tree/main/gmb/DRS-to-text).", "2 Problem Formulation Let S denote a DRS-based meaning representation.", "The aim of DRS-to-text generation is to produce text T that verbalizes input meaning S: $\hat{T} = \arg\max_{T \in \mathcal{T}} P(T \mid S, \theta)$, where $\mathcal{T}$ is the set of all possible texts, $S$ has an arbitrary order of conditions and indexing of variables, and $\theta$ is the set of model parameters.", "Our generation model is based on the encoder-decoder framework (Bahdanau et al., 2015) and operates over tree structures.", "Moreover, prior to training, variable names are rewritten so that their (arbitrary) indices denote relative order of appearance.", "We propose a novel sibling treeLSTM for encoding tree structures.", "The decoder is a sequential LSTM equipped with an attention mechanism, generating the word sequence $T = [t_0, t_1, \ldots, t_{m-1}]$, where $m$ is the length of the text.", "At test time, DRS conditions are normalized, i.e., they are reordered following a canonical order learned from data, and used as input to our generation model.", "We first describe our DRS-to-tree conversion and variable renaming procedures (Sections 2.1 and 2.2).", "We next present our tree-to-sequence generation model (Section 2.3), and explain how DRS conditions are ordered (Section 2.4).", "The algorithm of Liu et al. 
(2018) renders DRSs in a tree-style format.", "It constructs trees based on DRS conditions in the bottom box layers, without considering variables in the top layer.", "This results in oversimplified semantic representations and information loss (e.g., presuppositions cannot be handled).", "We improve upon their approach by merging variables in the top layer with variables in the bottom layer via introducing special conditions.", "We collect variables in top layers of DRS boxes to construct a dictionary d = { v : b }, where v denotes a variable and b is a presupposition box label (e.g., x 1 : b 1 ).", "We then move variables from the top to the bottom layer by expressing them as special conditions b : Ref( v ) and placing them before conditions on variable v .", "For example, b 6 : x 1 in Figure 1 becomes the special condition b 6 : Ref( x 1 ) and is placed before the condition b 6 : Pred( x 1 , male.n.02) in Figure 3(a).", "Once top variables have been rewritten as special conditions, the resulting DRSs are converted into trees as shown in Figure 3(b).", "Box variables (e.g., b 1 , b 5 ) become parent nodes, while conditions, which are also subtrees, become children.", "We rename variables with regard to their relative position in a given DRS, following a predefined traversal order.", "We obtain the sequence of box variables by traversing DRSs in an outer-to-inner and left-to-right manner, e.g., [ b 5 , b 1 , b 4 , b 3 ] in Figure 1.", "For SDRSs, we replace variables in discourse relations with k i , where i denotes the i-th box from left to right.", "For example, CONTRAST( b 1 , b 4 ) in Figure 1 is rewritten to CONTRAST( k 0 , k 1 ). Variables and conditions within presupposition boxes are rewritten to B i , where $i \in \mathbb{Z}$ denotes the distance of the current box to the presupposition box. 
For example, b 1 : Agent( e 1 , x 1 ) is rewritten to B 0 : Agent( e 1 , x 1 ) because it is in the current box b 1 , while b 1 : Pred( x 1 , male.n.02) is rewritten to B 2 : Pred( x 1 , male.n.02) because it is in box b 3 and two hops away from presupposition box b 1 .", "We use the special label O for presupposition boxes pertaining to semantic content outwith the current DRS.", "For example, b 6 : Ref( x 1 ) is rewritten to O : Ref( x 1 ) because it introduces a new presupposition box, and b 6 : Pred( x 1 , male.n.02) is rewritten to O 0 : Pred( x 1 , male.n.02) because the condition can only be interpreted in this new presupposition box (now O 0 and previously b 6 ).", "We obtain a sequence of general variables by traversing conditions as they appear in the DRS.", "Variables introduced for the first time are denoted by their type (going from left to right), while subsequent mentions of the same variables are rewritten with relative indices denoting their distance from the position where they were first introduced.", "Take Figure 3(a) as an example.", "The sequence of general variables is [ x 1 , x 1 , e 1 , e 1 , e 1 , x 1 , x 2 , e 1 , x 2 , x 2 , t 1 , t 1 , e 1 , t 1 , x 3 , x 3 , e 2 , e 2 , e 2 , x 3 , x 1 , e 2 , x 1 , e 2 , e 1 ], and is rewritten to [ X , X 0 , E , E 0 , E 0 , X 0 , X , E 0 , X 0 , X 0 , T , T 0 , E 0 , T 0 , X , X 0 , E , E 0 , E 0 , X 0 , X 2 , E 0 , X 2 , E 0 , E 1 ].", "The DRS from Figure 3(a) is shown in Figure 4 with relative variables.", "Our generation model is based on the encoder-decoder framework, where an encoder is used to encode input DRS trees and a decoder outputs a sequence of words.", "A limitation of sequential encoders is that they only allow sequential information propagation without considering the structure of the input (Tai et al., 2015; Wang et al., 2019).", "In our case, DRS tree structures are additionally wide (the longer a document, the wider the tree) and relatively flat (see Figure 3(b)).", "To better model these aspects, we propose a treeLSTM encoder which takes sibling information into account.", "As shown in Figure 5, the hidden representations of the sibling treeLSTM cells are updated from preceding sibling and child nodes.", "More formally, the hidden representation for node $j$ is given by: $u_j = \tanh(g_u([x_j; h_{js}; h_{jp}]))$ (1); $i_j, o_j = \sigma(g_{io}([x_j; h_{js}; h_{jp}]))$ (2); $f_{js} = \sigma(g_{fs}([x_j; h_{js}]))$ (3); $f_{jp} = \sigma(g_{fp}([x_j; h_{jp}]))$ (4); $c_j = i_j \odot u_j + f_{js} \odot c_{js} + f_{jp} \odot c_{jp}$ (5); $h_j = o_j \odot \tanh(c_j)$ (6), where $x_j$ is the token input representation, $h_{js}$ is the hidden representation of the sibling node preceding $j$, $h_{jp}$ is the hidden representation of the last child of node $j$ (Equation (1)), the $g$'s are linear functions, and $\sigma$ is a sigmoid function (Equations (2)-(4)).", "For each node $j$, we obtain its cell input representation $u_j$ (Equation (1)), its input gate $i_j$ and output gate $o_j$ (Equation (2)), and two forget gates $f_{js}$ (Equation (3)) and $f_{jp}$ (Equation (4)) for its neighbor cell and the last child cell, respectively.", "The memory of the current cell $c_j$ (Equation (5)) is updated by the gated sum of its cell input representation and the memories of its neighbor and child cells.", "The hidden representation of the current node $h_j$ is computed with its output gate $o_j$ (Equation (6)).", "Finally, a DRS tree is represented by the hidden representations of its nodes $[h_0, h_1, \ldots, h_{n'-1}]$ as computed by the sibling treeLSTM ($n'$ denotes the number of 
nodes).", "The decoder is a standard LSTM with global attention (Bahdanau et al., 2015).", "As discussed previously, DRSs at test time may exhibit an arbitrary order of conditions, which our model should be able to handle.", "Our solution is to to reorder conditions prior to generation by learning a latent canonical order from training data (e.g., to recover boxes b 1 and b 3 in Figure 1 from boxes b 1 and b 3 in Figure 2).", "More formally, given a set of conditions R set , we obtain an optimal ordering R = [ r 0 , r 1 , ..., r n 1 ] such that: R = arg max R ( R set ) SCOREK ( R | R set ) , (7) where ( R set ) are all permutations of R set , and R is the order with the highest likelihood according to SCOREK .", "Here, K parametrizes SCORE as knowledge we collect from our training data by observing canonical orders of conditions.", "Unfortunately, the time complexity of calculating Equation (7) is O ( n !) , we must enumerate all possible permutations for a set of conditions with n as large as 180 .", "Since this is prohibitive, we resort to graph ordering which allows us to recover the order of the conditions without enumeration.", "Graph Construction We construct a graph from the set of DRS conditions which we break down into graph nodes and edges.", "Conditions in DRSs can be simple or complex according to their type of arguments.", "A simple condition might have a relation name with two arguments (e.g., Named( x 3 , tom) and Agent( e 1 , x 3 )), while a complex condition has a scoped name (e.g., possibility (cid:5) ) and takes one or more DRSs as arguments.", "Simple conditions are denoted by a 3-tuple ( l s , a 0 , a 1 ) , where l s is the condition name (e.g., Named and Agent) and a 0 and a 1 are its first and the second argument, respectively, which could be a variable or constant (e.g., e 1 , x 3 and piano.n.01).", "Complex conditions are a 2-tuple ( l c , V r ), where l c is the scope name, and V r the set of arguments scoped by the condition.", "For example, the set of arguments for the possibility scope ( (cid:5) ) in Figure 1 is { e 1 , e 2 , x 1 , x 3 , tom, stop.v.05, male.n. 
} .", "Condition names become nodes in our graph.", "Simple conditions are further divided into constant and thematic nodes.", "Constant nodes are", "constructed by concatenating the relation name in the condition with the constant argument (e.g., condition Pred( x 1 , male.n.02) becomes node Pred male.n.02).", "Thematic nodes correspond to the relation name of the thematic condition (e.g., Agent( e 1 , x 1 ) becomes the node Agent).", "Complex nodes correspond to the name of complex conditions (e.g., possibility (cid:5) ).", "We insert edges between graph nodes if these share arguments.", "For example, in Figure", "6(b), there is an edge connecting node Pred male.n.02 with Agent as they share argument x 1 .", "We label this edge with a 1 to denote the fact that it is the second argument of Agent.", "Another edge is drawn between Pred play.v.03 and Agent (as they share argument e 1 ) with label a 0 denoting that this is the first argument of Agent.", "Edges between nodes are bidirectional, with inverse edges bearing the suffix -of.", "Edges drawn between constant and complex nodes bear the label Related, while edges between two constant nodes (with the same variables) bear the label Equal (we provide a more formal description in the Appendix).", "Ordering Model Given graph G = ( R set , E ) , where R set = { r 0 , r 1 , ..., r n 1 } is the set of nodes and E is the set of edges in G , our model outputs R as the optimal order of R set .", "As shown Figure", "6(a), each node is a sequence of words.", "A BiLSTM is applied to obtain representation x i of each node r i = [ w i 0 , ..., w im 1 ] : x i = BiLSTM ([ w i 0 , ..., w im 1 ]) .", "each node r i , we collect information from neighbor hidden representations with a gate controling the", "information flow from neighbors to current nodes: h (cid:48) ki = (cid:88) j g kj h k 1 j ; (9) g kj = ( f ([ e ji , h k 1 i , h k 1 j ])) , (10) where e ji is the embedding of edges from node r j to r i , and k is the recurrent step in the GRU.", "The node hidden representations are updated as: h ki = GRUCell ([ x i ; g k 1 G ] , h k 1 i ) (11) g kG = GRUCell ( 1 n (cid:88) i h ki , g k 1 G ) (12) where g G represents the hidden representation of the graph as the average of (hidden) node representations, and GRUCell denotes the gated recurrent cell function.", "We obtain the hidden representations of nodes in the final recurrent step ( K ) as HK = { h K 0 , h K 1 , ..., h Kn 1 } .", "Our decoder obtains the orders with the highest probability.", "We avoid enumerating all possible permutations for a set of nodes by generating their order autoregressively with an LSTM-based Pointer Network (PN; Vinyals et al. 2015): SCOREK ( R | R set ) = PN ( R | R set , HK , ) (13) PN ( R | R set , HK , ) = (cid:89) i P ( r i | r <i , HK ) (14) P ( r i | r <i , HK ) = softmax( v T tanh( W [ h di ; HK ])) (15) where are the parameters of the Pointer Network, h di is the i th step hidden representation of the Pointer Network, and v , and W are parameters.", "Hidden representation h di is updated by the input representation of the ( i 1) th ordered node: h di = LSTMCell ( x r i 1 , h di 1 ) .", "All parameters are optimized with standard back-propagation.", "Our experiments were carried out on the Groningen Meaning Bank (GMB; Bos et al. 
2017), which provides a large collection of English documents annotated with DRSs.", "We used the standard training, development, and test splits that come with the distribution of the corpus.", "All DRSs in the GMB were preprocessed into the tree-based format discussed in Section 2.1.", "We also extracted from the training data conditions and their order for training our graph ordering model.", "Dataset statistics are shown in Table 1.", "Models and Settings Before evaluating our generator per se, we assess the effectiveness of the proposed condition ordering model (see Section 2.4).", "Specifically, we compare four kinds of graphs: NoEdges, a graph without edges; FullEdges, a complete graph where each pair of nodes has edges; SiGraph, the proposed graph without bidirectional edges; and BiGraph, the proposed graph with bidirectional edges (see Figure 6).", "We also consider Counting, a baseline model which greedily orders pairs of conditions according to their frequency of appearance in the training data (see the Appendix for details).", "For all neural models the embedding dimension was 50 and the hidden dimension 300.", "The bidirectional LSTM used for representing the graph nodes has a single layer, and the recurrent step in the GCRN is 2 ($K = 2$).", "We applied the Adam optimizer (Kingma and Ba, 2014).", "We use accuracy to measure the percentage of absolute orders which are predicted correctly, and Kendall's τ coefficient to measure the relationship between two lists of ordered items; τ ranges from −1 to 1, where −1 means perfect inversion and 1 means perfect agreement.", "Results Table 2 summarizes our results.", "SiGraph performs better than NoEdges (+14.83% accuracy), showing that edge information is helpful for the representation of nodes which are used to order conditions.", "FullEdges performs worse than SiGraph (−13.68% accuracy), underscoring the fact that graph structure matters (i.e., edges are helpful when connecting certain pairs of nodes).", "BiGraph achieves the best ordering performance by a large margin compared to SiGraph (+9.63% accuracy).", "One possible reason is that bidirectionality ensures all nodes have incoming edges, which can be used to update the node representations.", "Models and Settings We first examine generation performance in an ideal setting where (gold-standard) condition orders are given and the indices of variables are fixed.", "We compared the proposed treeLSTM against Seq, a baseline sequence-to-sequence model which adopts a bidirectional LSTM as its encoder (see footnote 4).", "Trees were linearized in a top-down and left-to-right fashion, $X = [x_0, x_1, \ldots, x_{n-1}]$, where $n$ is the tree length.", "We obtained hidden representations $H = [h_0, h_1, \ldots, h_{n-1}]$ of the input with: $[h_0, h_1, \ldots, h_{n-1}] = \mathrm{BiLSTM}([x_0, x_1, \ldots, x_{n-1}])$.", "In addition, we included various models with tree-based encoders: ChildSum, the bidirectional childsum-treeLSTM encoder of Tai et al. (2015), which operates over right-branch binarized trees; Nary, the bidirectional Nary-treeLSTM of Tai et al. (2015), again over right-branch binarized trees; and Sibling, our bidirectional sibling-treeLSTM.", "All models were equipped with the same LSTM decoder, global attention (Bahdanau et al., 2015), and the copy strategy of See et al. 
(2017).", "The embedding dimension was 300 and the hidden dimension 512.", "All encoders and decoders have 2 layers.", "The detailed settings are shown in 4 The length of the input tokens can be around 4,000.", "the Appendix.", "We measure generation quality with case-insensitive BLEU (Papineni et al., 2002).", "Results Table 3 shows our results on the development dataset.", "Overall, treeLSTM models performs better (average + 1.69 BLEU) than sequence models.", "Nary performs better ( + 0.26 BLEU) than ChildSum because the latter cannot model the order of children.", "Sibling performs best (74.22 BLEU), because it it not only encodes the tree structure but also keeps track of sequential information.", "Models and Settings We finally, present our results in a more realistic setting where both problems of condition ordering and variable naming must be addressed.", "We recover condition order using four approaches: a Naive method which has no special-purpose ordering mechanism; the order of conditions is random in the development/test sets and fixed in the training set; Random , the order of conditions is random in the training, development, and test sets; Counting , the order of conditions is recovered by the Counting method; GraphOrder recovers the order of conditions with BiGraph .", "All comparison systems employ variable renaming as introduced in Section 2.2.", "We report experiments with a sequence-to-sequence generator and our sibling-TreeLSTM.", "Results Table 4 summarizes our results on the development set.", "Naive performs poorly, indicating that both Seq and Sibling models are sensitive to the order of conditions.", "Random, has higher variance with Seq ( + 16.51) compared to Sibling.", "Hidden representations for each timestep in Seq are heavily influenced by all previous steps, which are sequentially encoded; subtrees are encoded as a unit in Sibling, which is a more global representation for capturing patterns.", "Overall, we observe that the order of conditions plays a key role in the generation: both Seq and Sibling models improve when ordering of conditions is explicitly incorporated (either with Counting or GraphOrder).", "We observe that the combination of Sibling with GraphOrder achieves the best results (58.73 BLEU).", "Table 5 presents our results on the test set.", "We compare our Sibling encoder against a sequential one.", "Both models are interfaced with GraphOrder.", "We also compare to a previous graph-to-text model (Song et al., 2018; Damonte and Cohen, 2019) which has been used for generating from AMRs.", "We converted DRSs to graphs following the method of Liu et al. 
(2020); graphs were encoded with a GCRN (Seo et al., 2018) and decoded with an LSTM.", "As can be seen, Sibling+GraphOrder outperforms all comparison systems, achieving a BLEU of 59.26.", "However, compared to ideal-world generation (see Table 3), there is still considerable room for improvement.", "Figure 7 shows model performance on the test set against DRS size (i.e., the number of nodes in a DRS tree).", "Perhaps unsurprisingly, we see that generation quality deteriorates with bigger DRSs (i.e., with > 1,600 nodes).", "While BLEU is frequently adopted as an automatic evaluation metric for generation tasks, it is somewhat problematic in our case as it merely calculates word overlap between generated and gold-standard text without assessing whether model output is faithful to the semantics of the input (i.e., the DRS meaning representations).", "To this effect, we present examples of text generated by our model, demonstrating how the DRS input constrains and affects the output text.", "Figure 8 shows examples of text generation from the test set.", "In the first example, the model generates the word because from the rhetorical relation BECAUSE( b 10 , b 12 ).", "Temporal information (highlighted in blue in the figure) is also accurately reflected in the generated text ( sell is inflected to its present tense form).", "In addition, the model tends to over-generate (e.g., the word dollar is mentioned twice) and sometimes misses out on important determiners (e.g., some ).", "In the second example, the model generates the word themselves referring to the entities mentioned before, e.g., x 29 equals x 27 , which refers to inmates , resolving the coreference.", "In the third example, the model generates the modal verb must in accordance with the scope operator NEC (a shorthand for Necessity, □).", "Also, the model generates all for food and goods , corresponding to the Implication (IMP) condition (i.e., $\forall x (P(x) \Rightarrow Q(x))$).", "Much previous work has focused on text generation from formal representations of meaning, focusing exclusively on isolated sentences or queries.", "The literature offers a collection of approaches to generating from AMRs, most of which employ neural models and structured encoders (Song et al., 2018; Beck et al., 2018; Damonte and Cohen, 2019; Ribeiro et al., 2019; Zhu et al., 2019; Cai and Lam, 2020; Wang et al., 2020).", "Other work generates text from structured query language (SQL), adopting either sequence-to-sequence (Iyer et al., 2016) or graph-to-sequence models (Xu et al., 2018).", "Basile (2015) was the first to attempt generation from DRT-based meaning representations.", "He proposes a pipeline system which operates over graphs and consists of three components: an alignment module learns the correspondence between surface text and DRS structure, an ordering module determines the relative position of words and phrases in the surface form, and a realizer generates the final text.", "Narayan and Gardent (2014) simplify complex sentences with a two-stage model which first performs sentence splitting and deletion operations over DRSs and then uses a phrase-based machine translation model for surface realization.", "Our work is closest to Basile (2015); we share the same goal of generating from DRSs; however, our model is trained end-to-end and can perform long-form generation for documents and sentences alike.", "We also adopt an ordering component, but we order DRS conditions rather than lexical items, and propose a model capable of inferring a global 
order.", "There has been long-standing interest in information ordering within NLP (Lapata, 2003; Abend et al., 2015; Chen et al., 2016; Gong et al., 2016; Logeswaran et al., 2018; Cui et al., 2018; Yin et al., 2019; Honovich et al., 2020).", "Our innovation lies in conceptualizing ordering as a graph scoring task which can be further realized with graph neural network models (Wu et al., 2020).", "In this paper, we have focused on document-level generation from formal meaning representations.", "We have adopted DRT as our formalism of choice and highlighted various challenges associated with the generation task.", "We have introduced a novel sibling treeLSTM for encoding DRSs rendered as trees and shown it is particularly suited to trees with wide branches.", "We have experimentally demonstrated that our encoder coupled with a graph-based condition ordering model outperforms strong comparison systems.", "In the future, we would like to embed our generator in practical applications such as summarization and question answering.", "We thank the anonymous reviewers for their feedback.", "We gratefully acknowledge the support of the European Research Council (Lapata, Liu; award number 681760), the EU H2020 project SUMMA (Cohen, Liu; grant agreement 688139) and Bloomberg (Cohen, Liu)." ]
[ "objective", "abstain", "objective", "objective", "result", "abstain", "other", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "objective", "abstain", "method", "method", "method", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "objective", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "other", "abstain", "method", "result", "objective", "objective", "method", "other", "other" ]
[ "Character-level information is included in many NLP models, but evaluating the information encoded in character representations is an open issue.", "We leverage perceptual representations in the form of shape, sound, and color embeddings and perform a representational similarity analysis to evaluate their correlation with textual representations in five languages.", "This cross-lingual analysis shows that textual character representations correlate strongly with sound representations for languages using an alphabetic script, while shape correlates with featural scripts.", "We further develop a set of probing classifiers to intrinsically evaluate what phonological information is encoded in character embeddings.", "Our results suggest that information on features such as voicing are embedded in both LSTM and transformer-based representations.", "On the one hand, writing is an essential form of human communication.", "Writing systems and orthographies differ across languages and impact our reading behavior.", "Psycholinguists have extensively studied the effect of orthographic depth, i.e., the transparency of grapheme-to-phoneme mappings, on reading acquisition as well as skilled reading (Seymour et al., 2003).", "On the other hand, the wide range of cross-linguistic diversity is still a major challenge for natural language processing (NLP) and for the study of language more generally (Mielke et al., 2019; Gutierrez-Vasques and Mijangos, 2020), especially on sub-word levels (Gutierrez-Vasques et al., 2021).", "This increases the importance of cross-lingual analyses of character-level language models (LMs), because anglocentrism in linguistic research is not only prevalent in NLP, but also in (reading and) orthography research (Share, 2008).", "with Latin scripts, since they contain meaningful information on various linguistic levels and enhance the robustness of models.", "Oh et al. 
(2021) suggest that character LMs provide a more human-like account of sentence processing, which assumes a larger role of morphology, phonotactics, and orthographic complexity than was previously thought.", "Moreover, including character and sub-character information in LMs for Asian scripts is a standard practice.", "Despite of this recent attention, work focusing on getting a deeper understanding of character representation is scarce (Kann and Monsalve-Mercado, 2021), in particular regarding the comparison between languages and different types of scripts.", "The goal of this work is to improve our understanding of learned character representations, for better interpretability of the models.", "Like other neural network based models, character-level LMs can be seen as black-box methods and reveal limited insights about the causes for their predictions (Gilpin et al., 2018).", "We investigate the information encoded in character embeddings by comparing them to perceptual representations.", "Such representations we design by mimicking features of human language processing, from reading, writing and speaking, by the creation of embeddings based on the shape of characters, the sound (phonological features derived from grapheme-to-phoneme mappings) and color (elicited in the form of grapheme-color mappings from synesthetes).", "Contributions We train models to learn three types of character embeddings: a positive pointwise mutual information (PPMI) vectorization, a recurrent model, and a transformer model.", "As an intrinsic evaluation method, we conduct a representational similarity analysis (RSA) between the distances of textual character representations and the perceptual representations in the form of shape, sound, and color embeddings.", "Furthermore, to pro-6819 vide more interpretable evaluation methods for character embeddings, we propose a novel probing task of predicting phonological features.", "Crucially, we address the cross-linguistic challenges that arise with character-level modeling by taking into account languages of varying scripts and orthographic depths.", "We argue that character-level black-box models can only be understood through cross-linguistic approaches and not on individual languages.", "We perform analyses of five languages: Dutch, English, Japanese, Korean, and Spanish.", "We discuss the compelling patterns of significant correlations and show the effectiveness of the probing classifiers even in a zero-shot scenario.", "The implementation and character representations are available online 1 .", "Character-level information in LMs.", "Including character-level information in LMs of languages with Latin scripts has become a common practice in NLP in recent years.", "This has been the case for different tasks, such as language modeling (Kim et al., 2016; Al-Rfou et al., 2019), part-of-speech tagging (Ling et al., 2015), morphological inflection (Faruqui et al., 2016; Kann and Schtze, 2016; Kann et al., 2020), named entity recognition (Lample et al., 2016), machine translation (Sen-nrich et al., 2016; Ngo et al., 2019).", "Character-level information can enhance the models by providing background knowledge in the form of the underlying structures of words in a language (Adouane et al., 2018).", "Ma et al. 
(2020) showed how combining characterand word-level information in pre-trained LMs improves not only the performance but also the robustness of the model.", "For certain languages, it is standard practice to include sub-token information in LMs, which happens naturally due to the compositional structure of their orthographies.", "This is the case for East Asian languages such as Korean and Japanese (e.g., Misawa et al. 2017; Chen et al. 2015).", "Korean LMs are often trained on Jamos (i.e., letters, as opposed to syllables), the smallest unit of the Korean script (Ahn et al., 2017; Park et al., 2018).", "This reduces the vocabulary size and injects syntactic and semantic information to the model that is difficult to access with conventional character-or token-level units (Stratos, 2017).", "Recently, Lee 1 https://github.com/syssel/ Interpreting-character-embeddings et al. (2020b) showed that a Korean BERT model using sub-character information requires less training data than previous models.", "Similarly, Japanese LMs also benefit from sub-character information (Nguyen et al., 2017).", "Evaluating character embeddings.", "Character-based language models are most often evaluated on downstream NLP tasks or on next character or word prediction (e.g., Takase et al. 2019; Tay et al. 2021; Clark et al. 2021).", "Additionally, they can be evaluated on word-level intrinsic evaluation tasks such as word analogy or similarity (e.g., Li et al. 2015).", "While work on intrisic evaluation of character embeddings is scarce (Kann and Monsalve-Mercado, 2021), the evaluation of neural models trained on phonemes have received more attention, focusing on what phonological knowledge is embedded within (Silfverberg et al., 2018; Kolachina and Magyar, 2019; Mayer and Nelson, 2020; Mayer, 2020; Silfverberg et al., 2021).", "Mayer (2020) and Mayer and Nelson (2020) use characters as an approximation of phonemes in the case of Samoa and Finnish, respectively, as graphemes are closely connected to phonemes in these orthographies.", "The methods we leverage in this paper, previously applied for evaluating different types of representations, are representational similarity analysis (RSA) and probing classifiers.", "The former was first proposed by Kriegeskorte et al. (2008) for comparing brain activity vectors in heterogeneous representational spaces, but has also been applied in NLP as an interpretability metric as it allows us to study the relation between language representations (Abnar et al., 2019; Abdou et al., 2019; Chrupaa and Alishahi, 2019).", "RSA enables a transparent comparison between the representational geometries of different models and modalities (Sgaard, 2021).", "Contrarily, probing classifiers learn to classify output representations in supervised settings (Et-tinger et al., 2016).", "The intuition behind probing is that if a classifier can be learned to accurately predict certain linguistic properties from the representations of a neural model, then this model has \"learned\" this property.", "Typically, lightly parametrized classifiers (like logistic regression) are applied, however, the exact trade-off between accuracy and complexity of a probe is an open question (Belinkov, 2021).", "In recent years, NLP studies have used probing classifiers to investigate 6820 whether LMs encode linguistic properties including morphological features (such as person and number , Torroba Hennigen et al. 
(2020)) and word sense (Coenen et al., 2019).", "However, we apply probing classifiers for the first time to character representations.", "Impact of different orthographies on linguistics and human language learning.", "Orthographic depth, i.e., the transparency of grapheme-phoneme correspondences in written language (Frost et al., 1987; Katz and Frost, 1992), is a well-studied fac-tor influencing reading acquisition and skilled reading behavior (Seymour et al., 2003; Landerl et al., 2013; Richlan, 2020).", "For instance, English is considered to be a deep orthography , as there are often multiple different pronunciations for the same spelling patterns (e.g., <gh> in tough and though ).", "This contrasts shallow orthographies with more reliable grapheme-phoneme correspondences, such as Spanish.", "The consistency and complexity with which print reflects speech is one of the prime factors of cross-linguistic differences in reading flu-ency (Ziegler et al., 2010; Schmalz et al., 2015).", "It is the starting point for any discussion that centers on reading development across languages (Pa-padopoulos et al., 2021).", "Since the orthography has such a high impact on human reading behavior, its effect should also be considered more carefully in the development of NLP models.", "models.", "While orthographic depth has been discussed at length in reading research and psychology, it has rarely been addressed in NLP.", "This partly due to the prevalent anglocentrism and missing resources (Bender, 2018).", "Some research has gone into studying the differences between languages when it comes to train computational LMs (Mielke et al., 2019), showing the impact of the vocabulary size and sentence length, but there is lack of NLP research analyzing or taking into account the varying orthographies across languages.", "Two notable exceptions are the recent methods proposed by Mar-jou (2021) and Sproat and Gutkin (2021), who use neural networks to estimate the transparency of orthographies and degree of logography, respectively.", "Moreover, Gorman et al. 
(2020) conducted a shared task on grapheme-to-phoneme prediction.", "Their results show an urgency for improving these systems and the pronunciation dictionaries used to train them across languages and scripts.", "We train three types of character embeddings based on textual input: count-based PPMI embeddings, and embeddings learned by LSTM and transformer language model.", "We use the Wiki40B multilingual dataset (Guo et al., 2020) to train the character models.", "For each of the five languages, English (en), Dutch (nl), Spanish (es), Korean (ko), and Japanese (ja), we extract training sets of 3 million characters.", "See Appendix A for details on preprocessing.", "The first three languages all use variants of the Latin script, while Hangul (Korean) and Hiragana (one of three scripts used in Japanese) are syllabic scripts, in which most graphemes denote entire syllables.", "We preprocess Korean Hangul characters, decomposing them into constituent Jamos , each corresponding roughly to a single phoneme.", "For Japanese, we convert Kanji symbols to Hiragana and train the language model on Hiragana and Katakana characters.", "The representational similarity analyses are then only performed on Hiragana .", "Figure 1 shows 2-dimensional plots of the learned textual character representations.", "Count-based PPMI embeddings.", "We generate vectorized character representations in a purely count-based manner with a positive pointwise mutual information (PPMI) weighting.", "While the importance of positional information is less obvious for modelling word semantics, it is crucial for modelling the distribution of sounds.", "Following the approach by Mayer (2020), we let our PPMI weighting diverge from traditional bag-of-words models by distinguishing contexts by their relative position to a target.", "Thus, embeddings will have indepen-dent values for the contexts AB_ , _AB , and A_B , counting the number of times a target follows, precedes, and mediates a string AB .", "Using bigram contexts, the resulting embeddings have a dimension of 3 c 2 , where c is the number of characters in a given language, and 3 indicating the number of possible relative positions.", "LSTM.", "We train a recurrent language model consisting of two unidirectional long-short term mem-ory (LSTM) layers.", "It receives sequences of 40 characters as input at each time step and is trained for next character prediction.", "The model is trained with an Adam optimizer (Kingma and Ba, 2015), 6821 o m k u t p b r h d z s g y l f x q e i a j c w n v en PPMI vowelconsonant a b c d e f g h i j k l m n o p q r s t u v w x y z en LSTM vowelconsonant a b c d e f g h i j k l m n o p q r s t u v w x y z en Transformer vowelconsonant ko PPMI vowelconsonant ko LSTM consonantvowel ko Transformer consonantvowel Figure 1: tSNE cluster plots of the character distances from the three types of character language models for English and Korean (see Appendix Figure 5 for the plots for Dutch, Spanish and Japanese).", "an initial learning rate of 0.01, and a batch size of 128.", "We extract the hidden representations of 128 dimensions as the character embeddings.", "See Appendix C.1 for training specifications and Appendix C.2 for perplexity metrics.", "We additionally experimented with bidirectional LSTMs (see Table 1) and 1-layer LSTMs without any substantial changes in the results (Appendix C).", "Transformer.", "Similarly, we also train a transformer character model on the same data (Vaswani et al., 2017).", "The input layer consists of character 
and positional embeddings, followed by a single transformer block with 2 heads and a hidden layer size of 128.", "We follow the same training procedure as for the LSTM and extract the representations of the hidden layer as the character embeddings.", "Again, see Appendix C for additional details, model modifications, and perplexity metrics.", "Sound.", "The first perceptual representation that we consider is sound .", "To retrieve this representation, we map characters to a phonological distinctive feature space.", "This method has previously been applied to phonemes as a means of generalisation compared to sparse representations (Rumelhart and McClelland, 1986; Mirea and Bicknell, 2019), and to evaluate the knowledge embedded representations learned from neural networks (Silfverberg et al., 2018; Kolachina and Magyar, 2019).", "As sound and speech are only indirectly reflected in writing, we approximate sound representations of characters using grapheme-to-phoneme alignment: For all languages, we extract data from the WikiPron pronunciation dictionary (Lee et al., 2020a) and use the m2m-aligner (Ji-ampojamarn et al., 2007) to align graphemes with phonemes in an unsupervised manner.", "Having alignments from the WikiPron data, we chose the most frequent phoneme mapping to represent the sound of each character (resulting mappings are listed in the Appendix D) We also considered extracting the most frequent phoneme mapping only from word-initial positions.", "The intuition behind this approach was to retrieve representations as close to phonemic as possible, as sounds in the initial position are expected to be less prone to phenomena such as reduction and assimilation reflected in the WikiPron data (e.g., reduction of English \"o\" to @ ).", "However, the word-initial position is also subject to phonotactic restrictions: For exam-6822 ple, in Korean only including consonants occurring word-initially heavily reduces the inventory considered.", "Having phonemes mapped to characters, we are able to associate it with a set of phonological distinctive features, which we use to form our final sound representation: Using the ipapy 2 toolkit, we retrieve International Phonetic Alphabet (IPA) descriptions of the phoneme mappings from which we create a sparse vector that describes what phonological features (e.g., consonant manner of articulation, plosive, or vowel height, front) are active.", "For every language, this provides us with a sound embedding table, S | V || F | , where V is the set of characters and F is the set of distinctive features: S i,j = (cid:40) 1 if F j phonmap( V i ) .", "Color.", "Inspired by Kann and Monsalve-Mercado (2021), we compute color character representations from synesthesia data.", "Grapheme-color synesthesia is a neurological phenomenon in which viewing a grapheme elicits an automatic, involuntary, and consistent sensation of color (Eagleman et al., 2007).", "Color-to-letter associations in synesthesia allow to examine the relationships between visual, acoustic, and semantic aspects of language.", "Recent research in this area has found cross-linguistic similarities in synesthesia, suggesting that some influences on grapheme-color associations in synesthesia might be universal and highlighting the importance of multilingual analyses (Root et al., 2018).", "Figure 2 shows example grapheme-color associations from individual subjects for each of our studied languages.", "It emphasizes the preference for red color tones for the first letter of the alphabet irrespective 
of the language (Root et al., 2018).", "We use the cross-linguistic synesthesia data collected by Root et al. 2018 (see Appendix B for the dataset statistics).", "In order to extract color representations we compute the Euclidean distances between the 3-dimensional CIELuv color coding scheme for all character combinations.", "We average the distances across all participants of the same language.", "The resulting vector representations re-flect the finding of Root et al. (2018) that the first grapheme in any language is unusually distinct (see Figure 4 in Appendix) .", "Shape.", "Lastly, we also create simple character representations based on their shape.", "Previous works (Brang et al., 2011; Watson et al., 2012) have relied on Gibson (1969) or Courrieu et al. (2004) to build shape-related embeddings from human similarity judgements.", "However, we create shape embeddings directly from their visual expressions.", "We create an image for each printed character as shown in Figure 6 in the appendix.", "For each script, all images have the same width and height (the largest width among all characters incremented with 10 pixels, and the same for the height, which results approximately in 35 45 pixels) and all characters are drawn at position {5,5} .", "We use the font Arial Unicode MS with size 28 .", "From these images, we create shape representations by reading the images as gray scale images row-wise from top to bottom and flattening the matrix into vectors.", "In order to analyze the relation between the learned character representations and the three perceptual representations sound, shape, and color we first compute the pairwise distances between characters of a single model/representation type to analyze how similar the model's representations for each character are to each other 3 .", "For each pair of experimental conditions, the spatial correlation is calculated between the distances of all characters of a language.", "Figure 3 shows the Pearson correlations between the character distances of all embedding types.", "The figure also includes a baseline, where the correlation between random distances and the distances of the respective character representations is computed.", "We correct the significance results by applying the Bonferroni correction for multiple comparisons.", "As expected, the textual character representations show high correlation amongst each other for all five languages.", "The correlations between the textual embeddings and the perceptual representations show that even though the first are purely trained on written language, they still learn to encode certain inherent characteristics of human language processing and production.", "As a general pattern, the textual character representations correlate strongly with sound representations, moderately with color representations, and not at all with the shape representations (with the 3 We use cosine distance for all textual, sound and shape representations; and Euclidean distance for color.", "exception of Korean, discussed below).", "Japanese character embeddings behave differently.", "For instance, the correlation with the sound representations is weaker than for the other languages, which might be due to the syllabic nature of the Japanese script.", "In the following, we discuss the results for each of the perceptual embedding types in detail.", "The PPMI character embeddings show the highest correlation with sound representations, followed closely by transformer embeddings.", "This is notable in the three languages 
with Latin scripts (en, es, nl).", "To explain this finding, we speculate that the context and learning direction available to the LMs provide phonetic information.", "While the PPMI embeddings have access to contextual information in both directions, the unidirectional LSTM and transformer learn from left-to-right only.", "Therefore, as an addition, we trained a bidirectional LSTM (hid-den dimension = 256) to show that the addition of right-to-left information improves the correlation to the sound representations.", "The results are shown in Table 1.", "Moreover, comparing the results across Latin script, we note that Spanish character embeddings from all models achieve higher correlations than Dutch and English.", "The shallow orthography of the Spanish language explains this finding.", "This is also the case for Korean.", "Our findings on the correlation between English character embeddings and synesthesia data are in line with Kann and Monsalve-Mercado (2021), who find that LSTMs agree with human letter-color perceptions more than transformers on a dataset with more participants (0.08 for LSTM-LM and 0.0 for transformer-LM).", "Moreover, we reach the same conclusion for the other alphabetic scripts, Dutch and Spanish, while for Korean and Japanese there is no clear pattern evident from the correlation coefficients.", "This might be due to the smaller 6824 number of synesthete participants in the dataset.", "The character embeddings of non-featural Latin scripts show low (or even negative) correlation to the shape embedding.", "However, due to their featural writing systems (Sampson, 1985; Marjou, 2021), Japanese and especially Korean embeddings correlate significantly with shape.", "The fact that the Korean consonant graphemes were designed to resemble the place of articulation (Lee, 2021; Gale, 1912), can explain the high correlations between character and shape embeddings for this language.", "This is also shown in the positive correlation between sound and shape representations, which is absent for the other languages.", "To analyze this further, we compare our initial results with transformer character representations computed based on Jamos (e.g., individual phonemes such as \" \"), to character representations of full Hangul characters (e.g., syllables such as \" \").", "Table 2 shows higher correlations for characters decomposed into Jamos.", "The correlation between sound and shape is also lower for full syllables (0.31).", "In this light, the result is unsurprising and can be interpreted as an effective proof-of-concept of using a correlation analysis between textual and perceptual representations.", "More genuine shape representations, for example learned by a convolutional neural network, could be applied to reveal more accurate correlation patterns for Latin scripts.", "Except for Japanese, the results show that the neural embeddings correlate the most with the perceptual sound representations.", "To get a closer look at the information that may be encoded in the dense embeddings, we design a probing task in which classifiers are trained to predict whether certain distinctive features are present given character embeddings as input.", "For each distinctive feature, we train a binary Logistic Regression to predict whether the the feature is present (1), or not (0).", "The labels are given by the sound representations as explained in Section 3.2.", "As the number of samples is small (limited to the number of characters in a language), we do this in a leave-one-out manner, 
training a classifier for each character, while using the rest for training.", "In both test and training, for features that only concern consonants (e.g., manner of articulation and voicing ), we exclude vowels, and similarly, for features that only concern vowels (e.g., vowel height and vowel rounding ), we exclude consonants.", "The performance of the probes are evaluated for each distinctive feature using F1 scores and by comparison with two baseline strategies, namely,", "(a) to predict labels uniformly at random, and", "(b) to always predict the most frequent label according to the training distribution.", "The former is given as the average across 1000 runs.", "For some features, choosing the most frequent label is a good strategy and will yield good results.", "To further challenge the knowledge learned by the embeddings and distinguish the classifiers from the strategy of choosing the most frequent baseline, we create a zero-shot setup in which the classifiers will have to be able to transfer knowledge between features in order to excel in the task.", "In particular, we test 1) if a classifier trained to predict whether a consonant is voiced is able to identify vowels and 2) if labial consonants are retrieved by a classifier trained to predict vowel rounding.", "While the intuition behind 1) relates to the sonority sequencing principle (Clements, 1990), which states that the nucleus of a syllable (vowels in the majority of the cases) represents a sonority peak, the intuition behind 2) is more experimental, relying on a global feature such as 'rounding'.", "The results for the probing classifiers are found in Table", "3. Generally, both LSTM and transformer embeddings outperform both the most-frequent and random baselines, with the transformer beating the LSTM by a small margin.", "This should, however, be taken with a grain of salt considering the limited number of examples.", "Considering the global features, vowel and consonant , classifiers are able to learn this distinction using both LSTM and transformer character embeddings.", "In particular, consonants are identified with high certainty.", "This is, however, the majority group (ref. the most frequent strategy).", "The F1 scores for vowel prediction are considerably lower.", "However, in this case they cannot be explained by neither a most-frequent strategy nor a random baseline, which indicates that a global vowel/consonant distinction is captured in the embeddings.", "The findings for the voiced/voiceless consonant distinction are similar.", "But here the groups are more balanced, which provides the most-frequent strategy with less of an advantage and in turn the F1 scores are generally lower.", "For Korean, the scores are lower compared to the other languages.", "As the feature of consonant voicing correlates with manner in Korean (with all plosives, affricates and fricatives being voiceless, and plosives being the majority class), the task captured by the classifier may be distorted.", "The fact that the classifier may not be able to pick up features of voicing from the Korean embeddings are reflected in the zero-shot experiment.", "The results for the first zero-shot experiment for predicting vowels using the classifier for identifying voiced consonants are found in Table", "4. 
Here, the results for Korean are worse than the random baseline.", "While the results for English and Dutch can be explained by the most-frequent strategy, the result for Spanish indicates that features of voicing or sonority are encoded in the embeddings, amplifying the initial results from the probing classifier experiment.", "Turning from consonant to vowel features, the inventory of vowels is considerably smaller, leaving a small number of training examples with few positive examples.", "Thus, the results of the probing classifiers are associated with uncertainty.", "For the zero-shot task of retrieving consonants with labial features from a classifier trained to predict vowel rounding, we focus our analysis on Spanish LSTM embeddings as they showed the most promising results for predicting rounding in the regular probing 6826 Language F1 True positive False positive False negative es 0.47 b f v w g k q x y z p m Table 5: Results from the zero-shot task to predict 'rounded' consonants using the Spanish LSTM embeddings.", "task.", "However, as can be seen in Table 5, while the classifier for Spanish has a high recall its precision lacks behind and retrieves many false positives.", "Overall, we believe that the results are promising and a good indication on how character representations can capture features related to phonology.", "This especially in light of the results from the first zero-shot task, that suggested that classifiers are able to transfer knowledge of sonority from embeddings of consonants to unseen vowels.", "In this work, we attempted to understand the information encoded in character-level representations.", "We obtained two main types of embeddings: text-based embeddings and perceptual embeddings.", "While the first type of representations (PPMI, LSTM, and transformer) were trained from raw text data, perceptual representations were obtained from sources mimicking human language, i.e., pronunciation dictionaries, synesthesia data and shape visualizations.", "We have performed representational similarity analyses between these types of embeddings for five different languages.", "Besides, we defined and trained models to predict certain phonological distinctive features in order to interpret the embeddings.", "We found interesting patterns in the representational similarity analysis as a simple first approach for intrinsic character embedding evaluation.", "While clearly outperforming a random baseline in most cases, the strength of the correlations vary between scripts.", "For instance, the strong correlation between Korean character embeddings and shape representations provides positive evidence of the suitability of this approach.", "Further research is required to dissect the differences between character LMs: While the LSTM embeddings showed stronger correlation with color, the transformer embeddings were supe-rior when compared to sound representations.", "The inclusion of additional languages and scripts will be helpful to identify more generalizable insights.", "that they contribute differently for different tasks.", "For instance, sound representations would be expected to be useful for tasks revolving around phonology, such as grapheme-to-phoneme conversion, or shape representations could be relevant for predicting orthographic errors.", "The phonological probing tasks show promising results, especially with respect to interpretability.", "Besides, this methodology is applicable to any language with sufficient raw data and a pronunciation dictionary, and could 
potentially shed light in measuring the phonological difficulty of certain languages.", "In future work, we will focus on the development of more sophisticated probes, for instance, multitask networks with shared layers across tasks.", "Moreover, the labels of the probing task were given from using the sound embeddings retrieved from the most frequent phoneme mapping.", "Had we focused the analysis on contextual character embeddings instead, that would allow us to distance ourselves from this paradigm as we would be able to analyse character and sound embeddings in the context they occur in.", "Finally, we stress the need for further intrinsic evaluation methods for character representations.", "The high impact of orthography on human language learning is an adamant argument to consider the cross-linguistic diversity of writing systems more carefully in the development of NLP models.", "We thank the anonymous reviewers from ACL Rolling Review for their valuable comments.", "The first author is supported by the project Script and Text in Time and Space funded by the Velux Foundations." ]
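The positional PPMI construction described above (independent values for the bigram contexts AB_, _AB, and A_B, giving $3c^2$-dimensional vectors) is compact enough to sketch in code. The following is a minimal illustration under stated assumptions, not the paper's implementation: the function name, the toy corpus, and the character inventory are all hypothetical.

```python
import numpy as np

def positional_ppmi_embeddings(corpus, chars):
    """Positional PPMI character embeddings: each target character is
    described by bigram contexts in three relative positions, AB_ (target
    follows AB), _AB (target precedes AB) and A_B (target mediates A and B),
    giving 3 * c**2 dimensions for a c-character inventory."""
    c2i = {ch: i for i, ch in enumerate(chars)}
    c = len(chars)
    counts = np.zeros((c, 3 * c * c))

    def ctx(pos, a, b):
        # Flat index of bigram (a, b) in relative position pos (0, 1 or 2).
        return pos * c * c + c2i[a] * c + c2i[b]

    for line in corpus:
        for i, t in enumerate(line):
            if t not in c2i:
                continue
            if i >= 2 and line[i - 2] in c2i and line[i - 1] in c2i:      # AB_
                counts[c2i[t], ctx(0, line[i - 2], line[i - 1])] += 1
            if i + 2 < len(line) and line[i + 1] in c2i and line[i + 2] in c2i:  # _AB
                counts[c2i[t], ctx(1, line[i + 1], line[i + 2])] += 1
            if 1 <= i < len(line) - 1 and line[i - 1] in c2i and line[i + 1] in c2i:  # A_B
                counts[c2i[t], ctx(2, line[i - 1], line[i + 1])] += 1

    # PMI = log p(t, ctx) / (p(t) p(ctx)); keep only positive values (PPMI).
    p_tc = counts / counts.sum()
    p_t = p_tc.sum(axis=1, keepdims=True)
    p_c = p_tc.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.where(p_tc > 0, np.log(p_tc / (p_t * p_c)), 0.0)
    return np.maximum(pmi, 0.0)

# Hypothetical usage on a toy corpus:
emb = positional_ppmi_embeddings(["banana", "bandana"], list("abdn"))
print(emb.shape)  # (4, 48) = (c, 3 * c**2)
```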
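The representational similarity analysis itself reduces to correlating the pairwise character distances of two embedding spaces. A minimal sketch follows, mirroring the distance choices reported above (cosine for textual, sound, and shape; Euclidean for color); the function name and the random stand-in matrices are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

def rsa(emb_a, emb_b, metric_a="cosine", metric_b="cosine"):
    """Correlate the pairwise character distances of two embedding spaces.

    emb_a, emb_b: (n_chars, dim) matrices with rows aligned to the same
    character inventory. Returns Pearson r and p-value computed over the
    condensed (upper-triangular) distance vectors.
    """
    d_a = pdist(emb_a, metric=metric_a)
    d_b = pdist(emb_b, metric=metric_b)
    return pearsonr(d_a, d_b)

# Hypothetical usage: correlate stand-in textual embeddings with stand-in
# binary sound feature vectors for a 26-character inventory.
rng = np.random.default_rng(0)
lstm_emb = rng.normal(size=(26, 128))
sound_emb = rng.integers(0, 2, (26, 40))
r, p = rsa(lstm_emb, sound_emb)
print(f"RSA correlation: r={r:.3f}, p={p:.3g}")
```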
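The probing setup, one binary logistic regression per distinctive feature evaluated in a leave-one-out loop, can likewise be sketched. This assumes scikit-learn and stand-in inputs, and is illustrative rather than the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def loo_probe(embeddings, labels):
    """Leave-one-out probe for one binary distinctive feature.

    embeddings: (n_chars, dim) character vectors; labels: (n_chars,) 0/1
    flags from the sound representations. Each character is held out once
    while a classifier is trained on the rest (assumes both classes occur
    in every training fold).
    """
    preds = np.zeros_like(labels)
    for i in range(len(labels)):
        train = np.arange(len(labels)) != i
        clf = LogisticRegression(max_iter=1000)
        clf.fit(embeddings[train], labels[train])
        preds[i] = clf.predict(embeddings[i:i + 1])[0]
    return f1_score(labels, preds)

# Hypothetical usage for a 'voiced' feature over consonant embeddings only
# (vowels excluded, as in the paper):
rng = np.random.default_rng(0)
cons_emb = rng.normal(size=(20, 128))
voiced = rng.integers(0, 2, 20)
print(f"Leave-one-out F1: {loo_probe(cons_emb, voiced):.2f}")
```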
[ "abstain", "method", "abstain", "objective", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "objective", "method", "objective", "method", "abstain", "method", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "method", "result", "abstain", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "other" ]
[ "Neural Networks trained with gradient descent are known to be susceptible to catastrophic forgetting caused by parameter shift during the training process.", "In the context of Neural Machine Translation (NMT) this results in poor performance on heterogeneous datasets and on sub-tasks like rare phrase translation.", "On the other hand, non-parametric approaches are immune to forgetting, perfectly complementing the generalization ability of NMT.", "However, attempts to combine non-parametric or retrieval based approaches with NMT have only been successful on narrow domains, possibly due to over-reliance on sentence level retrieval.", "We propose a novel n-gram level retrieval approach that relies on local phrase level similarities, allowing us to retrieve neighbors that are useful for translation even when overall sentence similarity is low.", "We complement this with an expressive neural network, allowing our model to extract information from the noisy retrieved context.", "We evaluate our semi-parametric NMT approach on a heterogeneous dataset composed of WMT, IWSLT, JRC-Acquis and OpenSubtitles, and demonstrate gains on all 4 evaluation sets.", "The semi-parametric nature of our approach opens the door for non-parametric domain adaptation, demonstrating strong inference-time adaptation performance on new domains without the need for any parameter updates.", "Over the last few years, neural sequence to sequence models (Sutskever et al., 2014; Bahdanau et al., 2015; Cho et al., 2014) have revolutionized the field of machine translation by significantly improving translation quality over their phrase based counterparts (Sennrich et al., 2015; Wu et al., 2016; Zhou et al., 2016).", "With more gains arising from continued research on new neural network architectures and accompanying training techniques (Vaswani et al., 2017; Gehring et al., 2017; Chen et al., 2018), NMT researchers, both in industry and academia, have doubled down on their ability to train high capacity models on large corpora with gradient based optimization.", "However, despite huge improvements in overall translation quality NMT has shown some glaring weaknesses, including idiom processing, and rare word or phrase translation (Koehn and Knowles, 2017; Isabelle et al., 2017; Lee et al., 2018) tasks that should be easy if the model could retain learned information from individual training examples.", "NMT has also been shown to perform poorly when dealing with multi-domain data (Farajian et al., 2017a).", "This catastrophic for-getting' problem has been well-studied in traditional neural network literature, caused by parameter shift during the training process (McCloskey and Cohen, 1989; Santoro et al., 2016).", "Nonparametric methods, on the other hand, are resistant to forgetting but are prone to over-fitting due to their reliance on individual training examples.", "We focus on a non-parametric extension to NMT, hoping to combine the generalization ability of neural networks with the eidetic memory of non-parametric methods.", "Given a translation query, we rely on an external retrieval mechanism to find similar source-target instances in the training corpus, which are then utilized by the model.", "There has been some work on semi-parametric NMT (Gu et al., 2017; Zhang et al., 2018b; Cao and Xiong, 2018), but its effectiveness has been confined to narrow domain datasets.", "Existing approaches have relied on sentence level similarity metrics for retrieval, which works well for domains with high train-test overlap, but 
fails to retrieve useful candidates for broad domains.", "Even if we could find training instances with overlapping phrases it's likely that the information in most retrieved source-target pairs is noise for the purpose of translating the current query.", "To retrieve useful candidates when sentence similarity is low, we use n-gram retrieval instead of sentence retrieval.", "This results in neighbors which have high local overlap with the source sentence, even if they are significantly different in terms of overall sentence similarity.", "This is intuitively similar to utilizing information from a phrase table (Koehn et al., 2003) within NMT (Dahlmann et al., 2017), without losing the global context lost when constructing the phrase table.", "We also propose another simple extension using dense vectors for n-gram retrieval which allows us to exploit similarities beyond lexical overlap.", "To effectively extract the signal from the noisy retrieved neighbors, we develop an extension of the approach proposed in (Cao and Xiong, 2018).", "While (Cao and Xiong, 2018) encode the retrieved targets without any context, we incorporate information from the current and retrieved sources while encoding the retrieved target, in order to distinguish useful information from noise.", "We evaluate our approach on a multi-domain English-French corpus constructed from narrow domain datasets like JRC-Acquis (Stein-berger et al., 2006; Tiedemann) and OpenSubtitles (Tiedemann, 2009) 1 , and the standard IWSLT and WMT bilingual corpora, as described in Sections 3 and", "4. Our results, for the first time, indicate that semi-parametric NMT can be beneficial beyond narrow domain tasks, demonstrating gains of around 0.5 BLEU on WMT, and huge gains ranging from 2-10 BLEU points on IWSLT, JRC-Acquis and OpenSubtitles, when compared to a strong sequence to sequence baseline.", "The semi-parametric nature of our model enables non-parametric inference-time adaptation to new datasets, without the need for any parameter updates.", "When trained on WMT and evaluated on the other datasets, our model out-performs fine-tuning based adaptation (Luong and Manning, 2015) on JRC-Acquis and OpenSubtitles, and significantly improves performance over the non-adapted model on IWSLT.", "Standard approaches for Neural Machine Translation rely on seq2seq architectures (Sutskever et al., 2014; Bahdanau et al., 2015), where given a source sequence X = { x 1 , x 2 , . . . x T x } and a target sequence Y = { y 1 , y 2 , . . . y T y } , the goal is to model the probability distribution, p ( y t | X, y 1 , . . . y t 1 ) .", "Semi-parametric NMT (Dahlmann et al., 2017; Gu et al., 2017) approaches this learning problem with a different formulation, by modeling p ( y t | X, y 1 , . . . y t 1 , X ) instead, where X = { ( X 1 , Y 1 ) . . . 
( XN , YN ) } is the set of sentence pairs where the source sentence is a neighbor of X , retrieved from the training corpus using some similarity metric.", "This relies on a two step approach the retrieval stage finds training instances, ( X i , Y i ) , similar to the source sentence X , and the translation stage generates the target sequence Y given X and X .", "We follow this setup, proposing improvements to both stages in order to enhance the applicability of semi-parametric NMT to more general translation tasks.", "Existing approaches have proposed using off the shelf search engines for the retrieval stage.", "However, our objective differs from traditional information retrieval, since the goal of retrieval in semi-parametric NMT is to find neighbors which might improve translation performance, which might not correlate with maximizing sentence similarity.", "Our baseline strategy relies on a sentence level similarity score, similar to those used for standard information retrieval tasks (Robertson, 2004).", "We compare this against finer-grained n-gram retrieval using the same similarity metric.", "We also propose a dense vector based n-gram retrieval strategy, using representations extracted from a pre-trained NMT model.", "Our baseline approach relies on a simple inverse document frequency (IDF) based similarity score.", "We define the IDF score of any token, t , as f t = log( (cid:107) C (cid:107) n t ) , where (cid:107) C (cid:107) is the number of sentence pairs in training corpus and n t is the number of sentences t occurs in.", "Let any two sentence pairs in the corpus be ( X i , Y i ) and ( X j , Y j ) .", "Then we define the similarity between ( X i , Y i ) and ( X j , Y j ) by, sim ( X i , X j ) = 2 t ( X i X j ) f t t ( X i X j ) f t (1) For every sentence in the training, dev and test corpora, we find the N most similar training sentence pairs and provide them as context to NMT.", "Motivated by phrase based SMT, we retrieve neighbors which have high local, sub-sentence level overlap with the source sentence.", "We adapt our approach to retrieve n-grams instead of sentences.", "We note that the similarity metric defined above for sentences is equally applicable for n-gram retrieval.", "Let X = ( t 1 , ...t T ) be a sentence.", "Then the set of all possible n-grams of X, for a given n , can be defined as S nX = { ( t i , ...t i + n ) 1 i T } (also including padding at the end).", "To reduce the number of n-grams used to represent every sentence, we define the reduced set of n-grams for X to be S nX = { ( t i , ...t i + n ) 1 i T, i mod n 2 = 1 } .", "We represent every sentence by their reduced n-gram set.", "For every n-gram in S nX , we find the closest n-gram in the training set using the IDF similarity defined above.", "For each retrieved n-gram we find the corresponding sentence (In case an n-gram is present in multiple sentences, we choose one randomly).", "The set of neighbors of X is then the set of all sentences in the training corpus that contain an n-gram that maximizes the n-gram similarity with any n-gram in S nX .", "To capture phrases of different lengths we use multiple n-gram widths, n .", "In case a sentence has already been added to the retrieved set, we find the next most similar sentence to avoid having duplicates.", "The number of neighbors retrieved for each source sentence is proportional to its length.", "We also extend our n-gram retrieval strategy with dense vector based n-gram representations.", "The objective behind using a dense vector based 
approach is to incorporate information relevant to the translation task in the retrieval stage.", "We use a pre-trained Transformer Base (Vaswani et al., 2017) encoder trained on WMT to generate sub-word level dense representations for the sentence.", "The representation for each n-gram is now defined to be the mean of the representations of all its constituent sub-words.", "We use the L 2 distance of n-gram representations as the retrieval criterion.", "Note that we use a sub-word level decomposition of sentences for dense retrieval, as compared to word-level for IDF based retrieval (i.e., n-grams are composed of sub-words instead of words).", "Following the approach described for IDF based n-gram retrieval, we use multiple values of n , and remove duplicate neighbors while creating the retrieved set.", "To incorporate the retrieved neighbors, X , within the NMT model, we first encode them using Transformer layers, as described in subsection 2.2.1.", "This encoded memory is then used within the decoder via an attention mechanism, as described in subsection 2.2.2.", "We now describe how each retrieved translation pair, ( X i , Y i ) , is encoded.", "This architecture is illustrated in Figure", "1. We first encode the retrieved source, X i , in a Transformer layer.", "Apart from self-attention, we incorporate information from the encoder representation of the current source, X , using decoder style cross-attention.", "The encoded representations for all targets, { Y i , 1 i N } , are then concatenated along the time axis to form the Conditional Source Target Memory (CSTM).", "We use gated multi-source attention to combine the context from the source encoder representations and the CSTM.", "This is similar to the gated attention employed by (Cao and Xiong, 2018).", "We use a Transformer based decoder that attends to both, the encoder outputs and the CSTM, in every cross-attention layer.", "The rest of the decoder architecture remains unchanged.", "Let the context vectors obtained by applying multi-head attention to the source and memory, with query q t be c st and c mt respectively.", "Then the gated context vector, c t , is given by, g t = ( W gs c s t + W gm c m t ) (2) c t = g t c st + (1 g t ) c mt (3) where g t is the scalar gating variable at time-step t, and W gs and W gm are learned parameters.", "These steps are illustrated in Figure", "2. 
3 Experiments 3.1 Data and Evaluation We compare the performance of a standard Transformer Base model and our semi-parametric NMT approach on an English-French translation task.", "We create a new heterogeneous dataset, constructed from a combination of the WMT training set (36M pairs), the IWSLT bilingual corpus (237k pairs), JRC-Acquis (797k pairs) 2 and OpenSubtitles (33M pairs) 3 .", "For WMT, we use newstest 13 for validation and newstest 14 for test.", "For IWSLT, we use a combination of the test corpora from 2012-14 for validation and test 2015 for eval.", "For OpenSubtitles and JRC-Acquis, we create our own splits for validation and test, since no benchmark split is publicly available.", "After deduping, the JRC-Acquis test and validation set contain 6574 and 5121 sentence pairs respectively.", "The OpenSubtitles test and validation sets contain 3975 and 3488 pairs.", "For multi-domain training, the validation set is a concatenation of the four individual validation sets.", "All datasets are tokenized with the Moses tok-enizer (Koehn et al., 2007) and mixed without any sampling.", "We use a shared vocabulary Sentence-Piece Model (Kudo and Richardson, 2018) for sub-word tokenization, with a vocabulary size of 32000 tokens.", "We train each model for 1M steps, and choose the best checkpoint from the last 5 checkpoints based on validation performance.", "BLEU scores are computed with tokenized true-cased output and references with multi-bleu.perl from Moses.", "For IDF based sentence retrieval, for each sentence in the training, dev and test corpus, we use N = 10 neighbors per example during both, training and evaluation.", "For the N-Gram level retrieval strategies, we used N = 10 neighbors dur-2 From http://opus.nlpl.eu/JRC-Acquis.php 3 From http://opus.nlpl.eu/OpenSubtitles.php Model Data newstest 14 IWSLT 2015 OpenSub JRC-Acquis TransformerBase Multi Domain (MD) 41.92 43.17 26.67 56.19 + CSTM MD + IDF Sentence 40.89 42.35 28.25 65.38 + CSTM MD + IDF N-Gram 41.92 45.09 28.74 66.39 + CSTM MD + Dense N-Gram 42.41 45.02 29.06 66.92 Table 1: Comparison of test translation quality (BLEU) with different retrieval strategies.", "ing training, and neighbors corresponding to all n-grams during decoding.", "This was meant to limit memory requirements and enable the model to fit on P100s during training.", "We used n-gram width, n = { 6 , 10 , 18 } , for both IDF and dense vector based n-gram retrieval approaches.", "For scalabil-ity reasons, we restricted the retrieval set to the in-domain training corpus, i.e. 
neighbors for all train, dev and test sentences in the JRC-Acquis corpus were retrieved from the JRC-Acquis training split, and similarly for the other datasets.", "For our baseline model we use the standard Transformer Base model (Vaswani et al., 2017).", "For the semi-parametric model, all our hyper-parameters for attention (8 attention heads), model dimensions (512) and hidden dimensions (2048), including those used in the CSTM memory are equivalent to Transformer Base.", "The Transformer baselines are trained on 16 GPUs, with the learning rate, warm-up schedule and batching scheme described in (Vaswani et al., 2017).", "The semi-parametric models were trained on 32 GPUs with each replica split over 2 GPUs, one to train the translation model and the other for computing the CSTM.", "We used a conservative learning rate schedule (3, 40K) (Chen et al., 2018) to train the semi-parametric models.", "We apply a dropout rate(Srivastava et al., 2014) of 0.1 to all inputs, residuals, attentions and ReLU connections in both models.", "We use Adam (Kingma and Ba, 2014) to train all models, and apply label smoothing with an uncertainty of 0.1 (Szegedy et al., 2015).", "In addition to the transformer layers, layer normalization (Ba et al., 2016) was applied to the output of the CSTM.", "All models are implemented in Tensorflow-Lingvo (Shen et al., 2019).", "We compare the test performance of a multi-domain Transformer Base and our semi-parametric model using dense vector based n-gram retrieval and CSTM in Table", "1. Apart from significantly improving performance by source Consciousness also is what makes life worth living . ' neighbor source So in the last 10 years and the hope for the future , we 've seen the beginnings of a science of positive psychology , a science of what makes life worth living . ' baseline translation La conscience est aussi ce qui rend la vie valable . ' neighbor target Donc , depuis 10 ans , et , esperons-le , `a l' avenir nous assistons `a l' emergence d' une science de la psychologie positive : une science qui fait en sorte que la vie vaille la peine d' etre vecue .", "more than 10 BLEU points on JRC-Acquis, 2-3 BLEU on OpenSubtitles and IWSLT, we notice a moderate gain of 0.5 BLEU points on WMT 14.", "We compare the performance of all 3 retrieval strategies in Table", "1. The semi-parametric model with sentence level retrieval out-performs the seq2seq model by a huge margin on JRC-Acquis and OpenSubtitles.", "A sample from the JRC-Acquis dataset where the semi-parametric approach improves significantly over the neural approach is included in Table", "2. We notice that there is a lot of overlap between the source sentence and the retrieved source, resulting in the semi-parametric model copying large chunks from the retrieved target.", "However, its performance is noticeably worse on WMT and IWSLT.", "Based on a manual inspection of the retrieved candidates, we attribute these losses to retrieval failures.", "For broad domain datasets like WMT and IWSLT sentence retrieval fails to find good candidates.", "Switching to n-gram level retrieval brings the WMT performance close to the seq2seq approach, and IWSLT performance to 2 BLEU points above the baseline model.", "Representative examples from IWSLT and WMT where n-gram retrieval improves over sentence level retrieval can be seen in Tables 3 and", "4. 
Despite the majority of the retrieved neighbor having nothing in common with the source sentence, n-gram retrieval is able to find neighbors that contain local overlaps.", "Using dense n-gram retrieval allows us to move beyond lexical overlap and retrieve semantically similar n-grams even when the actual tokens are different.", "As a result, dense n-gram retrieval improves performance over all our models on all 4 datasets.", "An illustrative example from WMT is included in Table", "5. source The artist died last Sunday at the age of 71 .' neighbor source A former minister George Thomson passed away last week at the age of 87 .' baseline translation L' artiste est mort dimanche dernier `a l' age de 71 ans", "We report the performance of the various memory ablations in Table", "6. We first remove the retrieved sources, X i , from the CSTM, resulting in an architecture where the encoding of a retrieved target, Y i , only incorporates information from the source X , represented by the row CTM in the table.", "This results in a clear drop in performance on all datasets.", "We ablate further by removing the attention to the original source X , resulting in a slightly smaller drop in performance (represented by TM).", "These experiments indicate that incorporating context from the sources significantly contributes to performance, by allowing the model to distinguish between relevant context and noise.", "Using a semi-parametric formulation for MT opens up the possibility of non-parametric adaptation.", "The biggest advantage of this approach is the possibility of training a single massively customizable model which can be adapted to any new dataset or document at inference time, by just updating the retrieval dataset.", "We evaluate our model's performance on nonparametric adaptation and compare it against a fully fine-tuned model.", "In this setting, we train a baseline model and a dense n-gram based semi-parametric model on the WMT training corpus.", "We only retrieve and train on examples from the WMT corpus during training.", "We use the same hyper-parameters and training approaches used for the multi-domain experiments, as in Section 3.", "The baseline model is then fine-tuned independently on JRC-Acquis, OpenSubtitles and IWSLT.", "The semi-parametric model is adapted non-parametrically to these three datasets, without any parameter updates.", "Adaptation is achieved via the retrieval mechanism while evaluating, we retrieve similar examples from their respective training datasets.", "To quantify headroom, we also fine-tune our semi-parametric model on each of these datasets.", "The results for non-parametric adaptation experiments are documented in Table 7.", "We notice that the non-parametric adaptation strategy significantly out-performs the base model on all 4 datasets.", "More importantly, the we find that our approach is capable of adapting to both, JRC-Acquis and OpenSubtitles, via just the retrieval apparatus, and out-performs the fully fine-tuned model indicating that non-parametric adaptation might be a reasonable approach when adapting to a lot of narrow domains or documents.", "In-domain fine-tuning on top of non-parametric adaptation further improves by 2 BLEU points on all datasets, increasing the gap even further with the seq2seq adapted models.", "Tools incorporating information from individual translation pairs, or translation memories (Lagoudaki; Reinke, 2013), have been widely utilized by human translators in the industry.", "There have been a few efforts attempting to 
combine non-parametric methods with NMT (Gu et al., 2017; Zhang et al., 2018b; Cao and Xiong, 2018), but the key difference of our approach is the introduction of local, sub-sentence level similarity in the retrieval process, via n-gram level retrieval.", "Combined with our architectural improvements, motivated by the target encoder and gated attention from (Cao and Xiong, 2018) and the extended transformer model from (Zhang et al., 2018a), our semi-parametric NMT model is able to outperform purely neural models in broad multi-domain settings.", "Some works have proposed using phrase tables or the outputs of Phrase based MT within NMT (Dahlmann et al., 2017; Zhang et al., 2017; Zhou et al., 2017).", "While this reduces the noise present within the retrieved translation pairs, it requires training and maintaining a separate SMT system which might introduce errors of its own.", "Another class of methods requires fine-tuning the entire NMT model to every instance at inference time, using retrieved examples (Farajian et al., 2017b; Wuebker et al., 2015), but these approaches require running expensive gradient descent steps before every translation.", "Beyond NMT, there have been a few other attempts to incorporate non-parametric approaches into neural generative models (Guu et al., 2018; Hayati et al., 2018; Weston et al., 2018).", "This strong trend towards combining neural generative models with non-parametric methods is an attempt to counter the weaknesses of neural networks, especially their failure to remember information from individual training instances and the diversity problem of seq2seq models (Vijayaku-mar et al., 2016; Jiang and de Rijke, 2018).", "While our approach relies purely on retrieval from the training corpus, there has been quite a lot of work, especially on Question Answering, that attempts to find additional signals to perform the supervised task in the presence of external knowledge sources (Chen et al., 2017; Wang et al., 2018).", "Retrieving information from unsupervised corpora by utilizing multilingual representations (Guo et al., 2018) might be another interesting extension of this work.", "We make two major technical contributions in this work which enable us to improve the quality of semi-parametric NMT on broad domain datasets.", "First, we propose using n-gram retrieval, with standard Inverse Document Frequency similarity and with dense vector representations, that takes into account local sentence similarities that are critical to translation.", "As a result we are able to retrieve useful candidates even for broad domain tasks with little train-test overlap.", "Second, we propose a novel architecture to encode retrieved source-target pairs, allowing the model to distinguish useful information from noise by encoding the retrieved targets in context of the current translation task.", "We demonstrate, for the first time, that semi-parametric methods can beat neural models by sig-nificant margins on multi-domain Machine Translation.", "By successfully training semi-parametric neural models on a broad domain dataset (WMT), we also open the door for non-parametric adaptation, showing huge improvements on new domains without any parameter updates.", "While we constrain this work to retrieved context, our architecture can be utilized to incorporate information from other sources of context, including documents, bilingual dictionaries etc.", "Using dense representations for retrieval also allows extending semi-parametric neural methods to other input modalities, 
including images and speech.", "With this work, we hope to motivate further investigation into semi-parametric neural models for and beyond Neural Machine Translation.", "We would like to thank Naveen Arivazhagan, Macduff Hughes, Dmitry Lepikhin, Mia Chen, Yuan Cao, Ciprian Chelba, Zhifeng Chen, Melvin Johnson and other members of the Google Brain and Google Translate teams for their useful inputs and discussions.", "We would also like to thank the entire Lingvo development team for their foundational contributions to this project." ]
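The IDF-weighted n-gram retrieval used by this system can be illustrated with a minimal sketch. The code below is an illustration, not the authors' implementation: the class and variable names are invented, and a real system would index millions of training pairs and, for the dense variant, run approximate nearest-neighbor search over learned n-gram embeddings instead of exact matching.

    import math
    from collections import Counter, defaultdict

    def ngrams(tokens, n):
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    class NGramRetriever:
        """Index training sentences by their n-grams; score candidates by
        IDF-weighted n-gram overlap with the query (the lexical variant)."""
        def __init__(self, corpus, n=3):
            self.n, self.corpus = n, corpus
            self.index = defaultdict(set)          # n-gram -> sentence ids
            df = Counter()
            for sid, tokens in enumerate(corpus):
                grams = set(ngrams(tokens, n))
                for g in grams:
                    self.index[g].add(sid)
                df.update(grams)
            self.idf = {g: math.log(len(corpus) / c) for g, c in df.items()}

        def retrieve(self, query_tokens, k=2):
            scores = Counter()
            for g in set(ngrams(query_tokens, self.n)):
                for sid in self.index.get(g, ()):
                    scores[sid] += self.idf[g]
            return [self.corpus[sid] for sid, _ in scores.most_common(k)]

    corpus = [s.split() for s in [
        "a former minister passed away last week at the age of 87",
        "the artist opened a new exhibition in paris",
    ]]
    retriever = NGramRetriever(corpus)
    # Shares "at the age" / "the age of" with the first sentence only:
    print(retriever.retrieve("the artist died last sunday at the age of 71".split()))

Replacing the exact n-gram lookup with similarity search over dense n-gram vectors is what allows retrieving semantically similar n-grams even when the surface tokens differ.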
[ "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "abstain", "method", "method", "objective", "objective", "result", "other", "other", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "other", "other", "other", "other", "other", "abstain", "method", "objective", "objective", "result", "objective", "objective", "objective", "method", "abstain", "objective", "other", "other" ]
[ "Word identification from continuous input is typically viewed as a segmentation task.", "Experiments with human adults suggest that familiarity with syntactic structures in their native language also influences word identification in artificial languages; however, the relation between syntactic processing and word identification is yet unclear.", "This work takes one step forward by exploring a radically different approach of word identification, in which segmentation of a continuous input is viewed as a process isomorphic to unsupervised constituency parsing.", "Besides formalizing the approach, this study reports simulations of human experiments with DIORA (Drozdov et al., 2019), a neural unsupervised constituency parser.", "Results show that this model can reproduce human behavior in word identification experiments, suggesting that this is a viable approach to study word identification and its relation to syntactic processing.", "When exposed to speech in an unknown language, humans are faced with the task of finding out what are the basic combinatorial units of the language, such as phonemes, syllables, words and phrases.", "Since speech is continuous, humans need to rely on implicit cues such as statistical information to find out the building blocks of the language.", "One approach that studies which statistical cues can be used by humans in this task is Artificial Grammar Learning (AGL).", "Experiments in AGL are characterized by the use of artificial languages with carefully controlled statistical properties.", "To investigate word identification with this paradigm, participants in a typical AGL experiment are first exposed to a speech-like sample of the artificial language (usually recorded with synthetic voice).", "Then, they participate in a test that has been designed to show whether participants identified the words in the artificial language.", "To formalize theories of how humans identify words in AGL tasks, a range of computational models have been proposed over the last two decades.", "These models have explained a wide arrange of phenomena, using a variety of algorithms such as Bayesian inference (Frank et al., 2010), normative statistics (Swingley, 2005), cognitively inspired processes implementing recognition or memorization (Alhama and Zuidema, 2017; Perruchet and Vinter, 1998), and neural networks (French et al., 2011; Endress and Johnson, 2021).", "There is, however, one phenomenon that has not been addressed in the computational literature in AGL: the fact that participant's knowledge of their native language influences performance in this type of AGL experiments.", "In particular, results seem to be influenced by co-occurrence statistics of sublexical units (Onnis et al., 2005; Siegelman et al., 2018; Elazar et al., 2022), and interestingly, also by the presence of leftor right-branching syntactic structures in the native language, which predict the statistics that subjects use to identify words (Onnis and Thiessen, 2013).", "One likely reason why this has not been the focus of prior models of word identification in AGL is that we are in need a computational framework that can represent this information on the first place.", "While sensitivity to co-occurrences of sublexical patterns could potentially be accounted for with at least some of the existing models (in particular the neural network approaches, which should show similar output to input with similar representations), the influence of prior syntactic knowledge cannot be readily explained with the 
existing approaches, as none of these models incorporate syntactic processing.", "Thus, a preliminary step before modelling the influence of prior knowledge is to develop a modelling framework that can relate word identification in AGL to syntactic processing in the first place 1.", "1 In the field of word identification from naturalistic input, models that allow for some level of hierarchical representations have been proposed (De Marcken, 1995; Johnson and Goldwater, 2009; Lignos, 2012), but these have not been evaluated for unsupervised parsing at the syntactic level.", "This work aims to fill this gap by presenting a radically different account of word identification that is isomorphic to syntactic processing: namely, word segmentation as unsupervised constituency parsing.", "This paper is structured as follows.", "Section 2 reviews the experimental record that this work focuses on.", "The approach of modelling word segmentation as unsupervised constituency parsing is formalized in Section 3.", "Next, Section 4 reports an empirical study using DIORA (Drozdov et al., 2019), an unsupervised neural inside-outside constituency parser.", "The results, reported in Section 5, show that this approach can be effectively used to model human word identification in AGL experiments with human adults.", "Finally, implications of this new perspective on word identification are discussed in Section 6, and directions for future studies are proposed in Section 7.", "A long tradition of AGL experiments has used artificial languages to discover how humans identify words from a continuous speech-like stream.", "Studies show that humans can segment words based on statistics over syllables, such as frequency of co-occurrence (Aslin et al., 1998), transitional probabilities (Saffran et al., 1996a,b; Perruchet and Desaulty, 2008), predictive dependencies between non-adjacent syllables (Peña et al., 2002; Endress and Bonatti, 2007; Frost and Monaghan, 2016), or phonotactic patterns (Onnis et al., 2005).", "Here, the focus is on the two experiments reported in Perruchet and Desaulty (2008) (P&D onwards).", "These experiments showed that humans have the ability to keep track of both forward and backward transitional probabilities (as explained next) and use them for identifying words.", "It is precisely this ability that is susceptible to being influenced by prior syntactic knowledge (Onnis and Thiessen, 2013), motivating the choice to focus on these experiments as a starting point.", "In Experiment 1, the authors used an artificial language consisting of 9 bisyllabic 'words', formed from combinations of 12 different syllables.", "There were two conditions in the experiment: forward and backward.", "In the forward condition, the first syllable of each word uniquely predicted the second syllable (e.g. 
if A and B were syllables and AB was a word, then A was only followed by B).", "In other words, the forward TP ($TP_{fw}$) within words was consistently 1, while it was much lower between words: $TP_{fw}(AB) = p(B \mid A) = \begin{cases} 1 & \text{if } AB \in \{\text{words}\} \\ \ll 1 & \text{otherwise} \end{cases}$", "The backward condition follows exactly the same design, except that it is the second syllable in the word which uniquely predicts the first: $TP_{bw}(AB) = p(A \mid B) = \begin{cases} 1 & \text{if } AB \in \{\text{words}\} \\ \ll 1 & \text{otherwise} \end{cases}$", "The participants were familiarized with a sample of synthesized speech of this language, consisting of a random concatenation of 115 repetitions of each word.", "With this design, the co-occurrence frequency of syllables within a word was 3 times larger than for syllables spanning word boundaries.", "The total duration of the recorded stream was 8 minutes, and there were no pauses or any other acoustic indication that separated the words.", "Thus, the only two cues that participants could use to identify words were the TPs between syllables and the co-occurrence frequency of syllables (which was 3 times higher for syllables within words than for syllables spanning word boundaries).", "After listening to this stream of artificial words, the participants were presented with a 2-Alternative Forced Choice (2AFC) test.", "Each trial in the test consisted of a choice between a word of the language and a 'partword', i.e. a sequence of two syllables that spanned across word boundaries.", "For instance, in the forward condition, a test trial could involve the word CX and the partword XD (see Table 1).", "Participants were instructed to choose the item that seemed more like a word of the artificial language.", "In both conditions, participants chose words more frequently than partwords (with a slight advantage for the backward condition).", "This finding suggests that words can be identified based on statistical properties such as syllable co-occurrence frequency and TPs, in either direction.", "To disentangle the contribution of each cue, in a second experiment, the authors designed an artificial language in which the frequency of words and partwords in the familiarization stream was controlled.", "Thus, the only way to identify words was to keep track of TPs.", "Results of Experiment 2 showed that participants were statistically above chance in both conditions, with a slight advantage for the forward condition (although the difference between directions did not reach significance).", "The authors concluded that human adults can track TPs in both directions, and use them to identify words in a continuous stream.", "The approach presented in this paper is to model the task of word identification from continuous input using the same process used for discovering syntactic constituents.", "A number of adaptations and considerations are required, as described next.", "Constituency parsing is the task of identifying which word spans form constituents, and how those constituents are hierarchically combined into larger constituents to form the correct syntactic tree.", "The nodes that occupy the lowest positions in the tree (considering that the root is the highest node) correspond to the 'tightest' constituents, i.e. 
those that span over words that form cohesive phrases that can be further combined (Onnis and Thiessen, 2013).", "As an example, given the sentence the singer yelled, a constituency parser needs to decide whether a grouping like ((the, singer), yelled) is more likely than (the, (singer, yelled)).", "A successful parser would conclude that (the, singer) forms a cohesive constituent (concretely, a noun phrase), while (singer, yelled) does not.", "More generally, given a sentence S = ABC, where A, B and C are basic units (in this case, words), the parser needs to decide whether to group together AB or BC to form a higher-order unit (a constituent).", "Likewise, a segmentation algorithm presented with a stream S = ABC, where A, B and C are basic units (e.g. syllables or phonemes), also needs to decide whether the most cohesive higher-order unit (in this case, a word) is AB or BC 2.", "2 In practice, ABC could also form a word. For simplicity and consistency with the stimuli in P&D, this work focuses exclusively on bisyllabic words. However, the approach can be extended to words of any length.", "Thus, with this analogy, word segmentation can be cast in terms of a process that is isomorphic to (unsupervised) constituency parsing.", "Participants in the experiments by P&D were exposed to a speech stream formed with a randomized concatenation of the bisyllabic words in the artificial language.", "Similarly, to train a parsing model, a stream of 'syllables' (coded simply using the same symbols as P&D, i.e. A-D, X-Z) is generated with the same procedure described in the original paper.", "Thus, these symbols are the basic units (or vocabulary) for the parser.", "As in most AGL experiments, the stimuli in P&D consisted of one single stream, which was not separated into different sentences.", "However, the training data used for parsing typically consists of a large number of sentences, likely much shorter than the stimuli in AGL experiments.", "Moreover, the adults participating in the experiment are presumably not deriving one single parse during the 8-minute exposure to the artificial language, as this input greatly exceeds the average sentence length of natural language.", "More likely, humans separately processed subsequences of the stimuli, as would be expected given limited attention span and short-term memory.", "This intuition is captured in some models of segmentation in AGL, which operate over subsequences of random length (Perruchet and Vinter, 1998), or over all possible subsequences up to a predefined maximum length (Alhama and Zuidema, 2017).", "Similarly, the approach proposed here is to divide the stream into subsequences ('sentences'), the length of which is determined with a stochastic procedure.", "Unlike previous models, this approach samples the length of the subsequences from a Poisson distribution, with parameters derived from spoken natural language: the mean and standard deviation of the distribution were computed from the monolingual French corpus in OpenSubtitles (Lison and Tiedemann, 2016) 3.", "The corpus consisted of over 100 million sentences, and the mean sentence length was 5.93 (with a standard deviation of 4.55).", "A constraint is set such that the minimum sentence length is 4, and the maximum is 10.", "This prevents too much fragmentation of the input and keeps the distribution centered around the peak.", "Figure 1 shows the distribution of the subsequences derived from the stimuli.", "It must be noted that, by breaking the stream into subsequences, 
boundaries are introduced in an otherwise continuous stream, and it is therefore imperative that these are not consistently aligned with word boundaries, as otherwise this would provide additional information to the model (which was not available to participants in the experiments).", "By using a stochastic procedure, the boundaries are not consistently set either within or between words, and thus no artificial cue is introduced.", "In the experiments reported in P&D, participants responded to a 2AFC test that paired words with partwords, i.e. sequences of syllables that spanned word boundaries.", "A preference for words at group level was taken as an indication of having successfully identified the words of the artificial language.", "From a modelling perspective, what is required to implement the 2AFC choices is some 'score' that conditions the choice for words vs. partwords.", "3 French was the native language of the participants in the experiments of P&D.", "Previous models of segmentation derived scores based on internal counts of the model, i.e. the number of times that a sequence was encountered (Frank et al., 2010) or memorized (Perruchet and Vinter, 1998; Alhama and Zuidema, 2017), or alternatively based on the reconstruction error of these items in an autoencoder (French et al., 2011).", "In this work, a different approach is required, since scores need to be derived from the predicted parse trees.", "The proposal presented here is to assign a score to each test item (word or partword) based on the extent to which the parser identifies this syllable sequence as a cohesive constituent.", "Given that all the tested items are bisyllabic, the most straightforward approach is to quantify cohesiveness as the number of times a word or partword has been placed at the lowest level of the trees predicted from the familiarization stimuli (or, in other words, the number of times that the syllables in a word or partword are siblings; see Table 2 for an example).", "This computation can easily be extended to longer items by considering additional higher nodes in the tree.", "Then, for each item pair in the test, the item that has the largest score is chosen (or randomly determined in the unlikely case of a tie).", "Finally, as in the original experiments, the accuracy is the mean number of choices for words over the total number of test items.", "This section presents simulations with the Deep Inside-Outside Recursive Autoencoder (DIORA, Drozdov et al., 2019), an unsupervised neural constituency parser.", "DIORA is an autoencoder network, trained with a fill-in-the-blank objective: it encodes all the words in a sentence except one in a single vector, and then decodes from this vector, predicting all the words (including the removed one).", "The encoder uses a chart to build a constituency tree, with each cell consisting of a weighted average of all the possible subtrees covering the represented span.", "These subtrees are encoded as independent vectors with their corresponding scores, both of which are computed recursively using a composition function.", "In a recent empirical comparison, DIORA exhibited some of the best results in unsupervised constituency parsing for English, and outperformed all the competing models in most of the experiments in Japanese (Li et al., 2020).", "To reproduce the original experiments in P&D, I trained DIORA with the input data generated according to the procedure described in Section 3.2 4.", "DIORA can be used with different composition functions: a multilayer feed-forward network 
(MLP), a version of the MLP that shares the inside and outside parameters (MLP-shared), and a TreeLSTM (Tai et al., 2015).", "The model can be optimized with either Max-Margin or Cross-Entropy loss (Softmax).", "Simulations are reported with all these variants, with the rest of the hyperparameters fixed to the default values, except: batch size=20, hidden layer size=16, maximum epochs=50 5.", "I trained 30 individual models for each configuration and experimental condition.", "This is roughly the largest number of participants in the experimental conditions in P&D (n=31), and the models only differed in their initial state.", "At the end of training, the models were presented with the stimuli one more time, to produce the final parse trees that would be used for evaluation.", "The evaluation metric described in Section 3.3 was computed for each model, and as in the original paper the mean performance of the 30 models is submitted to a one-sided Student's t-test to find whether the performance is significantly above chance level 6.", "The first experiment reported in Perruchet and Desaulty (2008) used an artificial language in which words could be identified based on the TPs between syllables (either in the forward or the backward direction, depending on the condition).", "Table 3 reports the mean performance (i.e. the mean number of correct choices in the 2AFC test) and the statistical significance when comparing against chance level.", "4 I used a fork of the original model, with a small adjustment to the code that prevented the model from loading pre-trained embeddings (https://github.com/rgalhama/diora).", "5 Performance with the default hyperparameters of DIORA was low for the reported experiments, possibly due to the very reduced amount of data in the current experiment.", "6 The code used for these simulations is available at https://github.com/rgalhama/segmentation_as_unsup_parsing.", "As can be seen, all the model variants are successful in distinguishing words from partwords.", "The mean accuracies of all the models are statistically above chance, and do not differ greatly in terms of model choices (with TreeLSTM-softmax having the best performance).", "Thus, word identification in this condition can be achieved with DIORA, slightly outperforming humans.", "The second experiment used an artificial language with controlled frequency, such that words and partwords would not differ in this regard.", "The results of simulations with these stimuli are reported in Table 4.", "The pattern of results is notably different from Experiment 1: only the model with TreeLSTM combined with Max-Margin reconstruction loss is successful in this task (with the exception of MLP-shared for the backward condition).", "Thus, this variant of DIORA, which was also successful in identifying words in Experiment 1, successfully reproduces the observed behavior of human adults, and is capable of identifying words in continuous input based solely on the transitional probabilities between syllables, regardless of whether these are more reliable in the forward or the backward direction.", "However, the fact that the accuracy dropped for the other model variants is intriguing.", "Since the evaluated performance is the mean over 30 simulations, there could be at least two reasons behind the tendency to perform at chance.", "One would be that most of these simulations do perform individually at chance, and are simply not well suited for distinguishing between words and partwords based on TPs.", "Alternatively, the mean may be around 
chance due to a similar number of well-performing and failing models, as would be the case if the initial state was highly influential on the final performance of the individual models.", "The greater variance found in this experiment (compared to Experiment 1) suggests that this may be the case.", "To find out more, the distribution of scores is graphically reported in Figure 2.", "As can be seen, the distributions are much tighter for Experiment 1, and the spread of the scores in Experiment 2 covers almost the entire range of scores, suggesting that, as suspected, the initial state is highly influential on performance.", "The experimental design in P&D involves the use of a 2AFC test to assess whether the words in the speech sample have been discovered.", "However, the extent to which 2AFC tests reflect the discovery of words has been put into question before (Alhama et al., 2015; Kidd et al., 2020).", "In particular, success in 2AFC can happen even when words are not that clearly distinguished from partwords.", "Thus, to gain further insight into the status of words, Fig. 3 shows the number of times that the best-performing DIORA model (TreeLSTM-Margin), which was successful in the 2AFC test, recognized each test item as a constituent.", "This quantity is known as the 'subjective' frequency of the model (Alhama and Zuidema, 2016).", "As can be seen, the frequencies for words in Experiment 1 are much higher than those of partwords.", "A Student's t-test confirms that counts for words are statistically different from partwords (backward: t(30) = 14.54, p = 1.19e-40; forward: t(30) = 13.40, p = 1.52e-35).", "However, in Experiment 2, the difference between words and partwords is less obvious, and a few partwords are identified more often than some of the words.", "The slight superiority of words was enough for this model to be successful in the 2AFC test.", "A Student's t-test over counts of words vs. partwords does not yield evidence of significant differences (backward: t(30) = 1.62, p = 0.10; forward: t(30) = 1.96, p = 0.05).
", "Together, these results suggest that the 2AFC test reveals only a slight superiority of words over partwords.", "From a computational perspective, word identification from continuous (artificial) input has always been portrayed as a segmentation task, concerned with breaking the continuous stream into combinatorial pieces.", "This work explores a completely different perspective, in which the identification of words is carried out with a syntactic constituency parser, which groups the syllables hierarchically into tree structures.", "The results for Experiments 1 and 2 show that a model like DIORA (with TreeLSTM and Max-Margin loss) can successfully reproduce human behavior in the experiments.", "From a mechanistic perspective, a tentative conclusion is that, when exposed to speech-like input in an unknown language, human adults group syllables that follow statistically coherent patterns, and this grouping is hierarchical, akin to the hierarchical structures attributed to syntax.", "How, then, does the process of identifying words relate to finding the syntactic relations between the identified words?", "[Figure 2: Distribution of accuracies for DIORA model variants on Experiments 1 and 2, for forward and backward TPs.]", "Given the hierarchical nature of the process, a possibility is that one single process builds a bottom-up hierarchy of units, grouping sub-word sequences into words and combining those into syntactic constituents.", "This is consistent with some usage-based theories of language (Kay and Fillmore, 1999; Goldberg, 2006, p.5), which deem all levels of grammatical analysis as homologous.", "This interpretation would explain the results in Onnis and Thiessen (2013), which show that humans identify words consistent with TPs in the forward or backward direction, depending on grammatical patterns in the native language (in particular, the tendency for head-directionality).", "Although DIORA reproduced, to a great extent, the pattern of results reported in P&D, there are some differences.", "To begin with, DIORA is better than humans at identifying words when those are more frequent than partwords.", "This is evidenced by the performance in Experiment 1, as well as by the distribution of frequency counts reported in Section 5.3.", "On the other hand, only one of the variants of DIORA identified words in Experiment 2, when frequency information was removed.", "As shown above, there is large variance in the performance of the models, depending on their initial state.", "This is again consistent with the results observed in Onnis and Thiessen (2013): in the absence of frequency information, humans seem to rely on prior knowledge to guide the discovery of words.", "Nevertheless, to confirm whether the current results speak to the observed behavior in Onnis and Thiessen (2013), simulations using the same stimuli are required.", "[Figure 3: Subjective frequency counts of test items, as identified by the best configuration of DIORA (TreeLSTM+Margin), averaged over 30 individual models.]", "Thus, a prediction from this work is that pre-training the parser with Korean or English could set a bias in the model to discover words based on either $TP_{fw}$ or $TP_{bw}$.", "The fact that DIORA was successful in both English and Japanese (a language that, like Korean, has a tendency for left-branching syntactic structures) bodes well for such an experiment (Li et al., 2020).", "Finally, it must be noted that, to fully understand the role of TPs in word identification, especially in the 
absence of frequency cues, it would be useful to have experimental procedures with stricter tests, as the analyses of subjective frequencies revealed that success in the 2AFC can be achieved with only a slight difference between words and partwords.", "This paper proposes a novel approach for word identification from continuous speech-like input: word segmentation as unsupervised parsing.", "Using this framework with DIORA revealed that word identification in AGL can be explained from the perspective of unsupervised constituency parsing, suggesting that this framework can be effectively used to bridge the gap between models of word identification and syntactic processing.", "This work paves the way for addressing unanswered questions on the influence of syntactic knowledge in subsequent learning; in particular, an immediate next step for future work is to pre-train DIORA with head-first and head-last languages to find whether the model can be biased towards tracking forward or backward TPs.", "The implications of this study are not limited to Cognitive Modelling: the use of techniques from Natural Language Processing to investigate human learning can also be fruitful for this field.", "In particular, one finding is that, unlike humans, DIORA discovers constituents best when those are identifiable by the frequency of co-occurrence of the related units rather than by transitional probabilities.", "Although this model was not designed to mimic human learning, incorporating the inductive biases of humans (i.e. a tendency for tracking forward or backward dependencies depending on the degree of left- or right-branching of the language) may be a fruitful avenue to pursue, as humans are, after all, the best-performing syntactic parsers.", "I am grateful to Phong Le, Jelle (Willem) Zuidema and Afra Alishahi for their helpful comments on a previous version of this article.", "I also thank Phong for insightful discussions." ]
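The sibling-count scoring and 2AFC simulation described in Section 3.3 can be restated as a short sketch. It assumes binary parse trees over syllable symbols; the nested-tuple tree representation and function names are illustrative, not DIORA's actual output format.

    from collections import Counter
    import random

    # Binary trees as nested tuples of syllable symbols, e.g.
    # (('A', 'B'), ('X', ('C', 'D'))) places AB and CD as sibling pairs.

    def sibling_pairs(tree, counts):
        """Count every pair of syllables that are direct siblings."""
        if isinstance(tree, str):
            return
        left, right = tree
        if isinstance(left, str) and isinstance(right, str):
            counts[left + right] += 1
        sibling_pairs(left, counts)
        sibling_pairs(right, counts)

    def two_afc_accuracy(trees, test_pairs, rng=random.Random(0)):
        """test_pairs: list of (word, partword); returns the fraction of
        trials where the word's sibling count beats the partword's
        (ties broken randomly, as in the scoring procedure described)."""
        counts = Counter()
        for t in trees:
            sibling_pairs(t, counts)
        correct = 0
        for word, partword in test_pairs:
            cw, cp = counts[word], counts[partword]
            correct += 1 if cw > cp else (rng.random() < 0.5 if cw == cp else 0)
        return correct / len(test_pairs)

    trees = [(('A', 'B'), ('X', ('C', 'D'))), ((('A', 'B'), 'Z'), ('C', 'D'))]
    print(two_afc_accuracy(trees, [('AB', 'BX'), ('CD', 'DZ')]))  # -> 1.0

Extending the score to longer items would amount to counting matches against higher nodes of the tree rather than only sibling leaves, as the paper notes.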
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "abstain", "abstain", "other", "other" ]
[ "We present a robust neural abstractive summarization system for cross-lingual summarization.", "We construct summarization corpora for documents automatically translated from three low-resource languages, Somali, Swahili, and Tagalog, using machine translation and the New York Times summarization corpus.", "We train three language-specific abstractive summarizers and evaluate on documents originally written in the source languages, as well as on a fourth, unseen language: Arabic.", "Our systems achieve significantly higher fluency than a standard copy-attention summarizer on automatically translated input documents, as well as comparable content selection.", "Cross-lingual summarization is a little-explored task combining the difficulties of automatic summarization with those of machine translation.", "The goal is to summarize in one language a document available only in another language.", "Wan et al. (2010) describe two approaches: summarize then translate, and translate then summarize.", "They argue that summarize-then-translate is preferable to avoid both the computational expense of translating more sentences and sentence extraction errors caused by incorrect translations.", "However, summarize-then-translate can only be used when the source language is high-resource (Wan et al. used English as the source, for exam-ple); if the source language is one of the thousands of low-resource languages in the world, there are no summarization corpora available.", "Language-independent techniques, such as TextRank (Mihal-cea), might be used, but there may be serious difficulties in their application, such as morphologically rich languages that render token-based similarity measures useless.", "In such a case, translate-then-summarize is the only possible approach.", "We address this scenario through the development of a neural abstractive summarization system that fluently summarizes potentially disfluent, automatically-translated documents by generating short, simple phrases to replace awkward input phrases resulting from difficult to translate source documents.", "Our novel combination of existing building block systems results in a summarization solution that can be easily applied to new low-resource languages.", "We use machine translation on the New York Times annotated corpus of docu-ment/summary pairs to create summarization corpora for documents automatically translated from three low-resource languages, Somali, Swahili, and Tagalog.", "We use these corpora to train cross-lingual summarizers for these source languages, with English as the target.", "We also evaluate our systems on a fourth source language, Arabic.", "Our experiments show that our abstractive summarizers produce more fluent English summaries from automatically-translated documents, and that this improvement generalizes across source languages.", "Our main contributions are as follows: We create summarization corpora for automatically translated Somali, Swahili, and Tagalog documents: noisy English input documents paired with clean English reference summaries.", "We present a method for producing cross-lingual summarization systems for low resource languages where no summarization corpora currently exist, providing a potential summarization solution for thousands of such languages.", "Our novel approach of training on noisy input with clean references outperforms a standard copy-attention abstractive summarizer on real-world Somali, Swahili, and Tagalog documents.", "Our evaluation on Arabic documents demonstrates that 
our robust abstractive summarizers generalize to documents automatically translated from a previously unseen source language.", "[Figure 1: An example noisy English article produced by round-trip machine translation (a letter to the editor): 'in the editor: why did president clinton continue to praise a program on welfare-to-work that failed in half of those assigned? ... bad news, mr. president. the charity of the community will not help everyone who will come to us for help. glenn classic valley park, mo.']", "Cross-Lingual Summarization.", "Orasan and Chiorean (2008) extractively summarized Romanian news articles and automatically translated the summaries into English.", "Their experiments showed that the poor quality of the translations turned reasonable Romanian summaries into barely legible English ones.", "The most extensively investigated source-target language pair is English-to-Chinese.", "Wan et al. (2010) used a predicted translation quality score as a feature in extracting sentences for their summaries.", "Wan (2011) translated the English sentences into Chinese and represented sentences in the extraction stage by both the original English and the Chinese translation.", "Yao et al. (2015) scored aligned phrases from the original English documents and the Chinese translations to perform sentence extraction and compression based on both salience and translation quality.", "Zhang et al. (2016) parsed the original English documents into predicate-argument structures that were aligned with their Chinese translations and generated the summary from these structures.", "Finally, Wan et al. (2018) experimented with extracting and ranking multiple candidate summaries.", "Abstractive Summarization.", "Rush et al. (2015) presented the first neural abstractive summarization model, a convolutional neural network encoder and feed-forward network decoder with attention, which learned to generate news headlines from the lead sentences of their articles; Chopra et al. (2016) extended their work using a recurrent network for the decoder.", "Nallapati et al. (2016) improved on the RNN encoder-decoder-with-attention model by adding linguistically motivated part-of-speech and named-entity-type embeddings, as well as a pointer network (Vinyals et al., 2015) to allow copying of rare or out-of-vocabulary words from the input document.", "In this work, we use See et al.'s (2017) definition of the pointer-generator network, which adds a coverage vector and coverage penalty to prevent repetition in generated words.", "The New York Times annotated corpus (Sandhaus, 2008) was first used for neural abstractive summarization by Paulus et al. (2018), who used attention over the decoder's previous predictions to both prevent repetition and allow for coherent longer summaries.", "Celikyilmaz et al. (2018) also used the New York Times corpus, training multiple collaborating encoders to encode long documents one paragraph at a time.", "We use the New York Times (hereafter NYT) summarization corpus (Sandhaus, 2008), consisting of 650k articles and their human-written abstractive summaries.", "We follow the train/test/validation split and preprocessing steps used by Paulus et al. 
(2018), with one exception: we do not anonymize named entities.", "We first translate 112k articles from the NYT corpus into each of our three low-resource languages, Somali, Swahili, and Tagalog, using neural machine translation.", "Of the 112k articles, 100k are taken from the training set, 6k from validation, and 6k from test.", "We then translate the articles back into noisy English, again using neural machine translation.", "Figure 1 shows an example noisy English article.", "We pair each noisy English article with the clean English reference summary corresponding to the clean English article that generated it.", "Thus our abstractive summarization model learns to take a bad English input document with translation errors and disfluencies and produce a good English summary.", "For simplicity, we refer to the corpus created by translating into Somali and back as the Somali NYT corpus, and similarly with Swahili and Tagalog, but all three corpora are in (noisy) English, not Somali, Swahili, or Tagalog.", "We use neural machine translation systems built on the Marian framework (Junczys-Dowmunt et al., 2018) to translate the NYT corpus into Somali, Swahili, and Tagalog, and back to English.", "The systems were developed at the University of Edinburgh and were trained on a mix of clean, human-curated parallel data (about 23k sentences for Somali and Swahili and 51k for Tagalog); noisy, web-crawled parallel data (Somali only, about 354k sentences); and synthetic, backtranslated parallel data created from monolingual sources including news articles, the Common Crawl, and Wikipedia (250-600k sentences).", "Table 1 shows the performance of the machine translation systems for each of the three languages on held-out test sets of 500 sentences taken from the clean, human-curated parallel data.", "For our abstractive summarizers (hereafter abstractors), we implemented See et al.'s (2017) pointer-generator network in PyTorch (Paszke et al., 2017).", "We pre-train for 12 epochs on the unmodified NYT corpus to obtain a baseline system.", "Table 2 shows the performance of this baseline on the unmodified NYT test set; our baseline underperforms the more complex systems of Paulus et al. (2018) and Celikyilmaz et al. (2018), but we are more interested in the improvements our fluency-focused approach makes over this baseline than in the baseline's performance compared to state-of-the-art systems.", "We use each of the three noisy English corpora to train the baseline system for another 8 epochs, producing three language-specific abstractors.", "We also train a fourth, mixed-language abstractor using 100k articles randomly selected from the Somali, Swahili, and Tagalog training sets, evenly split among the three.", "Table 3 shows the performance of our abstractors on the Somali, Swahili, and Tagalog NYT test sets.", "Differences among the language-specific systems are not statistically significant, and the more general mixed model achieved the best scores 1.", "However, we found that abstractors trained solely on one language and tested on another significantly (p < 0.05)
 underperformed the mixed model, which was trained on all three languages, suggesting that training on some same-language data is still important.", "1 These results are shown in Appendix A, along with all combinations of the language-specific models on the three languages.", "We also trained a bigram language model on the entire set of NYT reference summaries and calculated the average perplexity of our abstractors' output as a proxy for fluency (Table 4).", "[Figure 2: A Swahili weblog entry and its summaries. Document: 'mange kimambi i pray for the parliamentary seat for kinondoni constituency for ticket of ccm. not special seats' kinondoni without drugs is possible ... reduce my profile in my blog understand why i have decided to vie for kinondoni constituency. you will understand more.' NYT-base: 'mange kimambi, who pray for parliamentary seat for kinondoni constituency for ticket of ccm in 0, is on blog, and not special seats' kinondoni without drugs.' Abs-mix: 'mange kimambi, who pray for parliamentary seat for kinondoni constituency for ticket of ccm, comments on his plans to vie for kinondoni' without drugs.']", "We see that Somali is the most difficult overall, but all three language-specific systems and the mixed model produce more fluent English across source languages than does the base model.", "We perform a human evaluation on 20 Somali, 20 Swahili, and 20 Tagalog weblog entries that we automatically translate into English using the same neural machine translation systems we used to create our noisy NYT corpora.", "Unlike our NYT data, which we translated from English into the low-resource languages, these weblogs are real-world Somali, Swahili, and Tagalog documents; this evaluation demonstrates the performance of our system in a real use case.", "Figure 2 shows a Swahili weblog entry and its summaries 2.", "This example shows the advantage of our approach: unlike a machine translation system, which must translate every part of its input, our abstractor is able to delete most of the long, rambling, and disfluent blog entry, instead summing it up fluently with the generated phrase 'comments on his plans' and the repurposed phrase 'to vie for'.", "2 All four abstractors produced very similar summaries.", "Judges were shown a translated document and a summary and asked to rate the content and fluency of the summary on a scale of 1-3 (Table 5).", "Our human judges rated our abstractors higher in both fluency and content, and we see again that while the language-specific systems are more fluent on their own languages than are the language-specific systems for the other languages, the mixed model still performs the best.", "We also see that, while our improvement in content is more modest, our improvement in fluency, the goal of this work, is significant.", "The judges achieved substantial agreement (Fleiss's κ = 0.72).
", "Finally, we evaluate our system on a new language: Arabic.", "We use the DUC 2004 Task 3 test set, which consists of real-world Arabic news articles translated into English, each paired with four human-written summaries.", "Table 6 shows the performance of our abstractors on the Arabic data, demonstrating their ability to generalize and improve the fluency of input documents automatically translated from a previously unseen language, yielding a significant improvement in ROUGE.", "Compared to the 28 DUC 2004 systems, our performance would have ranked 1st on summarizing the machine-translated documents; despite our use of these lower-quality, automatically-translated documents, we performed extremely well even in comparison with the DUC 2004 systems on high-quality, human-translated documents: we would have ranked 1st, 4th, and 5th on ROUGE-1, -2, and -L, respectively.", "[Figure 3: An Arabic example. Document: 'washington 10-23 (afp) was signed by benjamin netanyahu and yasser arafat on friday at the white house agreed on the israeli military withdrawal from the west bank in return for palestinian additional security guarantees.']", "Figure 3 compares the baseline system and our abstractors on the Arabic data 3.", "We find that the NYT-base model tends to copy heavily from the beginning of its input documents.", "Since it was trained entirely on clean English news articles, it is understandable that it tries to copy the lead sentence, but in both examples, it copies errors: the confusing run-on sentence 'not special seats' kinondoni without drugs is possible' (shown in yellow in Figure 2) and the phrase 'signed by' (shown in green in Figure 3), whose subject is missing.", "In contrast, our abstractors are able to correctly identify the important information in the input documents and produce fluent summaries presenting this information.", "In Figure 3, Abs-mix deletes the unnecessary 'washington 10-23' and produces the verb 'agree' in the plural form, agreeing with its plural subject.", "More dramatically, in Figure 2, Abs-mix identifies 'kinondoni without drugs' as Mange Kimambi's campaign platform and succinctly summarizes this using both the purely generated phrase 'comments on his plans' and the repurposed but still fluent and correct 'to vie for'.
", "3 All four abstractor summaries were identical.", "The main limitation of our approach is that it assumes the existence of a machine translation system for the source language.", "Although our abstractors are able to handle errorful, disfluent translations, for extremely low-resource languages there may be no translations of any kind available; in such a case, another approach, such as cross-lingual word embeddings, is necessary.", "We have presented a robust abstractive summarization system for the task of cross-lingual summarization, taking advantage of an abstractive system's ability to delete difficult-to-translate phrases and generate new text to use instead.", "Our straightforward method allows us to produce summarization systems for low-resource languages where no summarization corpora are currently available, providing a potential summarization solution for thousands of such languages.", "Our experiments demonstrate that, by using our novel approach of training on noisy English documents and clean English reference summaries, the model learns to produce fluent summaries from disfluent inputs.", "Further, we have shown that, while training a system for a specific source language gives strong performance, the abstractive fluency of these systems generalizes to other source languages.", "This research is based upon work supported in part by the National Science Foundation (NSF), under Grant No. IIS-1422863, and the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract #FA8650-17-C-9117.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF, ODNI, IARPA, or the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein." ]
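The corpus construction described in this paper reduces to a simple round-trip translation pipeline. The sketch below shows that pipeline with hypothetical stand-ins for the Marian translation systems; only the pairing logic (noisy article, clean summary) reflects what the paper describes.

    from typing import Callable, Dict, List

    def build_noisy_corpus(
        articles: List[Dict[str, str]],      # {"article": ..., "summary": ...}
        to_lrl: Callable[[str], str],        # e.g. English -> Swahili
        to_eng: Callable[[str], str],        # e.g. Swahili -> English
    ) -> List[Dict[str, str]]:
        corpus = []
        for ex in articles:
            noisy = to_eng(to_lrl(ex["article"]))     # round-trip translation
            corpus.append({"article": noisy,          # noisy English input
                           "summary": ex["summary"]}) # clean English reference
        return corpus

    # Toy stand-ins so the sketch runs end to end:
    fake_to_swahili = lambda s: f"<sw>{s}</sw>"
    fake_to_english = lambda s: s.replace("<sw>", "").replace("</sw>", "").lower()
    data = [{"article": "The welfare program failed for half of those assigned.",
             "summary": "Welfare program criticized."}]
    print(build_noisy_corpus(data, fake_to_swahili, fake_to_english))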
[ "method", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "method", "method", "result", "objective", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "other", "abstain", "result", "abstain", "result", "result", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "objective", "result", "other", "other", "other", "other" ]
[ "Current Open-Domain Question Answering (ODQA) models typically include a retrieving module and a reading module, where the retriever selects potentially relevant passages from open-source documents for a given question, and the reader produces an answer based on the retrieved passages.", "The recently proposed Fusion-in-Decoder (FiD) framework is a representative example, which is built on top of a dense passage retriever and a generative reader, achieving the state-of-the-art performance.", "In this paper we further improve the FiD approach by introducing a knowledge-enhanced version, namely KG-FiD.", "Our new model uses a knowledge graph to establish the structural relationship among the retrieved passages, and a graph neural network (GNN) to re-rank the passages and select only a top few for further processing.", "Our experiments on common ODQA benchmark datasets (Natural Questions and TriviaQA) demonstrate that KG-FiD can achieve comparable or better performance in answer prediction than FiD, with less than 40% of the computation cost.", "Open-Domain Question Answering (ODQA) is the task of answering natural language questions in open domains.", "A successful ODQA model relies on effective acquisition of world knowledge.", "A popular line of work treats a large collection of open-domain documents (such as Wikipedia articles) as the knowledge source, and design a ODQA system that consists of a retrieving module and a reading module .", "The retriever pulls out a small set of potentially relevant passages from the open-source documents for a given question, and the reader produces an answer based on the retrieved passages (Karpukhin et al., 2020; Guu et al., 2020; Izacard and Grave, 2020).", "An earlier example of this kind is DrQA (Chen et al., 2017), which used Work done during internship at Microsoft.", "an traditional search engine based on the bag of words (BoW) document representation with TF-IDF term weighting, and a neural reader for extracting candidate answers for each query based on the dense embedding of the retrieved passages.", "With the successful development of Pre-trained Language Models (PLMs) in neural network research, dense embedding based passage retrieval (DPR) models (Karpukhin et al., 2020; Qu et al., 2021) have shown superior performance over BoW/TF-IDF based retrieval models due to utilization of contextualized word embedding in DPR, and generative QA readers (Lewis et al., 2020; Roberts et al., 2020) usually outperform extraction based readers (Devlin et al., 2019; Guu et al., 2020) due to the capability of the former in capturing lexical variants with a richer flexibility.", "The recently proposed Fusion-in-Decoder (FiD) model (Izacard and Grave, 2021) is representative of those methods with a DPR retriever and a generative reader, achieving the state-of-the-art results on ODQA evaluation benchmarks.", "FiD also significantly improved the scalability of the system over previous generative methods by encoding the retrieved passages independently instead of encoding the concatenation of all retrieved passages (which was typical in previous methods).", "Inspired by the success of FiD, this paper aims further improvements of the state of the art of ODQA in the paradigm with a DPR retriever and a generative reader.", "Specifically, we point out two potential weaknesses or limitations of FiD as the rooms for improvements, and we propose a novel solution namely KG-FiD to address these issues with FiD.", "The two issues are: Issue", "1. 
The independence assumption among passages is not justified.", "Notice that both the DPR retriever and the generative reader of FiD perform independent encoding of the retrieved passages, which means that they cannot leverage the semantic relationship among passages for passage embedding and answer generation even if such relational knowledge is available.", "But we know that rich semantic connections between passages often provide clues for better answering questions (Min et al., 2019).", "Issue 2: Efficiency bottleneck.", "For each input question, the FiD generative reader receives about 100 passages from the DPR module, with a relatively high computational cost.", "For example, the inference per question takes more than 6 trillion floating-point operations.", "Simply reducing the number of retrieved passages sent to the reader will not be a good solution, as it will significantly decrease the model performance (Izacard and Grave, 2021).", "How to overcome this computational inefficiency is a challenging question for the success of FiD in realistic ODQA settings.", "We propose to address both of the above issues with FiD by leveraging an existing knowledge graph (KG) to establish relational dependencies among retrieved passages, and employing Graph Neural Networks (GNNs) to re-rank and prune retrieved passages for each query.", "We name our new approach KG-FiD.", "Specifically, KG-FiD employs a two-stage passage reranking by applying GNNs to model the structural and semantic information of passages.", "Both stages rerank the input passages, and only a few top-reranked passages are fed into subsequent modules.", "The first stage reranks passages returned by the retriever, where we use the passage embeddings generated by DPR as the initial GNN node representations.", "This allows reranking a much larger set of initial candidate passages to enhance the coverage of answers.", "The second stage performs joint passage reranking and answer generation, where the node embeddings are initialized by the embeddings of passage-question pairs output from the reader encoder.", "This stage operates on a smaller candidate set but aims for more accurate reranking and passage pruning.", "To improve efficiency, in the second-stage reranking, our GNN model adopts representations from an intermediate layer of the reader encoder instead of the final layer to initialize passage node embeddings.", "Then only a few top-reranked passages are passed into the higher layers of the encoder and the decoder for answer generation, while the other passages are not processed further.", "This is coupled with joint training of passage reranking and answer generation.", "As shown in Section 4.3, these strategies significantly reduce the computation cost while still maintaining good QA performance.", "Our experiments on ODQA benchmark datasets Natural Questions and TriviaQA demonstrate that KG-FiD can achieve comparable or better performance in answer prediction than FiD, with only 40% of the computation cost of FiD.", "ODQA with text corpus.", "ODQA usually assumes that a large external knowledge source is accessible and can be leveraged to help answer prediction.", "For example, previous works (Chen et al., 2017; Karpukhin et al., 2020; Izacard and Grave, 2021) mainly use Wikipedia as the knowledge source, which contains millions of text passages.", "In this case, current ODQA models mainly contain a retriever to select related passages and a reader to generate the answer.", "Thus, the follow-up works mainly aim to: (1) 
"(1) Improve the retriever: moving from sparse retrieval based on TF-IDF or BM25 (Chen et al., 2017; Yang et al., 2019) to dense retrieval (Karpukhin et al., 2020) based on contextualized embeddings generated by pre-trained language models (PLMs).", "Moreover, further improvements have been proposed, such as better training strategies (Qu et al., 2021), reranking of retrieved passages (Wang et al., 2018; Nogueira and Cho, 2019; Mao et al., 2021), and knowledge distillation from the reader to the retriever (Izacard and Grave, 2020).", "(2) Improve the reader: changing from Recurrent Neural Networks (Chen et al., 2017) to PLMs, such as the extractive reader BERT (Karpukhin et al., 2020; Iyer et al., 2021; Guu et al., 2020) and the generative readers BART and T5 (Izacard and Grave, 2021; Lewis et al., 2020).", "Besides, some works (Guu et al., 2020; Lewis et al., 2020; Sachan et al., 2021) have shown that additional unsupervised pre-training on retrieval-related language modeling tasks can further improve ODQA performance.", "However, none of these methods modeled the relationships among different passages.", "ODQA with a knowledge graph: Besides unstructured text corpora, world knowledge also exists in knowledge graphs (KGs), which represent entities and relations in a structural way and have been used in a variety of NLP tasks (Xu et al., 2021b; Yu et al., 2020; Xu et al., 2021a).", "Some works (Berant et al., 2013; Sun et al., 2018, 2019; Xiong et al., 2019) restrict the answer to be entities in the knowledge graph, while our work focuses on the more general ODQA setting, where the answer can be any words or phrases.", "Under this setting, some recent efforts have been made to leverage knowledge graphs for ODQA (Min et al., 2019; Asai et al., 2020; Zhou et al., 2020).", "For example, UniK-QA (Oguz et al., 2020) transforms KG triplets into text sentences and combines them into the text corpus, which loses the structural information of the KG.", "Other works use a KG to build relationships among passages, similar to ours.", "KAQA (Zhou et al., 2020) uses a passage graph to propagate passage retrieval scores and answer span scores.", "Graph-Retriever (Min et al., 2019) iteratively retrieves passages based on the relationships between passages, and also uses the passage graph to improve passage selection in an extractive reader.", "However, applying KGs to improve the recent, more advanced FiD framework remains unstudied.", "In the following sections, we first introduce how to apply a KG to build a graph structure among the retrieved passages (Section 3.1).", "Then we show how we adopt graph-based stage-1 reranking with the DPR retriever to improve passage retrieval (Section 3.2).", "Next, we introduce joint stage-2 reranking and answer generation in the reading module (Section 3.3).", "Finally, we illustrate the efficiency improvement obtained by using an intermediate-layer representation for stage-2 reranking (Section 3.4).", "The overview of our framework is illustrated in Figure 1."
"3.1 Construct Passage Graph using KG: The intuition behind using a KG is that there exist structural relationships among the retrieved passages which can be captured by the KG.", "Similar to Min et al. (2019), we construct a passage graph whose vertices are passages of text and whose edges represent relationships derived from the external KG, denoted $\mathrm{KG} = \{(e_h, r, e_t)\}$, where $e_h$, $r$, and $e_t$ are the head entity, relation, and tail entity of a triplet, respectively.", "First, we formalize the definition of a passage.", "Following previous works (Wang et al., 2019; Karpukhin et al., 2020), each article in the text corpus is split into multiple disjoint text blocks of 100 words called passages, which serve as the basic retrieval units.", "We assume there is a one-to-one mapping between the KG entities and the articles in the text corpus.", "Specifically, we use English Wikipedia as the text corpus and English Wikidata (Vrandečić and Krötzsch, 2014) as the knowledge graph, since there exists an alignment between the two resources (entity recognition and linking can be used if there is no such alignment).", "For example, the article titled New York Yankees contains passages such as The New York Yankees are an American professional baseball team ....", "The article also corresponds to a KG entity with the same name, New York Yankees.", "We then define the mapping function $e = f(p)$, where the KG entity $e$ corresponds to the article to which passage $p$ belongs.", "Note that one passage can only be mapped to one entity, but multiple passages can be mapped to the same entity.", "The final passage graph is defined as $G = \{(p_i, p_j)\}$, where passages $p_i$ and $p_j$ are connected if and only if their mapped entities are directly connected in the KG, i.e., $(f(p_i), r, f(p_j)) \in \mathrm{KG}$.", "Since the total number of passages is very large, e.g., more than 20M in Wikipedia, constructing and maintaining a graph over all the passages is inefficient and memory-consuming.", "Thus, we build a passage graph on the fly for each question, based on the retrieved passages.", "DPR Retriever: Our framework applies DPR (Karpukhin et al., 2020) as the retriever, which uses a BERT-based passage encoder to encode all the $N$ passages in the text corpus $\{p_1, p_2, \ldots, p_N\}$.", "Suppose all the passage embeddings are fixed and stored in memory as $M \in \mathbb{R}^{N \times D}$, where $D$ is the hidden dimension: $M_i = \mathrm{BERT}(p_i)$ for $i \in \{1, 2, \ldots, N\}$ (1).", "For an input question $q$, DPR applies another BERT-based question encoder to obtain its representation $Q$; it then builds on FAISS (Johnson et al., 2019) to conduct fast dot-product similarity search between $Q$ and $M$, and returns the $N_1$ ($N_1 \ll N$) passages with the highest similarity scores.", "Stage-1 Reranking: The DPR retriever returns $N_1$ passages which are independently retrieved based on the similarity between the question and each passage, without considering inter-passage relationships.", "Thus, instead of directly retrieving $N_1$ passages for the reader, we propose to first retrieve $N_0$ ($N_0 > N_1$) passages, then rerank them and output the top-$N_1$ reranked passages to the reader.", "Following Section 3.1, we construct a graph among the $N_0$ retrieved passages, denoted $G_0$.", "We aim to rerank the retrieved passages based on both their structural information and their textual semantic information.", "To represent the semantic information of passages, one could use another pre-trained language model to encode the passage texts, but this would not only introduce many additional model parameters but also incur a heavy computational cost, as $N_0$ can be large."
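As a concrete illustration of the graph construction above, the following is a minimal sketch of building the query-specific passage graph; the mapping table passage_to_entity (the function f) and the edge set kg_edges are hypothetical inputs chosen for this sketch, not names from the authors' released code.

```python
# Minimal sketch of query-specific passage-graph construction (Section 3.1).
from itertools import combinations

def build_passage_graph(retrieved_ids, passage_to_entity, kg_edges):
    """Connect two retrieved passages iff their mapped entities are
    directly connected in the KG (in either direction)."""
    edges = set()
    for pi, pj in combinations(retrieved_ids, 2):
        ei, ej = passage_to_entity[pi], passage_to_entity[pj]
        if (ei, ej) in kg_edges or (ej, ei) in kg_edges:
            edges.add((pi, pj))
            edges.add((pj, pi))  # treat the graph as undirected
    return edges

# Toy example: passages 0 and 1 map to linked entities, passage 2 does not.
kg_edges = {("New_York_Yankees", "Yankee_Stadium")}
passage_to_entity = {0: "New_York_Yankees", 1: "Yankee_Stadium", 2: "Baseball"}
print(build_passage_graph([0, 1, 2], passage_to_entity, kg_edges))
# {(0, 1), (1, 0)}  (set order may vary)
```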
"To avoid both the additional memory and computation costs, we propose to reuse the offline passage embeddings $M$ generated by the DPR retriever in Equation 1 as the initial node representations: $E^{(0)}_i = M_{r_i}$, where $\{r_i \mid i \in \{1, 2, \ldots, N_0\}\}$ is the set of retrieved passage indices.", "We then employ a graph attention network (GAT) (Veličković et al., 2018) with $L_g$ layers as the GNN model to update the representation of each node based on the passage graph and the initial representations.", "The $l$-th layer of the GNN model updates the embedding of node $i$ as follows: $E^{(l)}_i = h\big(E^{(l-1)}_i, \{E^{(l-1)}_j\}_{(i,j) \in G_0}\big)$ (2), where $h$ is usually a non-linear learnable function which aggregates the embeddings of the node itself and its neighbor nodes.", "The reranking score for each passage $p_{r_i}$ is calculated as $s^{\text{stage-1}}_i = Q^T E^{(L_g)}_i$, where $Q$ is the question embedding, also generated by the DPR retriever.", "We then sort the retrieved passages by the reranking scores and input the top-$N_1$ passages to the reader.", "The training loss of passage ranking for each question is: $\mathcal{L}^{\text{stage-1}}_r = -\sum_{i=1}^{N_0} y_i \log\Big(\frac{\exp(s^{\text{stage-1}}_i)}{\sum_{j=1}^{N_0}\exp(s^{\text{stage-1}}_j)}\Big)$ (3), where $y_i = 1$ if $p_{r_i}$ is a gold passage that contains the answer, and 0 otherwise (we follow Karpukhin et al. (2020) on the definition of gold passages).", "As we only add a lightweight graph neural network and reuse the pre-computed and static DPR passage embeddings, our reranking module can process a large number of candidate passages efficiently for each question.", "In experiments, we set $N_0 = 1000$ and $N_1 = 100$."
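A minimal sketch of the stage-1 reranker, assuming PyTorch. Frozen DPR passage embeddings serve as node features and the score is their dot product with the DPR question embedding; to keep the sketch dependency-free, the GAT of Eq. (2) is replaced by a simple mean-aggregation GNN, so this illustrates the idea rather than the authors' exact model.

```python
import torch
import torch.nn.functional as F

class SimpleGNNReranker(torch.nn.Module):
    """Stage-1 sketch: refine frozen DPR passage embeddings with a
    lightweight GNN (a mean-aggregation stand-in for the GAT)."""
    def __init__(self, dim, num_layers=3):
        super().__init__()
        self.layers = torch.nn.ModuleList(
            torch.nn.Linear(2 * dim, dim) for _ in range(num_layers))

    def forward(self, E, adj):       # E: (N0, D), adj: (N0, N0) 0/1 matrix
        for layer in self.layers:
            deg = adj.sum(-1, keepdim=True).clamp(min=1)
            neigh = adj @ E / deg    # mean over neighbors
            E = torch.relu(layer(torch.cat([E, neigh], dim=-1)))
        return E

def stage1_loss(scores, gold_mask):
    # Listwise softmax cross entropy of Eq. (3), averaged over gold passages.
    log_probs = F.log_softmax(scores, dim=-1)
    return -(gold_mask * log_probs).sum() / gold_mask.sum().clamp(min=1)

D, N0 = 8, 5
E = torch.randn(N0, D)               # frozen DPR embeddings M_{r_i}
Q = torch.randn(D)                   # DPR question embedding
adj = (torch.rand(N0, N0) > 0.5).float()
scores = SimpleGNNReranker(D)(E, adj) @ Q   # s_i = Q^T E_i^{(L_g)}
gold = torch.zeros(N0); gold[2] = 1.0
print(scores.topk(3).indices, stage1_loss(scores, gold))
```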
"In this section, we briefly introduce the vanilla FiD reading module before illustrating our joint stage-2 reranking and answer generation, which takes the top-$N_1$ retrieved passages $\{p_{a_1}, p_{a_2}, \ldots, p_{a_{N_1}}\}$ as input.", "Vanilla FiD Reading Module: We denote the hidden dimension as $H$ and the number of encoder layers and decoder layers as $L$.", "The FiD reader first separately encodes each passage $p_{a_i}$ concatenated with the question $q$: $P^{(0)}_i = \text{T5-Embed}(q + p_{a_i}) \in \mathbb{R}^{T_p \times H}$ (4), $P^{(l)}_i = \text{T5-Encoder}_l(P^{(l-1)}_i) \in \mathbb{R}^{T_p \times H}$ (5), where $T_p$ is the sequence length of a passage concatenated with the question.", "$\text{T5-Embed}(\cdot)$ is the initial embedding layer of the T5 model (Raffel et al., 2019) and $\text{T5-Encoder}_l(\cdot)$ is the $l$-th layer of its encoder module.", "Then the token embeddings of all passages output from the last layer of the encoder are concatenated and sent to the decoder to generate the answer tokens $A$: $A = \text{T5-Decoder}\big([P^{(L)}_1; P^{(L)}_2; \ldots; P^{(L)}_{N_1}]\big)$ (6).", "Stage-2 Reranking: Note that the vanilla FiD reader neglects the cross information among passages, and the joint modeling in the decoding process makes it vulnerable to noisy, irrelevant passages.", "Thus, we propose to leverage the passage graph to rerank the $N_1$ input passages during encoding and to select only the top-$N_2$ ($N_2 < N_1$) reranked passages for the decoder, which we name stage-2 reranking.", "As in stage-1 reranking, the reranking model is based on both the structural information and the textual semantic information of passages.", "We denote the passage graph as $G_1$, which is a subgraph of $G_0$.", "To avoid additional computation and memory costs, we propose to reuse the encoder-generated question-aware passage representations from the FiD reader for passage reranking, as they are already computed in Equation 5.", "Specifically, the initial node embedding $Z^{(0)}_i$ for passage $p_{a_i}$ comes from the first-token embedding of the final layer of the FiD encoder, i.e., $Z^{(0)}_i = P^{(L)}_i(0) \in \mathbb{R}^D$.", "Then, as in stage-1 reranking, we employ a GAT (Veličković et al., 2018) with $L_g$ layers as the graph neural network (GNN) model to update the representation of each node based on the passage graph, similar to Equation 2: $Z^{(L_g)} = \mathrm{GAT}(Z^{(0)}, G_1)$.", "The reranking score of passage $p_{a_i}$ is calculated as $s^{\text{stage-2}}_i = W^T Z^{(L_g)}_i$, where $W$ is a trainable model parameter.", "After reranking, only the final top-$N_2$ ($N_2 < N_1$) passages are sent for decoding.", "Supposing their indices are $\{g_1, g_2, \ldots, g_{N_2}\}$, the decoding process in Equation 6 becomes: $A = \text{T5-Decoder}\big([P^{(L)}_{g_1}; P^{(L)}_{g_2}; \ldots; P^{(L)}_{g_{N_2}}]\big)$ (7), where $A$ is the generated answer.", "Similar to stage-1 reranking, the training loss of passage ranking for each question is: $\mathcal{L}^{\text{stage-2}}_r = -\sum_{i=1}^{N_1} y_i \log\Big(\frac{\exp(s^{\text{stage-2}}_i)}{\sum_{j=1}^{N_1}\exp(s^{\text{stage-2}}_j)}\Big)$ (8), where $y_i = 1$ if $p_{a_i}$ is a gold passage that contains the answer, and 0 otherwise.", "The passage reranking and answer generation are jointly trained.", "We denote the answer generation loss for each question as $\mathcal{L}_a$; the final training loss of our reader module is then $\mathcal{L} = \mathcal{L}_a + \lambda \mathcal{L}^{\text{stage-2}}_r$, where $\lambda$ is a hyper-parameter that controls the weight of the reranking task in the total loss.", "Note that the first-stage reranking is based on DPR embeddings, which are high-level (one vector per passage) and not further trained.", "In contrast, the second stage is based on reader-generated passage-question embeddings, which are semantic-level and trainable as part of the model output."
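The stage-2 selection of Eqs. (7)-(9) can be sketched with toy tensors in place of a real T5 encoder; identity_gnn stands in for the GAT, and all names and shapes here are illustrative assumptions rather than the authors' implementation.

```python
import torch

def stage2_prune(P_mid, W, adj, n2, gnn):
    """Stage-2 sketch: P_mid holds per-passage encoder states (N1, Tp, H);
    node features are the first-token embeddings, and only the top-n2
    scored passages survive (indices I_g)."""
    Z = gnn(P_mid[:, 0, :], adj)     # Z_i^{(0)} = P_i(0)
    scores = Z @ W                   # s_i = W^T Z_i
    keep = scores.topk(n2).indices
    return P_mid[keep], scores, keep

N1, Tp, H, N2 = 6, 4, 8, 2
P_mid = torch.randn(N1, Tp, H)       # encoder output taken at some layer
W = torch.randn(H)
adj = (torch.rand(N1, N1) > 0.5).float()
identity_gnn = lambda Z, adj: Z      # stand-in for the GAT
kept, scores, keep = stage2_prune(P_mid, W, adj, N2, identity_gnn)
# The kept states would pass through any remaining encoder layers and be
# concatenated along the sequence axis for the T5 decoder, as in Eq. (7):
decoder_input = kept.reshape(1, N2 * Tp, H)
print(decoder_input.shape)           # torch.Size([1, 8, 8])
```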
"Thus, the second stage can better capture the semantic information of passages and aims for more accurate reranking over a smaller candidate set.", "In the experiments, we set $N_1 = 100$ and $N_2 = 20$.", "Recall that in stage-2 reranking, we take the passage representation from the last layer of the reader encoder for passage reranking.", "In this section, we propose to further reduce the computation cost by taking an intermediate-layer representation rather than that of the last layer.", "The intuition is that the answer generation task is more difficult than passage reranking, which only needs to predict whether a passage contains the answer or not.", "Thus, we may not need the representation from the whole encoder module for passage reranking.", "Suppose we take the representation from the $L_1$-th layer ($1 \le L_1 < L$), i.e., $Z^{(0)}_i = P^{(L_1)}_i(0)$ for $i \in \{1, 2, \ldots, N_1\}$; the reranking method remains the same.", "Then only the top-$N_2$ ($N_2 < N_1$) reranked passages go through the remaining layers of the FiD encoder.", "Supposing their indices are $I_g = \{g_1, g_2, \ldots, g_{N_2}\}$, for $l \ge L_1 + 1$: $P^{(l)}_i = \text{T5-Encoder}_l(P^{(l-1)}_i)$ if $i \in I_g$, and the computation stops otherwise (9).", "Then $P^{(L)}_{g_1}, P^{(L)}_{g_2}, \ldots, P^{(L)}_{g_{N_2}}$ are sent into the decoder for answer generation as in Equation 7.", "In Section 4.3, we demonstrate that this can reduce the computation cost by about 60% compared with the original FiD while keeping on-par performance on two benchmark datasets.", "Here we analyze the theoretical time complexity of our proposed KG-FiD compared with vanilla FiD.", "A more practical computation cost comparison is shown in Appendix A.5.", "Because the computations of both DPR retrieval and stage-1 reranking are negligible compared with the reading part, we only analyze the reading module here.", "Suppose the length of the answer sequence $A$ is denoted as $T_a$ and the average length of a passage (concatenated with the question) is $T_p$.", "For the vanilla FiD reader, the time complexity of the encoder module is $O(L \cdot N_1 \cdot T_p^2)$, where $L$ and $N_1$ denote the number of encoder layers and the number of passages for reading.", "The square comes from the self-attention mechanism.", "The decoder time complexity is $O(L \cdot (N_1 T_p T_a + T_a^2))$, where $N_1 T_p T_a$ comes from the cross-attention mechanism.", "For our reading module, all the $N_1$ candidate passages are processed by the first $L_1$ layers of the encoder.", "But only $N_2$ passages are processed by the remaining $L - L_1$ encoder layers and sent into the decoder.", "Thus, the encoder computation complexity becomes $O((L_1 N_1 + (L - L_1) N_2) \cdot T_p^2)$, and the decoder computation takes $O(L \cdot (N_2 T_p T_a + T_a^2))$.", "Because $L_1 < L$ and $N_2 < N_1$, both the encoding and decoding of our method are more efficient than those of vanilla FiD.", "Furthermore, the answer is usually much shorter than the passage (which is the case in our experiments), i.e., $T_a \ll T_p$.", "The decoding computation is then negligible compared with the encoding.", "In this case, the approximate ratio of saved computation cost brought by our proposed method is: $S = 1 - \frac{(L_1 N_1 + (L - L_1) N_2)\, T_p^2}{L \cdot N_1 \cdot T_p^2} = \Big(1 - \frac{L_1}{L}\Big)\Big(1 - \frac{N_2}{N_1}\Big)$.", "This shows that we can reduce more computation cost by decreasing $L_1$ or $N_2$.", "For example, if we set $L_1 = L/4$ and $N_2 = N_1/5$, we can reduce about 60% of the computation cost.", "More empirical results and discussions are presented in Section 4.3."
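The saved-computation ratio can be checked numerically; the sketch below assumes L = 24 (the depth of a T5-large encoder) purely for illustration.

```python
# S = (1 - L1/L) * (1 - N2/N1), the approximate fraction of encoder
# computation saved by early pruning.
def saved_ratio(L, L1, N1, N2):
    return (1 - L1 / L) * (1 - N2 / N1)

print(saved_ratio(L=24, L1=6, N1=100, N2=20))   # 0.6 -> about 60% saved
```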
"In this section, we conduct extensive experiments on the two most commonly used ODQA benchmark datasets: Natural Questions (NQ) (Kwiatkowski et al., 2019), which is based on Google Search queries, and TriviaQA (Joshi et al., 2017), which contains questions from trivia and quiz-league websites.", "We follow the same setting as Izacard and Grave (2021) to preprocess these datasets, as described in Appendix A.1.", "All our experiments are conducted on 8 Tesla A100 40GB GPUs.", "Knowledge Source: Following Karpukhin et al. (2020) and Izacard and Grave (2021), we use English Wikipedia as the text corpus and apply the same preprocessing to divide it into disjoint passages of 100 words, which produces 21M passages in total.", "For the knowledge graph, we use English Wikidata.", "The numbers of aligned entities, relations, and triplets among these entities are 2.7M, 974, and 14M, respectively.", "Model Details: For the retrieving module, we use the DPR retriever (Karpukhin et al., 2020), which contains two BERT (base) models for encoding the question and the passage separately.", "For the GNN reranking models, we adopt 3-layer Graph Attention Networks (GAT) (Veličković et al., 2018).", "For the reading module, as in Izacard and Grave (2021), we initialize it with the pretrained T5-base and T5-large models (Raffel et al., 2019); we name the former KG-FiD (base) and the latter KG-FiD (large).", "Our implementation is based on the HuggingFace Transformers library (Wolf et al., 2019).", "For the numbers of passages, we set $N_0 = 1000$, $N_1 = 100$, and $N_2 = 20$.", "The training process of our method is introduced in Appendix A.3.", "More results on model design and hyper-parameter search are in Appendix A.4.", "Evaluation: We follow the standard evaluation metric of answer prediction in ODQA, which is the exact match score (EM) (Rajpurkar et al., 2016).", "A generated answer is considered correct if it matches any answer in the list of acceptable answers after normalization (the normalization includes lowercasing and removing articles, punctuation, and duplicated whitespace).", "For all the experiments, we conduct 5 runs with different random seeds and report the averaged scores.", "We mainly compare KG-FiD with the baseline model FiD (Izacard and Grave, 2021).", "For other baselines, we compare with representative methods from each category: (1) methods not using an external knowledge source: T5 (Roberts et al., 2020) and GPT-3 (Brown et al., 2020); (2) reranking-based methods: RIDER (Mao et al., 2021) and RECONSIDER (Iyer et al., 2021); (3) methods leveraging knowledge graphs or graph information between passages: Graph-Retriever (Min et al., 2019), Path-Retriever (Asai et al., 2020), KAQA (Zhou et al., 2020), and UniK-QA (Oguz et al., 2020).", "We also compare with (4) methods with additional large-scale pre-training: REALM (Guu et al., 2020), RAG (Lewis et al., 2020), and Joint Top-K (Sachan et al., 2021).", "Comparison with Baselines: Table 1 shows the results of our method and all baselines.", "We see that our proposed model KG-FiD consistently and significantly improves over FiD on both the NQ and TriviaQA datasets for both the base and large models.", "Specifically, for the large model, KG-FiD improves FiD by 1.5% and 1.1% on the two datasets, respectively, a larger improvement than for the base model.", "We think the reason is that a more expressive reader also benefits stage-2 reranking, since the initial passage embeddings are generated by the reader encoder module.", "We also see that our proposed method outperforms all the baseline methods except UniK-QA (Oguz et al., 2020).", "However, UniK-QA uses an additional knowledge source (Wikipedia tables) for retrieval, which is highly related to the NQ dataset and makes a direct comparison with our method unfair."
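For reference, a sketch of the EM metric under the normalization described above; the authors' exact normalization code may differ in detail.

```python
import re
import string

def normalize(ans):
    """Lowercase, drop punctuation and articles, squeeze whitespace."""
    ans = ans.lower()
    ans = "".join(ch for ch in ans if ch not in string.punctuation)
    ans = re.sub(r"\b(a|an|the)\b", " ", ans)
    return " ".join(ans.split())

def exact_match(prediction, gold_answers):
    return float(normalize(prediction) in {normalize(g) for g in gold_answers})

print(exact_match("The Yankees!", ["yankees", "NY Yankees"]))  # 1.0
```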
"Efficiency & Accuracy: Table 2 shows the detailed comparison between our method and FiD for the large model version.", "The results for the base model version are shown in Appendix A.4.", "Besides the EM score, we also report the ratio of computation FLOPs (#FLOPs) and the inference latency (per question).", "The detailed calculation of #FLOPs is shown in Appendix A.5.", "From Table 2, we see that (1) for KG-FiD, decreasing $L_1$ improves the computation efficiency, as analyzed in Section 3.4, while increasing $L_1$ improves the model performance.", "We think the performance improvement comes from the noise reduction of passage filtering.", "For a larger $L_1$, the passage embeddings for reranking have better quality, so the gold passages are less likely to be filtered out.", "(2) Simply reducing the number of passages $N_1$ fed into the vanilla FiD reader can reduce the computation cost, but the performance also drops significantly (from 51.9 to 50.3 on the NQ dataset).", "(3) Our model can achieve performance on par with FiD with only 38% of the computation cost.", "When consuming the same amount of computation ($L_1 = 24$), our model significantly outperforms FiD on both the NQ and TriviaQA datasets.", "These experiments demonstrate that our model is very flexible and can improve both efficiency and effectiveness by changing $L_1$.", "Table 2 (Inference #FLOPs, latency in seconds, and exact match score of FiD (large) and KG-FiD (large)); columns: Model | #FLOPs | NQ EM | NQ Latency (s) | TriviaQA EM | TriviaQA Latency (s): FiD (N1=40): 0.40x | 50.3 | 0.74 (0.45x) | 67.5 | 0.73 (0.44x); FiD (N1=100): 1.00x | 51.9 | 1.65 (1.00x) | 68.7 | 1.66 (1.00x); KG-FiD (N1=100, L1=6): 0.38x | 52.0 | 0.70 (0.42x) | 68.9 | 0.68 (0.41x); KG-FiD (N1=100, L1=12): 0.55x | 52.3 | 0.96 (0.58x) | 69.2 | 0.94 (0.57x); KG-FiD (N1=100, L1=18): 0.72x | 52.6 | 1.22 (0.74x) | 69.8 | 1.22 (0.73x); KG-FiD (N1=100, L1=24): 0.90x | 53.4 | 1.49 (0.90x) | 69.8 | 1.48 (0.89x).", "Effect of Each Reranking Stage: Since our proposed graph-based reranking method is applied in both the retrieving stage (Section 3.2) and the reading stage (Section 3.3), we conduct an ablation study to validate the effectiveness of each.", "Table 3 shows the experimental results obtained by removing each module.", "We see that the performance of KG-FiD drops when either of the two reranking modules is removed, demonstrating that both of them improve model performance.", "We also observe that stage-1 reranking is more effective in the base model, while stage-2 reranking is more effective in the large model.", "This is reasonable, since stage-2 reranking relies on the effectiveness of the reader encoder module, where the large model is usually better than the base model.", "Passage Ranking Results: We additionally show that our proposed GNN reranking method can improve the passage retrieval results.", "This is demonstrated in Figure 2, where we report the Hits@K metric over the NQ test set, measuring the percentage of questions whose top-K retrieved passages contain a gold passage (a passage that contains the answer).", "We see that DPR+stage-1 reranking consistently outperforms DPR for all $K \in \{10, 20, 50, 100\}$.", "With two stages of reranking, the retrieval results are further improved for $K \in \{10, 20\}$ (we only care about $K \le 20$ for stage-2 reranking, since $N_2 = 20$).", "This shows that such reranking can increase the rank of gold passages that were previously ranked lower by the DPR retriever, and improve the efficacy of passage pruning.", "(Figure 2: Passage ranking results over the NQ test set for the DPR retriever and our proposed two-stage rerankings over the base model.)", "This work tackles the task of Open-Domain Question Answering.", "We focus on the current best-performing framework, FiD, and propose a novel KG-based reranking method to enhance cross-modeling between passages and improve computational efficiency.", "Our two-stage reranking method reuses the passage representations generated by the DPR retriever and the reader encoder, and applies graph neural networks to compute reranking scores.", "We further propose to use an intermediate layer of the encoder to reduce the computation cost while still maintaining good performance.", "Experiments on Natural Questions and TriviaQA show that our model can significantly improve the original FiD by up to 1.5% in exact match score, and can achieve performance on par with FiD while reducing over 60% of the computation cost.", "We thank all the reviewers for their valuable comments.", "We also thank Woojeong Jin, Dong-Ho Lee, and Aaron Chan for useful discussions.", "Donghan Yu and Yiming Yang are supported in part by the United States Department of Energy via the Brookhaven National Laboratory under Contract No. 384608." ]
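Hits@K, as reported in Figure 2, can be computed as below; the input format is an assumption made for illustration.

```python
def hits_at_k(ranked_has_gold, k):
    """ranked_has_gold: for each question, booleans over its ranked
    passages (True if the passage contains the answer). Hits@K is the
    fraction of questions whose top-K passages include a gold passage."""
    return sum(any(flags[:k]) for flags in ranked_has_gold) / len(ranked_has_gold)

runs = [[False, True, False], [False, False, False], [True, False, False]]
print(hits_at_k(runs, 1), hits_at_k(runs, 2))  # 0.333... 0.666...
```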
[ "abstain", "abstain", "result", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "other", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "other", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "other", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "method", "objective", "method", "objective", "result", "abstain", "other", "other", "other" ]
[ "In knowledge graph embedding, the theoretical relationship between the softmax cross-entropy and negative sampling loss functions has not been investigated.", "This makes it difficult to fairly compare the results of the two different loss functions.", "We attempted to solve this problem by using the Bregman divergence to provide a unified interpretation of the softmax cross-entropy and negative sampling loss functions.", "Under this interpretation, we can derive theoretical findings for fair comparison.", "Experimental results on the FB15k-237 and WN18RR datasets show that the theoretical findings are valid in practical settings.", "Negative Sampling (NS) (Mikolov et al., 2013) is an approximation of softmax cross-entropy (SCE).", "Due to its efficiency in computation cost, NS is now a fundamental loss function for various Natural Language Processing (NLP) tasks such as used in word embedding (Mikolov et al., 2013), language modeling (Melamud et al., 2017), contextualized embedding (Clark et al., 2020a,b), and knowledge graph embedding (KGE) (Trouillon et al., 2016).", "Specifically, recent KGE models commonly use NS for training.", "Considering the current usages of NS, we investigated the characteristics of NS by mainly focusing on KGE from theoretical and empirical aspects.", "First, we introduce the task description of KGE.", "A knowledge graph is a graph that describes the relationships between entities.", "It is an indispensable resource for knowledge-intensive NLP applications such as dialogue (Moon et al., 2019) and question-answering (Lukovnikov et al., 2017) systems.", "However, to create a knowledge graph, it is necessary to consider a large number of entity combinations and their relationships, making it difficult to construct a complete graph manually.", "Therefore, the prediction of links between entities is an important task.", "Currently, missing relational links between entities are predicted using a scoring method based on KGE (Bordes et al., 2011).", "With this method, a score for each link is computed on vector space representations of embedded entities and relations.", "We can train these representations through various loss functions.", "The SCE (Kadlec et al., 2017) and NS (Trouillon et al., 2016) loss functions are commonly used for this purpose.", "Several studies (Ruffinelli et al., 2020; Ali et al., 2020) have shown that link-prediction performance can be significantly improved by choosing the appropriate combination of loss functions and scoring methods.", "However, the relationship between the SCE and NS loss functions has not been investigated in KGE.", "Without a basis for understanding the relationships among different loss functions, it is difficult to make a fair comparison between the SCE and NS results.", "We attempted to solve this problem by using the Bregman divergence (Bregman, 1967) to provide a unified interpretation of the SCE and NS loss functions.", "Under this interpretation, we can understand the relationships between SCE and NS in terms of the model's predicted distribution at the optimal solution, which we called the objective distribution .", "By deriving the objective distribution for a loss function, we can analyze different loss functions, the objective distributions of which are identical under certain conditions, from a unified viewpoint.", "restricted to KGE as follows:", "The objective distribution of NS with uniform noise (NS w/ Uni) is equivalent to that of SCE.", "The objective distribution of self-adversarial negative 
"NS with frequency-based noise (NS w/ Freq), as used in word2vec, has a smoothing effect on the objective distribution (word2vec uses the unigram distribution as the frequency-based noise).", "SCE has a property wherein it more strongly fits a model to the training data than NS.", "To check the validity of the theoretical findings in practical settings, we conducted experiments on the FB15k-237 (Toutanova and Chen, 2015) and WN18RR (Dettmers et al., 2018) datasets.", "The experimental results indicate the following:", "The relationship between SCE and SCE w/ LS is also similar to that between NS and SANS in practical settings.", "NS is prone to underfitting because it fits a model to the training data more weakly than SCE.", "SCE causes underfitting of KGE models when their score function has a bound.", "Both SANS and SCE w/ LS perform well as pre-training methods.", "The structure of this paper is as follows: Sec. 2 introduces SCE and the Bregman divergence; Sec. 3 induces the objective distributions for NS; Sec. 4 analyzes the relationships between the SCE and NS loss functions; Sec. 5 summarizes and discusses our theoretical findings; Sec. 6 empirically investigates the validity of the theoretical findings in practical settings; Sec. 7 explains the differences between this paper and related work; and Sec. 8 summarizes our contributions.", "Our code will be available at https://github.com/kamigaito/acl2021kge.", "2 Softmax Cross Entropy and Bregman Divergence, 2.1 SCE in KGE: We denote a link representing a relationship $r_k$ between entities $e_i$ and $e_j$ in a knowledge graph as $(e_i, r_k, e_j)$.", "In predicting links from the given queries $(e_i, r_k, ?)$ and $(?, r_k, e_j)$, the model must predict the entities corresponding to each $?$ in the queries.", "We denote such a query as $x$ and the entity to be predicted as $y$.", "By using the softmax function, the probability $p_\theta(y|x)$ that $y$ is predicted from $x$ with the model parameter $\theta$, given a score function $f_\theta(x, y)$, is expressed as follows: $p_\theta(y|x) = \frac{\exp(f_\theta(x,y))}{\sum_{y' \in Y}\exp(f_\theta(x,y'))}$ (1), where $Y$ is the set of all predictable entities.", "We further denote the pair of an input $x$ and its label $y$ as $(x, y)$.", "Let $D = \{(x_1, y_1), \ldots, (x_{|D|}, y_{|D|})\}$ be observed data that obey a distribution $p_d(x, y)$.", "Next, we introduce the Bregman divergence.", "Let $\phi(z)$ be a differentiable function; the Bregman divergence between two distributions $f$ and $g$ is defined as follows: $d_{\phi(z)}(f, g) = \phi(f) - \phi(g) - \nabla\phi(g)^T(f - g)$.", "We can express various divergences by changing $\phi(z)$.", "To take into account the divergence over the entire observed data, we consider the expectation of $d_\phi(f, g)$: $B_{\phi(z)}(f, g) = \sum_{x,y} d_{\phi(z)}(f(y|x), g(y|x))\, p_d(x, y)$.", "To investigate the relationship between a loss function and the learned distribution of a model at an optimal solution of the loss function, we need to focus on the minimization of $B_{\phi(z)}$.", "Gutmann and Hirayama (2011) showed that $B_{\phi(z)}(f, g) = 0$ means that $f$ equals $g$ almost everywhere when $\phi(z)$ is a differentiable strictly convex function in its domain.", "Note that all $\phi(z)$ in this paper satisfy this condition.", "Accordingly, by fixing $f$, minimization of $B_{\phi(z)}(f, g)$ with respect to $g$ is equivalent to minimization of $B'_{\phi(z)}(f, g) = \sum_{x,y}\big[-\phi(g) + \nabla\phi(g)^T g - \nabla\phi(g)^T f\big]\, p_d(x, y)$ (3)."
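To make the derivation concrete, the following sketch evaluates the Bregman divergence numerically and confirms that, with phi(z) = sum_i z_i log z_i, it coincides with the KL divergence between two distributions, whose minimization over the model distribution is exactly the SCE objective derived next.

```python
import numpy as np

def bregman(phi, grad_phi, f, g):
    # d_phi(f, g) = phi(f) - phi(g) - grad_phi(g) . (f - g)
    return phi(f) - phi(g) - grad_phi(g) @ (f - g)

phi = lambda z: np.sum(z * np.log(z))   # phi(z) = sum_i z_i log z_i
grad = lambda z: np.log(z) + 1.0

f = np.array([0.7, 0.2, 0.1])           # data distribution p_d(y|x)
g = np.array([0.5, 0.3, 0.2])           # model distribution p_theta(y|x)
# For normalized distributions, this Bregman divergence equals KL(f || g).
print(bregman(phi, grad, f, g), np.sum(f * np.log(f / g)))
```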
"We use $B'_{\phi}(f, g)$ to reveal the learned distribution of a model at the optimal solutions of the SCE and NS loss functions.", "For the later explanations, we first derive the SCE loss function from Eq. (3).", "We denote the probability for a label $y$ as $p(y)$, the vector of all $y$ as $\mathbf{y}$, the vector of probabilities for $\mathbf{y}$ as $p(\mathbf{y})$, and the dimension size of $z$ as $len(z)$.", "In Eq. (3), by setting $f$ as $p_d(\mathbf{y}|x)$ and $g$ as $p_\theta(\mathbf{y}|x)$ with $\phi(z) = \sum_{i=1}^{len(z)} z_i \log z_i$ (Banerjee et al., 2005), we can derive the SCE loss function as follows: $B'_{\phi(z)}(p_d(\mathbf{y}|x), p_\theta(\mathbf{y}|x)) = -\sum_{x,\mathbf{y}}\Big[\sum_{i=1}^{|Y|} p_d(y_i|x)\log p_\theta(y_i|x)\Big]\, p_d(x, \mathbf{y})$ (4) $= -\frac{1}{|D|}\sum_{(x,y)\in D}\log p_\theta(y|x)$ (5).", "This derivation indicates that $p_\theta(y|x)$ converges to the observed distribution $p_d(y|x)$ through minimizing $B'_{\phi(z)}(p_d(\mathbf{y}|x), p_\theta(\mathbf{y}|x))$ in the SCE loss function.", "We call the distribution of $p_\theta(y|x)$ when $B'_{\phi(z)}$ equals zero an objective distribution.", "We begin by providing a definition of NS and its relationship to the Bregman divergence, following the induction of noise contrastive estimation (NCE) from the Bregman divergence that was established by Gutmann and Hirayama (2011).", "We denote by $p_n(y|x)$ a known non-zero noise distribution for $y$ given $x$.", "Given $\nu$ noise samples from $p_n(y|x)$ for each $(x, y) \in D$, NS estimates the model parameter $\theta$ for a distribution $G(y|x;\theta) = \exp(f_\theta(x, y))$.", "By assigning to each $(x, y)$ a binary class label $C$, with $C = 1$ if $(x, y)$ is drawn from the observed data $D$ following a distribution $p_d(x, y)$ and $C = 0$ if $(x, y)$ is drawn from the noise distribution $p_n(y|x)$, we can model the posterior probabilities for the classes as follows: $p(C = 1, y|x;\theta) = \frac{1}{1 + G(y|x;\theta)^{-1}}$, $p(C = 0, y|x;\theta) = 1 - p(C = 1, y|x;\theta) = \frac{G(y|x;\theta)^{-1}}{1 + G(y|x;\theta)^{-1}}$.", "The objective function $\ell_{\mathrm{NS}}(\theta)$ of NS is defined as follows: $\ell_{\mathrm{NS}}(\theta) = -\frac{1}{|D|}\sum_{(x,y)\in D}\Big[\log p(C = 1, y|x;\theta) + \sum_{i=1,\, y_i \sim p_n}^{\nu}\log p(C = 0, y_i|x;\theta)\Big]$ (6).", "By using the Bregman divergence, we can induce the following propositions for $\ell_{\mathrm{NS}}(\theta)$.", "Proposition 1: $\ell_{\mathrm{NS}}(\theta)$ can be induced from Eq. (3) by setting $\phi(z)$ as: $\phi(z) = z\log(z) - (1 + z)\log(1 + z)$ (7).", "Proposition 2: When $\ell_{\mathrm{NS}}(\theta)$ is minimized, the following equation is satisfied: $G(y|x;\theta) = \frac{p_d(y|x)}{\nu\, p_n(y|x)}$ (8).", "Proposition 3: The objective distribution of $p_\theta(y|x)$ for $\ell_{\mathrm{NS}}(\theta)$ is $\frac{p_d(y|x)/p_n(y|x)}{\sum_{y_i \in Y} p_d(y_i|x)/p_n(y_i|x)}$ (9).", "Proof: described in the supplemental material."
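Under the posterior parameterization above, Eq. (6) for a single pair (x, y) reduces to the familiar logistic form used in practice; this sketch assumes that form and PyTorch, and is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def ns_loss(pos_score, neg_scores):
    """NS loss of Eq. (6) for one (x, y): pos_score = f_theta(x, y),
    neg_scores = f_theta(x, y_i) for nu noise samples y_i ~ p_n(y|x).
    log sigmoid(s) = log p(C=1) and log sigmoid(-s) = log p(C=0)."""
    return -(F.logsigmoid(pos_score) + F.logsigmoid(-neg_scores).sum())

pos = torch.tensor(2.0)
negs = torch.tensor([-1.0, 0.5, -2.0])   # nu = 3 noise samples
print(ns_loss(pos, negs))
```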
"We can also investigate the validity of Props. 1, 2, and 3 by comparing them with a previously reported result.", "For this purpose, we prove the following proposition:", "Proposition 4: When Eq. (8) satisfies $\nu = 1$ and $p_n(y|x) = p_d(y)$, $f_\theta(x, y)$ equals the pointwise mutual information (PMI).", "Proof: This is described in Appendix B of the supplemental material.", "This observation is consistent with that of Levy and Goldberg (2014).", "The differences between their representation and ours are as follows: (1) our noise distribution is general in the sense that its definition is not restricted to a unigram distribution; (2) we mainly discuss $p_\theta(y|x)$, not $f_\theta(x, y)$; and (3) we can compare NS- and SCE-based loss functions through the Bregman divergence.", "Different from the objective distribution of SCE, Eq. (9) is affected by the type of noise distribution $p_n(y|x)$.", "To investigate the actual objective distribution for $\ell_{\mathrm{NS}}(\theta)$, we need to consider separate cases for each type of noise distribution.", "In this subsection, we further analyze Eq. (9) for each separate case.", "First, we investigate the case of a uniform distribution, because it is one of the most common noise distributions for $\ell_{\mathrm{NS}}(\theta)$ in the KGE task.", "From Eq. (9), we can induce the following property.", "Proposition 5: When $p_n(y|x)$ is a uniform distribution, Eq. (9) equals $p_d(y|x)$.", "Proof: This is described in Appendix C of the supplemental material.", "Dyer (2014) indicated that NS is equal to NCE when $\nu = |Y|$ and $p_n(y|x)$ is uniform.", "However, as we showed, the value of $\nu$ is not related to the objective distribution, because Eq. (9) is independent of $\nu$.", "In the original setting of NS (Mikolov et al., 2013), the authors chose as $p_n(y|x)$ a unigram distribution of $y$, which is independent of $x$.", "Such a frequency-based distribution is calculated in terms of frequencies in a corpus and is independent of the model parameter $\theta$.", "Since in this case, different from the case of a uniform distribution, $p_n(y|x)$ remains on the right side of Eq. (9), $p_\theta(y|x)$ decreases when $p_n(y|x)$ increases.", "Thus, we can interpret frequency-based noise as a type of smoothing of $p_d(y|x)$.", "The smoothing of NS w/ Freq decreases the importance of high-frequency labels in the training data for learning more general vector representations, which can be used for various tasks as pre-trained vectors.", "Since we can expect pre-trained vectors to work as a prior (Erhan et al., 2010) that prevents models from overfitting, we tried using NS w/ Freq for pre-training KGE models in our experiments."
"Sun et al. (2019) recently proposed SANS, which uses the model distribution $p_{\theta'}(y|x)$ for generating negative samples.", "By replacing $p_n(y|x)$ with $p_{\theta'}(y|x)$, the objective distribution when using SANS is as follows: $p_\theta(y|x) = \frac{p_d(y|x)/p_{\theta'}(y|x)}{\sum_{y_i \in Y} p_d(y_i|x)/p_{\theta'}(y_i|x)}$ (10), where $\theta'$ is the parameter set updated in the previous iteration.", "Because both the left and right sides of Eq. (10) include model distributions, we cannot obtain an analytical solution for $p_\theta(y|x)$ from this equation.", "However, we can consider special cases of $p_{\theta'}(y|x)$ to gain an understanding of Eq. (10).", "At the beginning of training, $p_{\theta'}(y|x)$ follows a discrete uniform distribution $u\{1, |Y|\}$, because $\theta'$ is randomly initialized.", "In this situation, when we set $p_{\theta'}(y|x)$ in Eq. (10) to $u\{1, |Y|\}$, the objective distribution becomes $p_\theta(y|x) = p_d(y|x)$; conversely, setting $p_{\theta'}(y|x) = p_d(y|x)$ yields $p_\theta(y|x) = u\{1, |Y|\}$.", "In actual mini-batch training, $\theta'$ is iteratively updated for every batch of data.", "Because $p_\theta(y|x)$ converges to $u\{1, |Y|\}$ when $p_{\theta'}(y|x)$ is close to $p_d(y|x)$, and $p_\theta(y|x)$ converges to $p_d(y|x)$ when $p_{\theta'}(y|x)$ is close to $u\{1, |Y|\}$, we can approximately regard the objective distribution of SANS as a mixture of $p_d$ and $u\{1, |Y|\}$.", "Thus, we can represent the objective distribution of $p_\theta(y|x)$ as $p_\theta(y|x) \approx (1 - \lambda)\, p_d(y|x) + \lambda\, u\{1, |Y|\}$ (13), where $\lambda$ is a hyper-parameter that determines whether $p_\theta(y|x)$ is close to $p_d(y|x)$ or to $u\{1, |Y|\}$.", "Assuming that $p_{\theta'}(y|x)$ starts from $u\{1, |Y|\}$, $\lambda$ should start from 0 and gradually increase through training.", "Note that $\alpha$ corresponds to a temperature for $p_{\theta'}(y|x)$ in SANS, defined as $p_{\theta'}(y|x) = \frac{\exp(\alpha f_{\theta'}(x, y))}{\sum_{y' \in Y}\exp(\alpha f_{\theta'}(x, y'))}$ (14), where $\alpha$ also adjusts $p_{\theta'}(y|x)$ to be close to $p_d(y|x)$ or to $u\{1, |Y|\}$."
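For comparison, SANS is commonly implemented by re-weighting the noise terms of the NS loss with the model's own distribution under temperature alpha, following Sun et al. (2019); the sketch below assumes this form, which the present paper does not spell out, with the weights detached so that they act as sampling probabilities rather than gradient paths.

```python
import torch
import torch.nn.functional as F

def sans_loss(pos_score, neg_scores, alpha=1.0):
    """Self-adversarial NS sketch: noise samples are weighted by the
    model distribution with temperature alpha, cf. Eq. (14)."""
    weights = F.softmax(alpha * neg_scores, dim=-1).detach()
    return -(F.logsigmoid(pos_score)
             + (weights * F.logsigmoid(-neg_scores)).sum())

print(sans_loss(torch.tensor(2.0), torch.tensor([-1.0, 0.5, -2.0])))
```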
"4 Theoretical Relationships among Loss Functions, 4.1 Corresponding SCE form to NS with Frequency-based Noise: We induce a corresponding cross-entropy loss from the objective distribution for NS with frequency-based noise.", "We set $T_{x,y} = p_n(y|x)\sum_{y_i \in Y}\frac{p_d(y_i|x)}{p_n(y_i|x)}$, $q(y|x) = T_{x,y}^{-1}\, p_d(y|x)$, and $\phi(z) = \sum_{i=1}^{len(z)} z_i\log z_i$.", "Under these conditions, following the induction from Eq. (4) to Eq. (5), we can reformulate $B'_{\phi(z)}(q(\mathbf{y}|x), p_\theta(\mathbf{y}|x))$ as follows: $B'_{\phi(z)}(q(\mathbf{y}|x), p_\theta(\mathbf{y}|x)) = -\sum_{x,\mathbf{y}}\Big[\sum_{i=1}^{|Y|} T_{x,y_i}^{-1}\, p_d(y_i|x)\log p_\theta(y_i|x)\Big]\, p_d(x,\mathbf{y}) = -\frac{1}{|D|}\sum_{(x,y)\in D} T_{x,y}^{-1}\log p_\theta(y|x)$ (15).", "Except that $T_{x,y}$ is conditioned on $x$ and not normalized over $y$, we can interpret this loss function as SCE with backward correction (SCE w/ BC) (Patrini et al., 2017).", "Taking into account that backward correction can act as a smoothing method for predicted labels (Lukasik et al., 2020), this relationship supports the theoretical finding that NS can apply smoothing to the objective distribution.", "Because frequency-based noise is used in word2vec as unigram noise, we specifically consider the case in which $p_n(y|x)$ is set to unigram noise.", "In this case, we can set $p_n(y|x) = p_d(y)$.", "Since relation tuples do not appear twice in a knowledge graph, we can assume that $p_d(x, y)$ is uniform.", "Accordingly, we can rewrite $T_{x,y}^{-1}$ as $\frac{1}{p_d(y)\sum_{y_i \in Y} p_d(y_i|x)/p_d(y_i)} = \frac{1}{p_d(y)\sum_{y_i \in Y} p_d(y_i, x)/(p_d(y_i)\, p_d(x))} = \frac{p_d(x)}{p_d(y)\, C}$, where $C$ is a constant, and we can reformulate Eq. (15) as follows: $-\frac{1}{|D|}\sum_{(x,y)\in D}\frac{p_d(x)}{p_d(y)\, C}\log p_\theta(y|x) \propto -\frac{1}{|D|}\sum_{(x,y)\in D}\frac{\#x}{\#y}\log p_\theta(y|x)$ (16), where $\#x$ and $\#y$ respectively represent the frequencies of $x$ and $y$ in the training data.", "We use Eq. (16) to pre-train models for SCE-based loss functions.", "We induce a corresponding cross-entropy loss from the objective distribution for SANS by setting $q(y|x) = (1 - \lambda)\, p_d(y|x) + \lambda\, u\{1, |Y|\}$ and $\phi(z) = \sum_{i=1}^{len(z)} z_i\log z_i$.", "Under these conditions, on the basis of the induction from Eq. (4) to Eq. (5), we can reformulate $B'_{\phi(z)}(q(\mathbf{y}|x), p_\theta(\mathbf{y}|x))$ as follows: $B'_{\phi(z)}(q(\mathbf{y}|x), p_\theta(\mathbf{y}|x)) = -\sum_{x,\mathbf{y}}\Big[(1 - \lambda)\sum_{i=1}^{|Y|} p_d(y_i|x)\log p_\theta(y_i|x) + \lambda\sum_{i=1}^{|Y|} u\{1, |Y|\}\log p_\theta(y_i|x)\Big]\, p_d(x,\mathbf{y}) = -\frac{1}{|D|}\sum_{(x,y)\in D}\Big[(1 - \lambda)\log p_\theta(y|x) + \sum_{i=1}^{|Y|}\frac{\lambda}{|Y|}\log p_\theta(y_i|x)\Big]$ (17).", "The expression in the brackets of Eq. (17) is the cross-entropy loss whose objective distribution corresponds to that of SANS.", "This loss function is similar in form to SCE with label smoothing (SCE w/ LS) (Szegedy et al., 2016).", "This relationship also accords with the theoretical finding that NS can apply smoothing to the objective distribution.", "We summarize the theoretical findings from Sections 2, 3, and 4 in Table 1.", "To compare the results derived from the theoretical findings, we need to understand the differences in their objective distributions and divergences.", "The objective distributions for NS w/ Uni and SCE are equivalent.", "We can also see that the objective distribution for SANS is quite similar to that for SCE w/ LS.", "These theoretical findings are important for making a fair comparison between scoring methods trained with the NS and SCE loss functions."
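The bracketed loss of Eq. (17), i.e., SCE w/ LS, can be written compactly as below (a sketch assuming PyTorch; lam corresponds to lambda).

```python
import torch
import torch.nn.functional as F

def sce_label_smoothing(logits, gold, lam):
    """SCE w/ LS of Eq. (17): mix the one-hot target for the gold label
    with the discrete uniform distribution u{1, |Y|} using weight lam."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -((1 - lam) * log_probs[gold] + lam * log_probs.mean())

print(sce_label_smoothing(torch.tensor([2.0, 0.5, -1.0]), gold=0, lam=0.3))
```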
determining the behavior of the model.", "Figure 1 shows the distance in Eq.", "(3) between the probability p and probability 0 .", "5 for each in Table 1 3 .", "As we can see from the example, d ( z ) ( 0 . 5 , p ) of the SCE loss has a larger distance than that of the NS loss.", "In fact, Painsky and Wornell (2020) proved that the upper bound of the Bregman divergence for binary labels when ( z ) = len ( z ) i = 1 z i log z i .", "This means that the SCE loss imposes a larger penalty on the same predicted value than the NS loss when the value of the learning target is the same between the two losses 4 .", "However, this does not guarantee that the distance of SCE is always larger than NS.", "This is because the values of the learning target between the two losses are not always the same.", "To take into account the generally satisfied property, we also focus on the convexity of the functions.", "In each training instance, the first-order and second-order derivatives of these loss functions indicate that SCE is convex, but NS is not in their domains 5 .", "Since this property is independent of the objective distribution, we can consider SCE fits the model more strongly to the training data in general.", "Because of these features, SCE can be prone to overfitting.", "Whether the overfitting is a problem depends on how large the difference between training and test data is.", "To measure the difference between training and test data in a KG dataset, we calculated the Kullback-Leibler (KL) divergence for p ( y | x ) between the training and test data of commonly used KG datasets.", "To compute p ( y | x ) , we first calculated 3 In this setting, we can expand ( z ) = len ( z ) i = 1 z i log z i to ( z ) = z log z +( 1 z ) log ( 1 z ) .", "p ( e i | r k , e j ) = p ( e i | r k )+ p ( e i | e j ) on the basis of frequencies in the data then calculated p ( e j | r k , e i ) in the same manner.", "We treated both p ( e i | r k , e j ) and p ( e j | r k , e i ) as p ( y | x ) .", "We denote p ( y | x ) in the training data as P and in the test data as Q .", "With these notations, we calculated DKL ( P || Q ) as the KL divergence for p ( y | x ) between the test and training data.", "Figure 2 shows the results.", "There is a large difference in the KL divergence between FB15k-237 and WN18RR.", "We investigated how this difference affects the SCE and NS loss functions for learning KGE models.", "In a practical setting, the loss function's divergence is not the only factor to affect the fit to the training data.", "Model selection also affects the fitting.", "However, understanding a model's behavior is difficult due to the complicated relationship between model parameters.", "For this reason, we experimentally investigated which combinations of models and loss functions are suitable for link prediction.", "We evaluated the following models on the FB15k-237 and WN18RR datasets in terms of the Mean Reciprocal Rank (MRR), Hits@1, Hits@3, and Hits@10 metrics: TuckER (Balazevic et al., 2019); RESCAL (Bordes et al., 2011); ComplEx (Trouil-lon et al., 2016); DistMult (Yang et al., 2015); TransE (Bordes et al., 2013); RotatE (Sun et al., 2019).", "We used LibKGE (Broscheit et al., 2020) 6 as the implementation.", "For each model to be able to handle queries in both directions, we also trained a model for the reverse direction that shares the entity embeddings with the model for the forward direction.", "To determine the hyperparameters of these models, for RESCAL, ComplEx, DistMult, and TransE with SCE and 
"Figure 2 shows the results.", "There is a large difference in the KL divergence between FB15k-237 and WN18RR.", "We investigated how this difference affects the SCE and NS loss functions for learning KGE models.", "In a practical setting, the loss function's divergence is not the only factor that affects the fit to the training data.", "Model selection also affects the fitting.", "However, understanding a model's behavior is difficult due to the complicated relationships between model parameters.", "For this reason, we experimentally investigated which combinations of models and loss functions are suitable for link prediction.", "We evaluated the following models on the FB15k-237 and WN18RR datasets in terms of the Mean Reciprocal Rank (MRR), Hits@1, Hits@3, and Hits@10 metrics: TuckER (Balazevic et al., 2019), RESCAL (Bordes et al., 2011), ComplEx (Trouillon et al., 2016), DistMult (Yang et al., 2015), TransE (Bordes et al., 2013), and RotatE (Sun et al., 2019).", "We used LibKGE (Broscheit et al., 2020; https://github.com/uma-pi1/kge) as the implementation.", "For each model to be able to handle queries in both directions, we also trained a model for the reverse direction that shares the entity embeddings with the model for the forward direction.", "To determine the hyperparameters of these models, for RESCAL, ComplEx, DistMult, and TransE with SCE and SCE w/ LS, we used the settings that achieved the highest performance in a previous study (Ruffinelli et al., 2020) for each loss function, as well as the settings from the original papers for TuckER and RotatE.", "For TransE with NS and SANS, we used the settings of Sun et al. (2019).", "When applying SANS, we set the temperature $\alpha$ to the LibKGE initial value of 1.0 for all models except TransE and RotatE; for TransE and RotatE, we followed the settings of the original paper, since SANS was used in it.", "When applying SCE w/ LS, we set $\lambda$ to the initial value of LibKGE, 0.3, except for TransE and RotatE.", "Because the corresponding values of SANS for TransE and RotatE were tuned in the original setting of RotatE, we also selected $\lambda$ from {0.3, 0.1, 0.01} using the development data for TransE and RotatE, for a fair comparison.", "Appendix D in the supplemental material details the experimental settings.", "Table 2 shows the results for each loss and model combination.", "In the following subsections, we discuss whether our findings hold in a practical setting on the basis of these results.", "In terms of the objective distribution, when SCE w/ LS improves performance, SANS also improves performance in many cases.", "Moreover, this accords with our finding that SCE w/ LS and SANS have similar effects.", "For TransE and RotatE, the relationship does not hold, but as we will see later, this is probably because TransE with SCE and RotatE with SCE did not fit the training data.", "If SCE does not fit the training data, the effect of SCE w/ LS is suppressed, as it has the same effect as smoothing.", "Next, let us focus on the distance of the loss functions.", "A comparison of the results on WN18RR and FB15k-237 shows no performance degradation of SCE compared with NS.", "This indicates that the difference between the training and test data in WN18RR is not large enough to cause overfitting problems for SCE.", "In terms of the combination of models and loss functions, the results of NS are worse than those of SCE for TuckER, RESCAL, ComplEx, and DistMult.", "Because these four models have no constraint that prevents fitting to the training data, we consider that the lower scores are caused by underfitting.", "This conjecture is based on the fact that the NS loss fits model-predicted distributions to training-data distributions more weakly than the SCE loss in terms of divergence and convexity.", "In contrast, the difference between NS and SCE is smaller for TransE and RotatE.", "This is because the score functions of TransE and RotatE have bounds and cannot express positive values.", "Since SCE has a normalization term, it is difficult to represent values close to 1 when the score function cannot represent positive values.", "This feature prevents TransE and RotatE from completely fitting to the training data.", "Therefore, we can assume that NS can be a useful loss function when the score function is bounded.", "We also explored pre-training for learning KGE models.", "We selected the methods in Table 2 that achieved the best MRR for each NS-based loss and each SCE-based loss on each dataset.", "In accordance with the success of word2vec, we chose unigram noise for both NS w/ Freq and SCE w/ BC.", "Table 3 shows the results.", "Contrary to our expectations, SCE w/ BC does not work well as a pre-training method.", "Because the unigram noise for SCE w/ BC can drastically change the original data distribution, SCE w/ BC is thought to be effective when the difference between training and test data is large."
"However, since the difference is not so large in the KG datasets, as discussed in the previous subsection, we believe that unigram noise may be unsuitable for these datasets.", "Compared with SCE w/ BC, both SCE w/ LS and SANS are effective for pre-training.", "This is because the hyperparameters of SCE w/ LS and SANS are adjusted for the KG datasets.", "When using vanilla SCE as a pre-training method, there is little improvement in prediction performance compared with the other methods.", "This result suggests that increasing $\lambda$ during training is not that important for improving task performance.", "For RotatE, there is no improvement from pre-training.", "Because RotatE has strict constraints on its relation representation, we believe these may degrade the effectiveness of pre-training.", "Mikolov et al. (2013) proposed the NS loss function as an approximation of the SCE loss function to reduce the computational cost and handle a large vocabulary for learning word embeddings.", "NS is now used in various NLP tasks that must handle a large number of vocabulary items or labels.", "Melamud et al. (2017) used the NS loss function for training a language model.", "Trouillon et al. (2016) introduced the NS loss function to KGE.", "For contextualized pre-trained embeddings, Clark et al. (2020a) indicated that ELECTRA (Clark et al., 2020b), a variant of BERT (Devlin et al., 2019), follows the same scheme as the NS loss function.", "NS is frequently used to train KGE models.", "KGE is the task of completing a knowledge graph that describes relationships between entities.", "Knowledge graphs are used in various important downstream tasks because of their convenience in incorporating external knowledge, such as language modeling (Logan et al., 2019), dialogue (Moon et al., 2019), question answering (Lukovnikov et al., 2017), natural language inference (K M et al., 2018), and named entity recognition (He et al., 2020).", "Thus, current KGE is important in NLP.", "Due to the importance of KGE, various scoring methods have been proposed, including RESCAL (Bordes et al., 2011), TransE (Bordes et al., 2013), DistMult (Yang et al., 2015), ComplEx (Trouillon et al., 2016), TuckER (Balazevic et al., 2019), and RotatE (Sun et al., 2019), which are used in our experiments.", "However, the relationship between these score functions and loss functions is not clear.", "Several studies (Ruffinelli et al., 2020; Ali et al., 2020) have investigated the best combinations of scoring method, loss function, and their hyperparameters on KG datasets.", "These studies differ from ours in that they focused on empirically searching for good combinations rather than on theoretical investigation.", "As a theoretical study, Levy and Goldberg (2014) showed that NS is equivalent to factorizing a matrix of PMI values when a unigram distribution is selected as the noise distribution.", "Dyer (2014) investigated the difference between NCE (Gutmann and Hyvärinen, 2010) and NS.", "Gutmann and Hirayama (2011) revealed that NCE is derivable from the Bregman divergence.", "Our derivation for NS is inspired by their work.", "Meister et al. (2020) proposed a framework that jointly interprets label smoothing and the confidence penalty (Pereyra et al., 2017) by investigating their divergences."
"Yang et al. (2020) theoretically derived that a noise distribution close to the true distribution behind the training data is suitable for training KGE models with NS.", "They also proposed a variant of SANS on the basis of their investigation.", "Different from these studies, we investigated the distributions at the optimal solutions of the SCE and NS loss functions while considering several types of noise distribution in NS.", "We revealed the relationships between the SCE and NS loss functions in KGE.", "Through theoretical analysis, we showed that SCE and NS w/ Uni are equivalent in terms of the objective distribution, i.e., the predicted distribution of a model at an optimal solution, and that SCE w/ LS and SANS have similar objective distributions.", "We also showed that SCE fits a model to the training data more strongly than NS due to the divergence and convexity of SCE.", "The experimental results indicate that the difference in divergence between the two losses was not large enough to be affected by the differences between the datasets.", "The results also indicate that SCE works well with highly flexible scoring methods, which do not have any bound on their scores, while NS works well with RotatE, whose bounded scoring function cannot express positive values.", "Moreover, they indicate that SCE and SANS work better for pre-training than NS w/ Uni, which is commonly used for learning word embeddings.", "For future work, we will investigate the properties of the loss functions on out-of-domain data.", "This work was partially supported by JSPS Kakenhi Grant nos. 19K20339, 21H03491, and 21K17801." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "objective", "objective", "objective", "objective", "method", "abstain", "objective", "objective", "objective", "other", "other", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "objective", "result", "objective", "result", "abstain", "abstain", "abstain", "objective", "other" ]
[ "To capture salient contextual information for spoken language understanding (SLU) of a dialogue, we propose time-aware models that automatically learn the latent time-decay function of the history without a manual time-decay function.", "We also propose a method to identify and label the current speaker to improve the SLU accuracy.", "In experiments on the benchmark dataset used in Dialog State Tracking Challenge 4, the proposed models achieved significantly higher F1 scores than the state-of-the-art contextual models.", "Finally, we analyze the effectiveness of the introduced models in detail.", "The analysis demonstrates that the proposed methods were effective to improve SLU accuracy individually.", "Spoken language understanding (SLU) is a component that understands the user's utterance of a dialogue system.", "Given an utterance, SLU generates a structured meaning representation of the utterance; i.e., a semantic frame.", "SLU can be decomposed into several subtasks such as domain identification, intent prediction and slot filling; these subtasks can be jointly assigned using a single model (Hakkani-Tur et al., 2016; Liu and Lane, 2016; Chen et al., 2016b).", "The accuracy of SLU is important for the dialogue system to generate an appropriate response to a user.", "To improve the accuracy of SLU, much work has used contextual information of dialogues to alleviate the ambiguity of recognition of the given utterance.", "In SLU, selecting important history information is crucial, and it directly influences the improvement of SLU accuracy.", "To summarize this history, content-aware models (Chen et al., 2016a; Kim et al., 2017) similar to attention models in machine translation (Bahdanau et al., 2014) have been proposed.", "However, content-aware models are likely to select the wrong history when the histories are similar in content.", "To alleviate this problem, time-aware models (Chen et al., 2017; Su et al., 2018a,b) which pay attention to recent previous utterances by using the temporal distance between a previous utterance and a current utterance are being considered; the models are based on mathematical formulas, time-decay functions, which are formulated by human, and decomposed into trainable parameters.", "However, the previous time-aware models may not be sufficiently accurate.", "In the models, either a single time-decay function is used or a limited number of time-decay functions are linearly combined; these manual functions may not be sufficiently flexible to learn an optimal time-decay function.", "In this paper, we propose flexible and effective time-aware attention models to improve SLU accuracy.", "The proposed models do not need any manual time-decay function, but learn a time-decay tendency directly by introducing a trainable distance vector, and therefore have good SLU accuracy.", "The proposed models do not use long short-term memory (LSTM) to summarize histories, and therefore use fewer parameters than previous time-aware models.", "We also propose current-speaker modeling by using a speaker indicator that identifies the current speaker.", "To the best of our knowledge, this is the first method that shows improvement by considering the identity of the current speaker.", "This information may be helpful for modeling multi-party conversations in addition to human-human conversations.", "Prediction of the semantic label of the current utterance even using a conventional time-aware model can be difficult.", "(Figure 1).", "The nearest utterance is Right., but it is 
Figure 1 shows an example of utterances with their semantic labels (speech acts combined with associated attributes) from DSTC 4; the semantic labels are italicized.", "If we do not know that the current speaker is Guide, we cannot easily assess the relative importance of the nearest histories of the two speakers.", "We believe that the proposed speaker indicator can help our model to identify such information.", "In experiments on the Dialog State Tracking Challenge 4 (DSTC 4) dataset, the proposed models achieved significantly higher accuracy than the state-of-the-art contextual models for SLU.", "We also examine in detail how the proposed methods affect SLU accuracy.", "The results show that the proposed methods were each individually effective in improving SLU accuracy.", "Our contributions are as follows: We propose a decay-function-free time-aware attention model that automatically learns the latent time-decay function of the history without a manual time-decay function.", "The proposed model achieves a new state-of-the-art F1 score.", "We propose a current-speaker modeling method that uses a speaker indicator to identify the current speaker.", "We present how to incorporate the speaker indicator into the proposed attention models, including one aware of content as well as time, which also achieved a higher F1 score than the state-of-the-art contextual models.", "We analyze the effectiveness of the proposed methods in detail.", "Our source code to reproduce the experimental results is available at https://github.com/jgkimi/Decay-Function-Free-Time-Aware .", "Joint semantic frame parsing has the goal of learning intent prediction and slot filling jointly.", "Through joint learning, the model learns their shared features, and this ability is expected to improve the accuracy on both tasks.", "A model based on bidirectional LSTM for joint semantic frame parsing (Hakkani-Tur et al., 2016) is trained on the two tasks in sequence, by adding an intent label to the output of the final time-step of the LSTM.", "Similarly, an attention-based LSTM predicts slot tags for each time-step, then feeds the hidden vectors and their soft-aligned vectors to a fully connected layer for intent prediction (Liu and Lane, 2016).", "Knowledge-guided joint semantic frame parsing (Chen et al., 2016b) incorporates syntax- or semantics-level parsing information into a model by using a recurrent neural network (RNN) for joint semantic frame parsing.", "Other research on SLU uses context information.", "A model based on a support vector machine and a hidden Markov model uses contextual information to show the importance of contextual information in the SLU tasks of intent prediction and slot detection (Bhargava et al., 2013).", "RNN-based models can exploit context to classify domains (Xu and Sarikaya, 2014), and have been combined with previously estimated intent and slot labels to predict domain and intent (Shi et al., 2015).", "A memory network that contains historic utterance vectors encoded by an RNN has been used to select the most relevant history vector by multiplicative soft-attention (Chen et al., 2016a); the selected vector is fed to an RNN-based slot tagger as context information.", "A memory network can be regarded as using content-based similarity between the current utterance and previous utterances.", "A memory network can be separated to capture historic utterances for each speaker independently (Kim et al., 2017), and a contextual model can use different LSTM layers to separately encode a history summary for each speaker (Chi et al., 2017).
For another task, addressee and response selection in multi-party conversations, a distinct RNN-based encoder for each speaker role (sender, addressee, or observer) has been used to generate distinct history summaries (Zhang et al., 2018).", "Recent work on contextual SLU has introduced time information of contexts into models because content-based attention may make wrong choices that introduce noise.", "The reciprocal of the temporal distance between the current utterance and a context can be used as a time-decay function, and the function can be decomposed into trainable parameters (Chen et al., 2017).", "Similarly, a universal time-aware attention model (Su et al., 2018a) has been proposed; it is a trainable linear combination of three distinct (convex, linear, and concave) time-decay functions.", "An extension of this model is a context-sensitive time-decay attention (Su et al., 2018b) that generates its parameters from the current utterance by using a fully connected layer, so that the content information of the current utterance is also considered in the attention.", "We propose a time-aware model that includes a speaker indicator (Figure 2).", "In addition, we propose a content-and-time-aware model that includes a speaker indicator.", "The models are trained in an end-to-end way, in which every model parameter is automatically learned based on a downstream SLU task.", "The objective of the proposed models is to optimize the conditional probability of the SLU labels given the current utterance, p(y | x), by minimizing the cross-entropy loss.", "To select salient parts of contextual histories, the current utterance is used.", "To summarize a current utterance matrix U that consists of words w_i (i.e., U = {w_1, w_2, ..., w_T}) as a vector, U is fed to bidirectional LSTMs, and the final hidden vector h_T ∈ ℝ^dim is taken as the current utterance summary.", "In this subsection, we introduce a decay-function-free time-aware attention model.", "To summarize contexts, we use the time difference (distance) between a historic utterance and the current utterance; this distance represents the interval between the historic utterance and the current utterance.", "We use the distance of the t-th history from the current utterance as an index to select a dense vector from a distance-embedding matrix D ∈ ℝ^(dim × |D|), and then use that vector as the t-th distance vector d_t.", "To compute the importance α_t of the t-th history, both in the sentence-level attention and in the role-level attention, our time-aware attention uses the current utterance summary h_T and the history distance d_t simultaneously and additively: α_t = w_att⊤ tanh(h_T + d_t + b_att), (1) where w_att⊤ is the transpose of a trainable weight vector for the attention, b_att is a trainable bias vector for the attention, and tanh is the hyperbolic tangent function.", "Computing a time-aware context summary vector s_time^hist depends on whether the role-level or the sentence-level attention is considered.", "For the role-level attention, we apply the softmax operation to all α_t of the same speaker, either a guide or a tourist, to obtain a role-level probabilistic importance α_t^role of the t-th history.", "We then multiply α_t^role by the t-th history vector, which is a concatenation of the corresponding intent dense vector u_t and the distance vector d_t.
We use the element-wise sum of the vectors of the same speaker to construct two summary vectors s_time^guide and s_time^tourist.", "Finally, s_time^guide and s_time^tourist are concatenated to form a time-aware history summary vector s_time^hist as: α_t^role = softmax_role(α_t), (2) s_time^role = Σ_t α_t^role (u_t ⊕ d_t), (3) s_time^hist = s_time^guide ⊕ s_time^tourist, (4) where ⊕ represents a concatenation operation.", "For the sentence-level attention, to obtain a sentence-level probabilistic importance α_t^sent of the t-th history, we apply the softmax operation to all α_t regardless of the speaker, then multiply α_t^sent by the t-th history vector, which is a concatenation of the corresponding intent dense vector u_t and the distance vector d_t.", "We use the element-wise sum of the vectors to construct a time-aware summary vector s_time^hist as: α_t^sent = softmax_sent(α_t), (5) s_time^hist = Σ_t α_t^sent (u_t ⊕ d_t). (6)", "Then, s_time^hist is used as the context summary s^hist in the prediction step.", "Although a time-aware attention model is powerful by itself, content can be considered at the same time to improve accuracy.", "We propose another contextual attention model that is aware of content in addition to time.", "This model is called content-and-time-aware attention.", "The model uses an importance value β_t for the t-th history.", "To compute β_t, we use the trainable parameters w_att and b_att of the time attention as: β_t = w_att⊤ tanh(h_T + u_t + b_att), (7) where u_t is the intent dense vector of the t-th history, and tanh is the hyperbolic tangent function.", "Then, β_t is used in the same way as α_t, but independently.", "While s_time^hist is computed as in the previous subsection, β_t is used to compute s_cont^hist for the role-level attention as: β_t^role = softmax_role(β_t), (8) s_cont^role = Σ_t β_t^role (u_t ⊕ d_t), (9) s_cont^hist = s_cont^guide ⊕ s_cont^tourist. (10)", "To compute s_cont^hist for the sentence-level attention, β_t is used as: β_t^sent = softmax_sent(β_t), (11) s_cont^hist = Σ_t β_t^sent (u_t ⊕ d_t). (12)", "Finally, the time-aware history summary s_time^hist and the content-aware history summary s_cont^hist are concatenated to generate a history summary s^hist regardless of the attention level: s^hist = s_time^hist ⊕ s_cont^hist. (13)", "The speaker indicator is a trainable vector s_cur ∈ ℝ^dim that indicates the identity of the current speaker, i.e., either a tourist or a guide in DSTC 4.", "An embedding lookup method is used after a speaker embedding matrix S ∈ ℝ^(dim × |S|) is defined.", "The speaker embedding matrix is randomly initialized before the model is trained.", "To incorporate the speaker indicator into the attentions, Eq. 1 is rewritten as: α_t = w_att⊤ tanh(h_T + d_t + s_cur + b_att), (14) and Eq. 7 is rewritten as: β_t = w_att⊤ tanh(h_T + u_t + s_cur + b_att). (15)", "To predict the true label in spoken language understanding, our model consumes the current utterance U again.", "We use another bidirectional LSTM layer, which is distinct from that of the current utterance summary.", "To prepare the t-th input v_t of the LSTM layer, we concatenate the t-th word vector w_t of the current utterance U with the history summary vector s^hist: v_t = w_t ⊕ s^hist.", "Then, we feed each v_t to the LSTM layer sequentially, and the final hidden vector of the LSTM layer is used as the input of a feed-forward layer to predict the true label y.", "To test the proposed models, we conducted language-understanding experiments on a dataset of human-human conversations.
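To make the attention computations above concrete, here is a minimal NumPy sketch of the sentence-level decay-function-free time-aware attention (Eqs. 1, 5, and 6), with the speaker indicator of Eq. 14 added; all shapes and the random initialisation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, T = 128, 7                      # hidden size and context length

h_T   = rng.normal(size=dim)         # current-utterance summary (final BiLSTM state)
U     = rng.normal(size=(T, dim))    # intent dense vectors u_t of the T histories
D     = rng.normal(size=(T, dim))    # distance vectors d_t looked up by time offset
s_cur = rng.normal(size=dim)         # speaker indicator for the current speaker
w_att = rng.normal(size=dim)
b_att = rng.normal(size=dim)

# Eq. 1 / Eq. 14: alpha_t = w_att^T tanh(h_T + d_t (+ s_cur) + b_att)
alpha = np.array([w_att @ np.tanh(h_T + D[t] + s_cur + b_att) for t in range(T)])

# Eq. 5: sentence-level softmax over all histories
alpha_sent = np.exp(alpha - alpha.max())
alpha_sent /= alpha_sent.sum()

# Eq. 6: history summary = weighted sum of the concatenations [u_t ; d_t]
s_hist_time = sum(alpha_sent[t] * np.concatenate([U[t], D[t]]) for t in range(T))
print(s_hist_time.shape)             # (2 * dim,)
```

The role-level variant differs only in applying the softmax separately over the histories of each speaker and concatenating the two resulting summaries.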
We conducted experiments on the DSTC 4 dataset, which consists of 35 dialogue sessions on touristic information for Singapore; they were collected from Skype calls between three tour guides and 35 tourists.", "The 35 dialogue sessions total 21 h, and include 31,034 utterances and 273,580 words (Kim et al., 2016).", "DSTC 4 is a suitable benchmark dataset for evaluation because all of the dialogues have been manually transcribed and annotated with speech acts and semantic labels at each turn level; a semantic label consists of a speech act and associated attribute(s).", "The speaker information (guide or tourist) is also provided.", "Human-human dialogues contain rich and complex human behaviors and bring much difficulty to all tasks involved in SLU.", "We used the same training, validation, and test sets as in the DSTC 4 competition: 14 dialogues as the training set, 6 dialogues as the validation set, and 9 dialogues as the test set.", "We used Adam (Kingma and Ba, 2015) as the optimizer in training the model.", "We set the batch size to 256 and used pretrained 200-dimensional GloVe word embeddings (Pennington et al., 2014).", "We applied 30 training epochs with early stopping.", "We set the size dim of every hidden layer to 128, and the context length to 7.", "We used the ground-truth intents (semantic labels) to form the intent dense vectors, as in previous work.", "To evaluate SLU accuracy, we used the F1 score, which is the harmonic mean of precision and recall.", "To validate the significance of improvements, we used a one-tailed t-test.", "We ran each model ten times and report the average scores.", "As baseline models, we used the state-of-the-art contextual models and the most accurate participant of DSTC 4 (DSTC 4 Best) (Kim et al., 2016).", "For comparison with our models, we used the F1 scores reported in the respective papers.", "The exceptions are Su et al. (2018a) and Su et al. (2018b), who specified that they used training/validation/test sets randomly selected from the whole DSTC 4 data with different rates; we therefore do not use their reported scores in our comparison, but produced the results under the same conditions by using their open-source code.", "We ran three additional baseline models in which the prediction stage is the same: (1) No Context uses no context summary; (2) LSTM-Used Context Summary without Attention uses the context summary of a bidirectional LSTM without an attention mechanism; and (3) LSTM-Used Content-Aware Attention uses the context summary of a bidirectional LSTM after content-aware attention is applied to histories, as in previous approaches.", "We conducted an experiment to compare the proposed models with the baseline models in SLU accuracy (Table 1).", "All of the proposed models achieved significant improvements compared to all the baseline models.", "We conducted an experiment to identify in detail how possible combinations of the proposed methods affect SLU accuracy (Table 2).", "In addition to the combinations of the proposed methods, we tested another content-and-time-aware attention method (Content x Time), which computes attention values using both intent and distance at a time and shares the values, to compare with the proposed content-and-time-aware attention.
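The significance protocol just described (ten runs per model, one-tailed t-test over F1 scores) can be reproduced in a few lines of SciPy; the score arrays below are invented placeholders, not the reported results.

```python
from scipy import stats

# Hypothetical F1 scores from ten runs of two models (placeholders only).
proposed = [76.4, 76.7, 76.5, 76.6, 76.8, 76.3, 76.6, 76.5, 76.7, 76.5]
baseline = [75.9, 76.0, 75.8, 76.1, 75.9, 76.0, 75.7, 76.0, 75.9, 75.8]

# ttest_ind is two-tailed by default; halving p gives the one-tailed test,
# which is valid here because the proposed mean is the larger one.
t, p_two = stats.ttest_ind(proposed, baseline)
print(f"t = {t:.2f}, one-tailed p = {p_two / 2:.2e}")
```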
In the first subsection, we analyze the effectiveness of the decay-function-free time-aware attention and the decay-function-free content-and-time-aware attention by comparison with the other models.", "In the next subsection, we analyze the effectiveness of the proposed methods in their possible combinations.", "We also analyze the effectiveness of the use of a distance vector in the history representation under various conditions.", "Finally, we qualitatively analyze the attention weights of the proposed models to demonstrate their effectiveness.", "We also conducted an experiment to show the effectiveness of the use of a distance vector in the history representation under the same condition as in the role-level attention (Table 3).", "Although we propose to use both intent and distance by concatenating them as the history representation, intent can be used alone; this approach is more intuitive than using both intent and distance.", "In Table 1, Decay-Function-Free Time-Aware Attention and Decay-Function-Free Content-and-Time-Aware Attention achieved significantly higher F1 scores than all baseline models.", "In particular, the role-level Decay-Function-Free Time-Aware Attention with the speaker indicator achieved an F1 score of 76.56% (row 11), which is a state-of-the-art SLU accuracy.", "The proposed methods had good SLU accuracy (Table 2).", "Every time-aware attention with and without the speaker indicator (rows 4 to 11) improved the F1 score compared to the content-aware attention with and without the speaker indicator (rows 2 and 3) and to no attention (row 1).", "This result means that the proposed time-aware attention was effective in improving SLU accuracy.", "None of the content-and-time-aware attentions with or without the speaker indicator (rows 6 to 11) improved the F1 score compared to the time-aware attention with and without the speaker indicator (rows 4 and 5).", "This result means that incorporating content could not further improve the accuracy.", "Also, without the speaker indicator, all the time-aware attentions (rows 4, 6, and 10) achieved similar F1 scores.", "The use of the speaker indicator also showed clear tendencies.", "It did not significantly improve the SLU accuracy of Decay-Function-Free Content-Aware Attention (rows 2 and 3) or Decay-Function-Free Inseparate Content-and-Time-Aware Attention (rows 10 and 11), but did improve the accuracy of the proposed models, Decay-Function-Free Time-Aware Attention (rows 4 and 5) and Decay-Function-Free Content-and-Time-Aware Attention (rows 6 to 9).", "Decay-Function-Free Content-and-Time-Aware Attention with the speaker indicator (rows 7 to 9) was more accurate than Decay-Function-Free Inseparate Content-and-Time-Aware Attention with the speaker indicator (row 11).", "This result means that, when using the speaker indicator, the separation of content and time improved the accuracy.", "The improvement at the role level tended to be greater than that at the sentence level.", "The improvement was greatest when the speaker indicator was involved in the proposed role-level Decay-Function-Free Time-Aware Attention (row 5).", "In all models, the use of both intent and distance vectors achieved significantly higher F1 than the use of an intent vector only (Table 3).
The results indicate that distance embeddings are helpful both for the attention and for the history representation.", "Decay-Function-Free Time-Aware Attention achieved the biggest improvement (row 5) among all the models.", "To assess whether the proposed time-aware attention and speaker indicator can learn the time-decay tendency of the history effectively, we inspected the weights trained in Decay-Function-Free Time-Aware Attention with and without the speaker indicator.", "We also inspected Decay-Function-Free Content-Aware Attention for comparison.", "We observed (Figure 3) that the weights of the proposed models were trained well compared to Decay-Function-Free Content-Aware Attention.", "The proposed time-aware attention with/without the speaker indicator tended to pay attention to recent histories, whereas the content-aware attention did not.", "As a result, Decay-Function-Free Time-Aware Attention with the speaker indicator could generate the true label, QST-RECOMMEND, by avoiding noisy contextual information like uh I'm staying there (FOL-EXPLAIN) or ... uh we do not encourage littering uh anywhere in the public area (FOL-INFO).", "In this paper, we propose decay-function-free time-aware attention models for SLU.", "These models summarize contextual information by taking advantage of temporal information without a manual time-decay function.", "We also propose a current-speaker modeling method, using a speaker indicator, that identifies the current speaker.", "In experiments on the DSTC 4 benchmark dataset, the proposed models achieved state-of-the-art SLU accuracy.", "Detailed analysis of the effectiveness of the proposed methods demonstrated that they increase the accuracy of SLU individually.", "We would like to thank the reviewers for their insightful and constructive comments on this paper." ]
[ "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "result", "objective", "abstain", "objective", "abstain", "abstain", "other" ]
[ "We introduce CaMEL ( Ca se M arker E xtraction without L abels), a novel and challenging task in computational morphology that is especially relevant for low-resource languages.", "We propose a first model for CaMEL that uses a massively multilingual corpus to extract case markers in 83 languages based only on a noun phrase chunker and an alignment system.", "To evaluate CaMEL, we automatically construct a silver standard from UniMorph.", "The case markers extracted by our model can be used to detect and visualise similarities and differences between the case systems of different languages as well as to annotate fine-grained deep cases in languages in which they are not overtly marked.", "What is a case?", "Linguistic scholarship has shown that there is an intimate relationship between morphological case marking on the one hand and semantic content on the other (see Blake (1994) and Grimm (2011) for overviews).", "For example, the Latin case marker -ibus 1 (Ablative or Dative Plural) can express the semantic category of location.", "It has been observed that there is a small number of such semantic categories frequently found cross-linguistically (Fillmore, 1968; Jakobson, 1984), which are variously called case roles or deep cases .", "Semiotically, the described situation is complicated by the fact that the relationship between case markers and expressed semantic categories is seldom isomorphic, i.e., there is both case polysemy (one case, several meanings) and case homonymy or case syncretism (several cases, one marker) (Baer-man, 2009).", "As illustrated in Figure 1, the Latin Ablative marker -ibus can express the semantic 1 In this paper, we use italic when talking about case markers as morphemes in a linguistic context and monospace (accompanied by $ to mark word boundaries) when talking about case markers in the context of our model.", "Transliterations of Cyrillic examples are given after slashes.", "category of instrument besides location (case poly-semy), and it is also the marker of the Dative Plural expressing a recipient (case syncretism).", "In addition, there is case synonymy (one case, several markers), which further complicates morphosemi-otics; e.g., in Latin, -is is an alternative marker of the Ablative Plural.", "The key idea of this paper is to detect such complex correspondences between case markers and expressed semantic categories in an automated way.", "Specifically, we build on prior work by Cysouw (2014), who lays the theoretical foundation for our study by showing that deep cases can be induced from cross-linguistic usage patterns of case markers.", "As opposed to Latin, Russian has separate cases (with separate case markers) for the semantic categories of instrument ( / -ami ), location ( / -ax ), and recipient ( / -am ).", "Thus, knowing the Russian case marker corresponding to Latin -ibus reduces the uncertainty about the expressed 5506 case role (Figure 1).", "This reduction of uncertainty can be particularly helpful in a low-resource setting where other means of analysis are unavailable.", "In this work, we rely on the Parallel Bible Corpus (PBC; Mayer and Cysouw, 2014), a massively multilingual corpus, to investigate the relationship between surface cases and their deep meanings cross-linguistically.", "To put our idea into practice, we require an exhaustive set of case markers as well as a set of parallel noun phrases (NPs) that we can further analyze with respect to deep cases using the set of case markers.", "Both requirements pose a serious challenge 
We therefore introduce CaMEL (Case Marker Extraction without Labels), a novel and challenging task of finding case markers using only (i) a highly parallel corpus covering many languages, (ii) a noun phrase chunker for English, and (iii) word-level pre-computed alignments across languages.", "Our work uses the parallel nature of the data in two ways.", "First, we leverage the word-level alignments for the initial step of our pipeline, i.e., the marking of NPs in all languages (even where no noun phrase chunker is available).", "To do so, we mark NPs in 23 different English versions of the Bible and project these annotations from each English to each non-English version using the word-level alignments, resulting in parallel NPs that express the same semantic content across 83 languages.", "Based on the projected annotations, we leverage the frequencies of potential case markers inside and outside of NPs as a filter to distinguish case markers from lexical morphemes and other grammatical morphemes typically found outside of NPs.", "We make three main contributions.", "We define CaMEL (Case Marker Extraction without Labels), a new and challenging task with high potential for automated linguistic analysis of cases and their meanings in a multilingual setting.", "We propose a simple method for CaMEL that is efficient, requires no training, and generalises well to low-resource languages.", "We automatically construct a silver standard based on human-annotated data and evaluate our method against it, achieving an F1 of 45%.", "To foster future research on CaMEL, we make the silver standard, our code, and the extracted case markers publicly available at https://github.com/LeonieWeissweiler/CaMEL .", "Unsupervised morphology induction has long been a topic of central interest in natural language processing (Yarowsky and Wicentowski, 2000; Goldsmith, 2001; Schone and Jurafsky, 2001; Creutz and Lagus, 2002; Hammarström and Borin, 2011).", "Recently, unsupervised inflectional paradigm learning has attracted particular interest in the research community (Erdmann et al., 2020; Jin et al., 2020), reflected also by a shared task devoted to the issue (Kann et al., 2020).", "Our work markedly differs from this line of work in that we operate on the level of case markers, not full paradigms, and in that we induce morphological structure in a massively multilingual setting.", "There have also been studies on extracting grammatical information from text by using dependency parsers (Chaudhary et al., 2020; Pratapa et al., 2021) and by automatically glossing text (Zhao et al., 2020; Samardžić et al., 2015) as well as compiling full morphological paradigms from it (Moeller et al., 2020).", "By contrast, our method is independent of such annotation schemata, and it is also simpler as it does not aim at generating full grammatical or morphological descriptions of the languages examined.", "There has been cross-lingual work in computational morphology before (Snyder and Barzilay, 2008; Cotterell and Heigold, 2017; Malaviya et al., 2018), but not with the objective of inducing inflectional case markers.", "Methodologically, our work is most closely related to the SuperPivot model presented by Asgari and Schütze (2017), who investigate the typology of tense in 1,000 languages from the Parallel Bible Corpus (PBC; Mayer and Cysouw, 2014) by projecting tense information from languages that overtly mark it to languages that do not.", "Based on this, Asgari and Schütze (2017) perform a typological analysis of tense systems in which they use different combinations of tense markers to further divide a single tense in any given language.
Our work differs in a number of important ways.", "First, we do not manually select a feature to investigate but model all features in our chosen sphere of interest (i.e., case) at once.", "Furthermore, we have access to word-level rather than verse-level alignments and can thus make statements at a more detailed resolution (i.e., about individual NPs).", "Finally, we extract features not only for a small selection of pivot languages, but even for languages that do not mark case overtly, i.e., in a way that deviates to a large degree from a simple 1:1 mapping (see the discussion in Section 1).", "There is an ongoing discussion in linguistic typology about the extent to which syntactic categories are shared and can be compared between the world's languages (see Hartmann et al. (2014) for an overview).", "While this issue is far from being settled, there is a general consensus that (while not being a language universal) there is a core of semantic categories that are systematically found cross-linguistically, and that are expressed as morphosyntactic case in many languages.", "Here, we adopt this assumption without any theoretical commitment, drawing upon a minimal set of deep cases detailed in Table 1.", "The set is loosely based on the classical approach presented by Fillmore (1968).", "Going beyond deep cases, Cysouw (2014) envisages a more fine-grained analysis of what is conventionally clustered in a deep case or semantic role.", "Briefly summarised, the theoretical concept is this: if every language has a slightly different case system, then with enough languages it should be possible to divide and cluster NPs at any desired level of granularity, from the conventional case system down to a specific usage of a particular verb in conjunction with only a small set of nouns.", "For example, the semantic category of location could be further subdivided into specific types of spatial relationships such as 'within', 'over', and 'under'.", "Taken together, it would then be possible to perform theory-agnostic typological analysis of case-like systems across truly divergent and low-resource languages by simply describing any language's case system in terms of its clustering of very fine-grained semantic roles into larger systems that are overtly marked.", "The approach sketched in the last paragraph is not limited to case systems but has been applied to person marking (Cysouw, 2008), the causative/inchoative alternation (Cysouw, 2010), and motion verbs (Wälchli and Cysouw, 2012).", "The variety of linguistic application areas highlights the potential of developing methods that are much more automated than the work of Cysouw and collaborators.", "While we stay at the level of traditional deep cases in this paper, we hope to be able to extend our method in the direction of a more general analysis tool in the future.", "The remainder of the paper is structured as follows.", "Section 4 describes our method in detail.", "Section 5 gives an overview of our results.", "Finally, Section 6 presents two exploratory analyses.", "We work with the subset of the PBC (Mayer and Cysouw, 2014) for which the SimAlign alignment algorithm (Jalili Sabet et al., 2020) is available, resulting in 87 languages for our analysis.", "From the corpus, we only extract those verses that are available in all languages, thus providing for a relatively fair comparison, and remove Malagasy, Georgian, Breton, and Korean, as they have much lower coverage than the other languages.
This leaves us with 83 languages and 6,045 verses as our dataset.", "We also select 23 English versions from the PBC that cover the same 6,045 verses.", "For each of the 6,045 verses, we then compute 83 × 23 = 1,909 verse alignments: 83 (one for each language) multiplied by 23 (one for each English version).", "In the following, we will describe the components of our pipeline (Figure 2).", "Because our intermediate goal is to induce complete lists of case markers in all languages we cover, the first step is to restrict the scope of our search to NPs.", "We hope that this will allow us to retrieve case markers for nouns and adjectives while disregarding verb endings that might otherwise have similar distributional properties.", "As we are working with 83 languages, most of which are low-resource and lack high-quality noun phrase chunkers, we first identify NPs in English using the spaCy noun phrase chunker (Honnibal et al., 2020) and then project this annotation using the alignments to mark NPs in all other languages.", "The exceptions to this are German and Norwegian Bokmål, for which noun phrase chunkers are available directly in spaCy.", "Because both the spaCy noun phrase chunker and the alignments are prone to error, we make use of 23 distinct English versions.", "Table 1: The minimal set of deep cases with descriptions and examples (e.g., Nominative: the subject of the sentence, as in He is the Messiah!).", "We project the NP annotation of a given English version to a second language using the alignments.", "Specifically, we find the NP in the target language by following the alignments from all words in the English NP while maintaining the word order of the target sentence.", "We treat each annotated version of the corpus resulting from the different English versions as a separate data source.", "As an example, Figure 3 shows two English versions and the NP projections for Latin and German.", "While the alignments, particularly those from English to Latin, are not perfect, they result in complementary errors.", "The first version wrongly aligns the first mention of pastor bonus, resulting in only pastor being marked as an NP.", "The second misses the alignment of life and animam.", "There are two major results of this process.", "First, we obtain the set N of all NPs marked in English, each with all of its translations in the other languages.", "An example of an entry in this set, taken from Figure 3, would be (the fine shepherd, pastor bonus, der vortreffliche Hirte, ...), while (the fine shepherd, pastor, der vortreffliche Hirte, ...) would be another, slightly defective, example.", "Second, we obtain a pair of multisets, W^l_in and W^l_out, one for each language l.", "W^l_in (resp. W^l_out) is the multiset of all word tokens that appear inside (resp. outside) of NPs of language l.
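A minimal sketch of the projection step described above, under the assumption that an alignment is given as a set of (source index, target index) pairs; the toy sentence and alignment are invented for illustration.

```python
def project_np(np_span, alignment, tgt_tokens):
    """Project an English NP (token index range) through a word alignment.

    np_span:    (start, end) token indices of the NP in the English sentence
    alignment:  iterable of (src_idx, tgt_idx) word-alignment pairs
    tgt_tokens: tokens of the target-language sentence
    """
    tgt_idx = sorted({j for i, j in alignment if np_span[0] <= i < np_span[1]})
    return [tgt_tokens[j] for j in tgt_idx]  # maintains target word order

# Toy example: "the fine shepherd" (tokens 0-2) projected into Latin.
alignment = [(0, 1), (1, 1), (2, 0)]          # the/fine -> bonus, shepherd -> pastor
print(project_np((0, 3), alignment, ["pastor", "bonus"]))  # ['pastor', 'bonus']
```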
In the following, we will use M(w) to refer to the frequency of word w in the multiset M.", "For each language, we want to remove false positives from the word types contained within NPs (which are an artefact of wrong alignments) by using the frequency of each word type inside and outside of NPs.", "In principle, this could be done by means of a POS tagger, concentrating on nouns, adjectives, articles, prepositions, and postpositions, but as we do not have access to a reliable POS tagger for most of the languages covered here, we use the relative frequency information gained from our NP annotations.", "More specifically, we assign each word type w ∈ W^l_in ∪ W^l_out to I^l (the set of words of language l that are NP-relevant) if |W^l_in(w)| > |W^l_out(w)|, and to O^l (the set of words of language l that are not NP-relevant) otherwise.", "This enhances the robustness of our method against occasional mis-annotations: for Latin, ovibus 'sheep', from our previous example, occurred once outside an NP but 45 times inside and is now an element of I^Latin, while intellegent 'they understand' occurred once inside an NP but 22 times outside and is therefore an element of O^Latin.", "For each language, we create a set of candidate case markers candidates(w) for each word w that is a member of I^l by collecting all character n-grams of any length from w.", "We explicitly mark the word boundaries with $ so that n-grams in the middle of words are distinct from those at the edges.", "For example, candidates extracted from ovibus would be $ovi, ibus$, but also $ovibus$ and i.", "Our first candidate set is computed as C^l_1 = ∪ {candidates(w) | w ∈ I^l}.", "We define I^l(c) as the number of words in I^l that contain the candidate c, and O^l(c) analogously for O^l.", "As a first step, we filter out all n-grams with a frequency in I^l lower than a threshold θ, which we set to 97 based on grid search.", "This results in C^l_2 = {c | c ∈ C^l_1, I^l(c) ≥ θ}.", "For the next step, we make use of the observation that case is a property of nouns.", "Hence, a case marker is expected to occur much more frequently within NPs.", "This will serve to distinguish the case markers from verb inflection markers, which should otherwise have similar distributional properties.
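The inside/outside split and the candidate generation map directly onto a few lines of Python; the toy counts below reuse the ovibus/intellegent example, and, for brevity, the sketch only generates the end-anchored n-grams that survive the suffix restriction of Section 4.6.

```python
from collections import Counter

def split_np_relevant(w_in: Counter, w_out: Counter):
    """Assign each word type to I (NP-relevant) or O, per its in/out frequencies."""
    vocab = set(w_in) | set(w_out)
    I = {w for w in vocab if w_in[w] > w_out[w]}
    return I, vocab - I

def candidates(word: str):
    """End-anchored character n-grams, with $ marking the word boundary."""
    w = word + "$"
    return {w[i:] for i in range(len(word))}   # e.g. 'ibus$', 'bus$', ..., 's$'

w_in  = Counter({"ovibus": 45, "intellegent": 1})
w_out = Counter({"ovibus": 1,  "intellegent": 22})
I, O = split_np_relevant(w_in, w_out)          # I = {'ovibus'}, O = {'intellegent'}
C1 = set().union(*(candidates(w) for w in I))  # first candidate set C_1
print(sorted(C1))
```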
To implement this basic idea, for each candidate c in language l, we first construct the contingency table shown in Table 2.", "We use the table to test whether a candidate is more or less likely to appear inside NPs by comparing the frequencies of the candidate inside and outside NPs to those of all other candidates.", "Shown in the cells are the frequencies used for the test for each candidate.", "The columns correspond to the frequency of the candidate in question versus all other candidates, while the rows distinguish the frequencies inside versus outside NPs.", "We carry out a Fisher's Exact Test (Fisher, 1922) on this table, which gives us a p-value and an odds ratio r.", "r < 1 if the candidate is more likely to occur outside an NP, and r > 1 if it is more likely to occur inside.", "The p-value gives us a confidence score to support this ratio (lower is better).", "We keep for C^l_final only those candidates for which p < α and r > 1 (we set α = 0.…).", "For example, ibus$ makes it past this filter with p(ibus$) = 2.869 × 10^-6 and r(ibus$) = 1.915: it is significant, and it occurs inside NPs more often than outside NPs.", "In contrast, t$ is discarded, as it has p(t$) = 3.18 × 10^-149 and r(t$) = 0.249: it is significant, but it has been found to occur much more likely outside than inside NPs.", "Suffixoidal inflection is cross-linguistically more common than prefixoidal and infixoidal inflection (Bauer, 2019).", "This is also reflected in our dataset, where not a single language has prefixoidal or infixoidal inflection.", "We hence restrict the set of considered n-grams to those at the end of words.
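The inside/outside significance filter maps directly onto scipy.stats.fisher_exact; the contingency counts and the α value below are invented placeholders illustrating the filtering rule.

```python
from scipy.stats import fisher_exact

def keep_candidate(c_in, c_out, tot_in, tot_out, alpha=0.05):
    """Fisher's exact test on the 2x2 table of candidate vs. all-other counts.

    Keep the candidate if the association is significant AND it is more
    likely inside NPs (odds ratio > 1). alpha here is an assumed placeholder.
    """
    table = [[c_in, tot_in - c_in],
             [c_out, tot_out - c_out]]
    odds_ratio, p = fisher_exact(table)
    return p < alpha and odds_ratio > 1.0

print(keep_candidate(c_in=120, c_out=40, tot_in=10_000, tot_out=10_000))  # True
```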
We evaluate our method for case marker extraction without labels using a silver standard.", "As we are, to the best of our knowledge, the first to introduce this task, we cannot rely on an existing set of gold case markers for each language we cover.", "As most of the languages included are low-resource, reliable grammatical resources do not always exist, which makes the handcrafting of a gold standard difficult.", "Therefore, and also to ensure relative comparability, we evaluate against a silver standard automatically created from the UniMorph dataset (https://unimorph.github.io; Sylak-Glassman, 2016; Kirov et al., 2018; McCarthy et al., 2020).", "The UniMorph data consists of a list of paradigms, which we first filter by their POS tag, keeping only nouns and adjectives and filtering out verbs and adverbs.", "An example of a paradigm is given in Table 3.", "Table 3: Example of silver standard creation for the paradigm of Abflug, with base Abfl and inflected forms Abflug (N;NOM;SG), Abfluges (N;GEN;SG), Abflug (N;DAT;SG), Abflug (N;ACC;SG), Abflüge (N;NOM;PL), Abflüge (N;GEN;PL), Abflügen (N;DAT;PL), Abflüge (N;ACC;PL).", "While the Nominative Singular (left column) is included in addition to the inflected forms (middle column), the straightforward approach of extracting the suffixes of the inflected forms is not optimal for every language, as the Nominative Singular form can differ from the root.", "We therefore proceed as follows.", "First, we form a multiset of all inflected forms.", "In our example, this would result in { Abflug, Abfluges, Abflug, Abflug, Abflüge, Abflüge, Abflügen, Abflüge }.", "Next, we iterate over this multiset, removing a word each time if it occurs only once.", "This is meant to make the algorithm more robust against outlier words which do not share a common base with the rest of the paradigm.", "We then extract the longest common prefix of the remaining elements.", "We build a frequency list of these prefixes, which in our example has only one element, Abfl, with a frequency of 3.", "We take the most frequent element from the frequency list and compare it to the Nominative Singular, Abflug.", "Of these two candidates, we take the longer one.", "We thereby prioritise precision over recall, as roots that are too short quickly result in many different suffixes that are too long, due to the high overall number of paradigms.", "Finally, we iterate over the inflected forms again, extracting the suffix if the chosen root is a prefix, which in our example yields one new suffix: es$, as Abflüge and Abflügen are not prefixed by Abflug.", "We examine the results for each language and exclude the languages where either basic knowledge of the language or common sense makes it apparent that the sets are much too large or too small, resulting in a diverse set of 19 languages to evaluate our methods against.", "We note that this process automatically excludes adpositions and clitics, which is in line with our focus on suffixoidal inflection (Section 4.6).", "We make our silver standard publicly available.
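A simplified sketch of the silver-standard procedure just described; the os.path.commonprefix shortcut collapses the iterative prefix-frequency step, and edge cases (empty paradigms, paradigms with no shared base) are ignored for brevity.

```python
import os
from collections import Counter

def silver_suffixes(nom_sg, forms):
    """Extract case-marker suffixes from one UniMorph paradigm (simplified)."""
    # Drop forms that occur only once, to be robust against outlier words.
    counts = Counter(forms)
    kept = [f for f in forms if counts[f] > 1]
    # Longest common prefix of the remaining forms, compared with the
    # Nominative Singular: take the longer candidate root (precision first).
    prefix = os.path.commonprefix(kept) if kept else ""
    root = max(prefix, nom_sg, key=len)
    return {f[len(root):] + "$" for f in forms if f.startswith(root) and f != root}

forms = ["Abflug", "Abfluges", "Abflug", "Abflug",
         "Abflüge", "Abflüge", "Abflügen", "Abflüge"]
print(silver_suffixes("Abflug", forms))   # {'es$'}
```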
Our results are provided in Table 4.", "We observe that precision is higher, at times even substantially, than recall for most languages contained in the silver standard.", "Looking at Table 5 as an example, we can see that low precision is mostly due to retrieved case markers being longer (e.g., -enie$) or shorter (e.g., -j$) than the correct ones.", "It is one of the main challenges in this task to select the correct length of a case marker from a series of substring candidates.", "The shorter substrings will automatically be more frequent and often correct, but this is not easily solved by a frequency threshold, which would exclude other correct candidates that are naturally less frequent.", "Additionally, we observe that some recall errors are due to an incorrect length of n-grams in the silver standard (e.g., -'jam), highlighting that this issue also exists in its creation process and suggesting that our performance might even improve when measured against handcrafted data.", "We conduct an ablation study to assess the effects of the different pipeline components.", "In order to evaluate how well our method of projecting NP annotations using alignments works for languages without an available NP chunker (see Section 4.3), we evaluate it against the monolingual spaCy chunkers for Norwegian Bokmål and German, which are the only available languages besides English.", "We do not directly compare annotated spans but instead their influence on our method, as we have intentionally designed our pipeline to be robust to some noise.", "As I^l, the set of words considered to be NP-relevant, is the essential output of the annotation projection, we compare two versions of the set: one resulting from direct NP chunking and one resulting from our annotation procedure.", "Taking the former as the ground truth for evaluating the latter (assuming that the directly chunked set has superior quality), we observe an F1 of 88.5% for German and 67.8% for Norwegian Bokmål.", "While these numbers seem low at first, the fact that our overall F1 on Norwegian Bokmål (.54, see Table 4) is better than on German (.50) indicates that the later elements of the pipeline are to a certain extent robust against misclassification of NPs.", "We report the average Precision, Recall, and F1 across all languages in our silver standard without individual filtering components in Table 6.", "Simple frequency filtering (the θ threshold), excluding n-grams within words (middle), and excluding n-grams at the beginning of words (beginning) are all necessary for good performance.", "Inside/outside filtering based on the p-value is the most important component of the pipeline.", "Surprisingly, inside/outside filtering based on the odds ratio has almost no effect.", "We can use our automatically extracted case markers, in combination with the parallel NPs that are extracted as part of the pipeline, for innovative linguistic analyses.", "We present two examples in this section.", "We return to N (see Section 4.3), our set of parallel NPs extracted from the PBC, and, for a selected subset of languages, group them by their combination of case markers.", "The basic idea is to infer an NP's (potentially very fine-grained) deep case by representing it as its combination of case markers across languages.", "For example, we can disambiguate the Latin case marker -ibus by looking at the different groups that the NPs containing it form with Russian case markers.", "Recall that -ibus can express location, instrument, and recipient, and that Russian expresses these categories by separate case markers: -ах / -ax for location, -ами / -ami for instrument, and -ам / -am for recipient (see Figure 1), all three of which have been retrieved by our method.", "Given a Latin NP marked by the ending -ibus, the parallel NP in Russian can help us determine its deep case.", "Thus, for domibus, дворцах / dvorcax shows that the semantic category is location, i.e., 'in the houses'.", "For operibus bonis, добрыми делами / dobrymi delami shows that the semantic category is instrument, i.e., 'through the good deeds'.", "Finally, for patribus, предкам / predkam shows that the semantic category is recipient, i.e., 'for/to the parents'.", "We also demonstrate how we can use the distributional similarities of case markers over NPs to show how case markers that are similar in this respect correspond to similar combinations of deep cases.", "We first generate an NP-word cooccurrence matrix over the NP vocabulary of all languages, in which each row, corresponding to an inflected word form w in language l, indicates which NPs (corresponding to columns) cooccur with w in the parallel data.", "We then reduce the dimensionality of the matrix by means of t-SNE (Van der Maaten and Hinton, 2008), allowing us to inspect systematic patterns with respect to the contexts in which certain case markers occur (where context refers to words the case marker is aligned to in other languages, not words the case marker cooccurs with in its own language).", "In a semiotic situation like the one shown in Figure 1, this setup allows us to examine how the semantic region expressed by a certain homonymous case marker in one language is split into more fine-grained regions in another language that distinguishes the semantic categories lumped together by the case marker (and which, if they are at the right level of abstraction, can correspond to deep cases).", "Figure 4 shows this scenario for the Latin Ablative marker -ibus: a t-SNE plot of the contextual distribution of the Latin case marker -ibus and the Polish case markers -ach and -om.", "It corresponds to two distinct case markers in Polish, -ach (LOC) and -om (DAT).", "The figure shows that the region occupied by Latin -ibus splits into two distinct clusters in Polish, allowing us to visually determine which underlying case is expressed by the homonymous suffix -ibus.
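The visualisation step can be sketched with scikit-learn; the tiny binary cooccurrence matrix below is a stand-in for the real NP-word matrix, and the perplexity value is an assumption forced by the toy sample size.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Rows: inflected word forms across languages; columns: parallel NPs.
# A cell is 1 if the form occurs in that NP's translations (toy data).
X = (rng.random((30, 100)) < 0.1).astype(float)

# Reduce to 2-D for plotting; perplexity must stay below the sample count.
emb = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(X)
print(emb.shape)  # (30, 2)
```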
We have introduced the new and challenging task of Case Marker Extraction without Labels (CaMEL) and presented a simple and efficient method that leverages cross-lingual alignments and achieves an F1 of 45% on 19 languages.", "We introduce an automatically created silver standard to conduct our evaluation.", "We have further demonstrated two ways in which our retrieved case markers can be used for linguistic analysis.", "We see two potential avenues for future work.", "The first is the further improvement of case marker extraction.", "The main problem to tackle here is that of small sets of overlapping substrings of which only one is the correct marker, and developing further measures by which they can be distinguished.", "Furthermore, it would be useful to find data from more low-resource languages and from languages that have typological properties different from the extensively studied large language families (Indo-European, Turkic, Sino-Tibetan, etc.).", "We could then verify that our method performs well across languages and attempt to expand our silver standard to more languages while still ensuring quality.", "The second area is that of further automating the analysis of deep case and case syncretism.", "Ideally, we would develop a method that can distinguish the different possible reasons for divergent case marking in languages, with the eventual goal of creating a comprehensive overview of case and declension systems for a large number of languages.", "This work was funded by the European Research Council (#740516).", "The second author was also supported by the German Academic Scholarship Foundation and the Arts and Humanities Research Council.", "The third author was also supported by the German Federal Ministry of Education and Research (BMBF, Grant No. 01IS18036A).", "We thank the reviewers for their extremely helpful comments." ]
[ "objective", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "method", "objective", "result", "result", "objective", "objective", "objective", "result", "method", "other", "other", "objective", "other", "objective", "other", "abstain", "other", "objective", "other", "abstain", "abstain", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "abstain", "abstain", "abstain", "result", "abstain", "objective", "other", "other", "other", "other" ]
[ "A large body of research into semantic textual similarity has focused on constructing state-of-the-art embeddings using sophisticated modelling, careful choice of learning signals and many clever tricks.", "By contrast, little attention has been devoted to similarity measures between these embeddings, with cosine similarity being used unquestionably in the majority of cases.", "In this work, we illustrate that for all common word vectors, cosine similarity is essentially equivalent to the Pearson correlation coefficient, which provides some justification for its use.", "We thoroughly characterise cases where Pearson correlation (and thus cosine similarity) is unfit as similarity measure.", "Importantly, we show that Pearson correlation is appropriate for some word vectors but not others.", "When it is not appropriate, we illustrate how common nonparametric rank correlation coefficients can be used instead to significantly improve performance.", "We support our analysis with a series of evaluations on word-level and sentence-level semantic textual similarity benchmarks.", "On the latter, we show that even the simplest averaged word vectors compared by rank correlation easily rival the strongest deep representations compared by cosine similarity.", "Textual embeddings are immensely popular because they help us reason about the abstract and fuzzy notion of semantic similarity in purely geometric terms.", "Distributed representations of words in particular (Bengio et al., 2003; Mikolov et al., 2013a; Pennington et al., 2014; Bojanowski et al., 2017; Joulin et al., 2017) have had a massive im-pact on machine learning (ML), natural language processing (NLP), and information retrieval (IR).", "Recently, much effort has also been directed towards learning representations for larger pieces of text, with methods ranging from clever compositions of word embeddings (Mitchell and Lapata, 2008; De Boom et al., 2016; Arora et al., 2017; Wieting et al., 2016; Wieting and Gimpel, 2018; Zhelezniak et al., 2019) to sophisticated neural architectures (Le and Mikolov, 2014; Kiros et al., 2015; Hill et al., 2016; Conneau et al., 2017; Gan et al., 2017; Tang et al., 2017; Zhelezniak et al., 2018; Subramanian et al., 2018; Pagliardini et al., 2018; Cer et al., 2018).", "Comparatively, there is little research into similarity measures for textual embeddings.", "Despite some investigations into alternatives ( Camacho-Collados et al., 2015; De Boom et al., 2015; Santus et al., 2018; Zhelezniak et al., 2019), cosine similarity has persistently remained the default and unquestioned choice across the field.", "This is partly because cosine similarity is very convenient and easy to understand.", "Sometimes, however, we have to resist what is convenient and instead use what is appropriate.", "The core idea behind our work is to treat each word or sentence embedding as a sample of (e.g. 
300) observations from some scalar random variable.", "Hence, no matter how mysterious word vectors appear to be, just like any samples, they become subject to the full power of traditional statistical analysis.", "We first show that in practice, the widely used cosine similarity is nothing but the Pearson correlation coefficient computed from the paired sample.", "However, Pearson's r is extremely sensitive to even slight departures from normality, where a single outlier can conceal the underlying association.", "For example, we find that Pearson's r (and thus cosine similarity) is acceptable for word2vec and fastText but not for GloVe embeddings.", "Perhaps surprisingly, when we average word vectors to represent sentences, cosine similarity remains acceptable for word2vec, but not for fastText any longer.", "We show that this seemingly counterintuitive behaviour can be predicted by elementary univariate statistics, something that is already well known to researchers and practitioners alike.", "Furthermore, when there are clear indications against cosine similarity, we propose to repurpose rank-based correlation coefficients, such as Spearman's $\rho$ and Kendall's $\tau$, as similarity measures between textual embeddings.", "We support this proposition by a series of experiments on word- and sentence-level semantic textual similarity (STS) tasks.", "Our results confirm that rank-based correlation coefficients are much more effective when the majority of vectors break the assumptions of normality.", "Moreover, we show how even the simplest sentence embeddings (such as averaged word vectors) compared by rank correlation easily rival recent deep representations compared by cosine similarity.", "At the heart of our work is a simple statistical analysis of pre-trained word embeddings and exploration of various correlation coefficients as proxies for semantic textual similarity.", "Hence, any research that combines word embeddings with tools from probability and statistics is relevant.", "Of course, word embeddings themselves are typically obtained as the learned parameters of statistical machine learning models.", "These models can be trained on large corpora of text to predict a word from its context or vice versa (Mikolov et al., 2013a).", "Alternatively, there are also supervised approaches (Wieting et al., 2015, 2016; Wieting and Gimpel, 2017, 2018).", "A different line of research tries to move away from learning word embeddings as point estimates and instead model words as parametric densities (Vilnis and McCallum, 2014; Barkan, 2017; Athiwaratkun and Wilson, 2017).", "These approaches are quite appealing because they incorporate semantic uncertainty directly into the representations.", "Of course, such representations need to be learned explicitly.", "In some cases one could estimate the densities even for off-the-shelf embeddings, but this still requires access to the training data and the usefulness of such post-factum densities is limited (Vilnis and McCallum, 2014).", "In other words, these approaches are not very helpful to practitioners who are accustomed to using high-quality pre-trained word embeddings directly.", "Arguably, statistical analysis of pre-trained word embeddings is not as principled as applying a probabilistic treatment end-to-end.", "Any such analysis, however, is very valuable as it provides insights and justifications for methods that are already in widespread use.", "For example, removing the common mean vector and a few top principal components
makes embeddings even stronger and is now a common practice (Mu and Viswanath, 2018; Arora et al., 2016, 2017; Ethayarajh, 2018).", "These works view word embeddings as observations from some $D$-dimensional distribution; such treatment is naturally suitable for studying the overall geometry of the embedding space.", "We, on the other hand, are interested in studying the similarities between individual word vectors and require a completely different perspective.", "To this end, we see each word embedding itself as a sample of $D$ observations from a scalar random variable.", "It is precisely this shift in perspective that allows us to reason about semantic similarity in terms of correlations between random variables and make the connection to the widely used cosine similarity.", "Finally, we propose using rank-based correlation coefficients when cosine similarity is not appropriate.", "Recently, Santus et al. (2018) introduced a rank-based similarity measure for word embeddings, called APSynP, and demonstrated its efficacy on outlier detection tasks.", "However, the results on the word-level similarity benchmarks were mixed, which, interestingly enough, could have been predicted in advance by our analysis.", "Suppose we have a vocabulary of $N$ words $V = \{w_1, w_2, \ldots, w_N\}$ and the word embeddings matrix $\mathbf{W} \in \mathbb{R}^{N \times D}$, where each row $\mathbf{w}^{(i)}$ for $i = 1, \ldots, N$ is a $D$-dimensional word vector.", "Popular pre-trained embeddings in practice typically have dimension $D = 300$, while the vocabulary size $N$ can range from thousands to millions of words.", "We now consider the following: what kinds of statistical analyses can we apply to $\mathbf{W}$ in order to model semantic similarity between words?", "One option is to view all word embeddings $\mathbf{w}^{(1)}, \mathbf{w}^{(2)}, \ldots, \mathbf{w}^{(N)}$ as a sample of $N$ observations from some $D$-variate distribution $P(E_1, \ldots, E_D)$.", "For example, we can fit a Gaussian and study how all 300 dimensions correlate with each other.", "Perhaps we can fit a mixture model and see how the embeddings cluster.", "[Figure 1: Normalised histograms of the mean distribution for three commonly used word embedding models: GloVe (Pennington et al., 2014), fastText (Bojanowski et al., 2017), and word2vec (Mikolov et al., 2013b,c).]", "We could also normalise them and study their distribution on the unit sphere.", "It is clear by now that $P(E_1, \ldots, E_D)$ is suitable for describing the overall geometry of the embedding space but is not very useful for our goals.", "If we are to reason about similarities between individual word vectors, we should instead be looking at the transpose of $\mathbf{W}$.", "Putting it differently, we see $\mathbf{W}^{T}$ as a sample of $D$ observations from an $N$-variate distribution $P(W_1, W_2, \ldots, W_N)$, where $W_i$ is a scalar random variable corresponding to the word $w_i$.", "This distribution is exactly what we need because the associations between $W_i$ captured by $P$ will become a proxy for semantic similarity.", "Often we are only interested in pairwise similarities between two given words $w_i$ and $w_j$; thus the main object of our study is the bivariate marginal $P(W_i, W_j)$.", "To lighten up the notation slightly, we denote the two words as $w_x$ and $w_y$, and the corresponding random variables as $X$ and $Y$.", "We also refer to $P(X, Y)$ as the joint and $P(X)$, $P(Y)$ as the marginals.", "In practice, of course, the actual $P(X, Y)$ is unknown but we can make inferences about it based on our sample $(\mathbf{x}, \mathbf{y}) = \{(x_1, y_1), (x_2, y_2), \ldots, (x_D, y_D)\}$.",
) g .", "First, we might want to study the degree of linear association between X and Y , so we compute the sample Pearson correlation coefficient ^ r = Di =1 ( x i (cid:0) (cid:22) x )( y i (cid:0) (cid:22) y ) Di =1 ( x i (cid:0) (cid:22) x ) 2 Di =1 ( y i (cid:0) (cid:22) y ) 2 ; (1) where (cid:22) x and (cid:22) y are the sample means (cid:22) x = D i =1 x i ; (cid:22) y = D i =1 y i : (2) Let's view x and y as word embeddings momentarily and compute cosine similarity between them cos ( x ; y ) = Di =1 x i y i Di =1 x 2 i Di =1 y 2 i : (3) We see now that Equation (1) and Equation (3) look very similar; when the sample means (cid:22) x , (cid:22) y are zero, cosine similarity and Pearson's ^ r are equal.", "The real question here is whether or not they coincide in practice.", "Putting it differently, if we take any single word vector w and compute the mean (across the D dimensions), is this mean close to zero?", "It turns out that it is, and we can show this by plotting the distribution of the means across the whole vocabulary for various popular word embeddings (see Figure 1).", "We find that the means are indeed highly concentrated around zero; quantitatively, only 0.03% of them are above 0.05 in magnitude.", "It follows that in practice when we compute cosine similarity between word vectors, we are actually computing Pearson correlation between them.", "However, is this always the right thing to do?", "When the joint P ( X; Y ) is bivariate normal, Pearson correlation indeed provides a complete summary of association between X and Y , simply because the covariance is given by cov ( X; Y ) = r XY (cid:27) X (cid:27) Y .", "However, Pearson correlation is extremely sensitive to even the slightest departures from normality a single outlier can easily conceal the underlying association (Pernet et al., 2013).", "When the normality of P ( X; Y ) is in doubt, it is preferable to use robust correlation coefficients such as Spearman's ^ (cid:26) or Kendall's ^ (cid:28) .", "where r [ x i ] denotes the integer rank of x i in a vector x (similarly r [ y i ] ), while r [ x ] and r [ y ] denote the means of the ranks.", "Kendall's ^ (cid:28) is given by ^ (cid:28) = 2 D ( D (cid:0)", "and can be interpreted as a normalised difference between the number of concordant pairs and the number of discordant pairs.", "These rank correlation coefficients are more robust to outliers than Pearson's ^ r because they limit the effect of outliers to their ranks: no matter how far the outlier is, its rank cannot exceed D or fall below 1 in our case.", "There are also straightforward extensions to account for the ties in the ranks.", "The main point here is the following.", "It is tempting to chose cosine similarity as the default and apply it everywhere regardless of the embedding type.", "Sometimes, however, we should resist using what is convenient and instead use what is appropriate.", "For example, if the samples corresponding to the marginals P ( X ) and P ( Y ) already look non-normal, then we conclude the joint P ( X; Y ) cannot be a bivariate normal and the appropriateness of cosine similarity should be seriously questioned.", "In some of these cases, using a rank-based coefficient as a similarity measure be-955 Figure 3: Q-Q plots comparing the theoretical quantiles of a standard normal distribution (horizontal axis) against the sample quantiles of standardised (Mean 0, SD", "tween word embeddings would be a much better alternative.", "It will capture the association better, which could in turn lead 
to large improvements in performance on the downstream tasks.", "In general, of course, even normal marginals do not imply a normal joint and care should be exercised either way; however, we found the normality of marginals to be a good indication for cosine similarity within the scope of the present work.", "In the next section we illustrate how the ideas discussed here can be applied in practice.", "No matter how mysterious word vectors appear to be, just like any samples, they are subject to the full power of traditional statistical analysis.", "As a concrete example, let's say we decided to use GloVe vectors (Pennington et al., 2014).", "We treat each vector $\mathbf{w}_i$ as if it were a sample of 300 observations from some scalar random variable $W_i$.", "We take a few hundred of these vectors, run a normality test such as Shapiro-Wilk (Shapiro and Wilk, 1965) and find that the majority of them look non-normal ($p < 0.05$).", "As there is considerable evidence against normality, we flag these vectors as 'suspicious' and look at them closer.", "We pick a few vectors and examine their histograms and Q-Q plots, seen in Figure 2 and Figure 3 respectively; the latter in particular is a statistical tool used to compare empirical and theoretical data distributions, and is explained further in the caption of Figure 3.", "In both cases we observe that while the bulk of the distribution looks bell-shaped, we always get a couple of very prominent outliers.", "Next, we can also visualise our word vectors in a way more directly relevant to the task at hand.", "We take some pairs of words that are very similar (e.g. 'vanish' and 'disappear'), moderately similar ('hard' and 'dense'), and completely dissimilar ('mouse' and 'management') and make the scatter plots for the corresponding pairs of word vectors.", "These are also presented in Figure 2.", "We see that for similar pairs the relationship is almost linear; it becomes less linear as the similarity decreases, until we see a spherical blob (no relationship) for the most dissimilar pair.", "However, we again face the presence of bivariate outliers that are too far away from the main bulk of points.", "Given this evidence, which course of action shall we take?", "Based on the presence of heavy outliers, we reject the normality of GloVe vectors and rule out the use of Pearson's $r$ and cosine similarity.", "Instead we can use rank correlation coefficients, such as Spearman's $\rho$ or Kendall's $\tau$, as they offer more robustness to outliers.", "Note that in this particular case, it may also be acceptable to winsorize (clip) the vectors and only then proceed with the standard Pearson's $r$.", "We evaluate the proposed solution on word-level similarity tasks and observe good improvement in performance over cosine similarity, as seen in Table 1.", "Of course this exploration is in no way specific to GloVe vectors.", "Note that from Figure 2 and Figure 3, we also see that word2vec vectors in particular tend to be much more normally distributed, meaning that we don't find strong evidence against using Pearson correlation; this is again backed up by Table 1.", "This example helps illustrate that proper statistical analysis applied to existing textual embeddings is extremely powerful and comparatively less time-consuming than inventing new approaches.", "Of course, this analysis can be made as fine-grained as desired.", "Quite coarsely, we could have rejected the use of cosine similarity right after the Shapiro-Wilk test; on the other hand, we could have used
even more different tests and visualisations.", "The decision here rests with the practitioner and depends on the task and the domain.", "To empirically validate the utility of the statistical framework presented in Section 3, we run a set of evaluations on word- and sentence-level STS tasks.", "In all experiments we rely on the following publicly available word embeddings: GloVe (Pennington et al., 2014) trained on Common Crawl (840B tokens), fastText (Bojanowski et al., 2017) trained on Common Crawl (600B tokens), and word2vec (Mikolov et al., 2013b,c) trained on Google News.", "All the source code for our experiments is available on GitHub (https://github.com/Babylonpartners/corrsim); in the case of the sentence-level tasks we rely also on the SentEval toolkit (Conneau and Kiela, 2018).", "First we consider a group of word-level similarity datasets that are commonly used as benchmarks in previous research: WS-353-SIM (Finkelstein et al., 2001), YP-130 (Yang and Powers, 2005), SIMLEX-999 (Hill et al., 2015), SimVerb-3500 (Gerz et al., 2016), RW-STANFORD (Luong et al., 2013), Verb-143 (Baker et al., 2014), MTurk-287 (Radinsky et al., 2011), MC-30 (Miller and Charles, 1991).", "[Table 1: Spearman's $\rho$ on word similarity tasks for combinations of word vectors (GloVe, fastText, word2vec) and the following similarity metrics: cosine similarity (COS), Pearson's r (PRS), Spearman's $\rho$ (SPR), and Kendall's $\tau$ (KEN).]", "These datasets contain pairs of words and a human-annotated similarity score for each pair.", "The success metric for the experiments is the Spearman correlation between the human-annotated similarity scores and the scores generated by the algorithm.", "To avoid any confusion whatsoever, note that here Spearman correlation serves as an evaluation criterion; this is completely unrelated to using Spearman correlation as a similarity measure between word embeddings as proposed in Section 3.", "Bias-corrected and accelerated bootstrap (Efron, 1987) 95% confidence intervals were used to determine statistical significance.", "We report the results for different combinations of word vectors and similarity measures in Table 1.", "The main takeaways from these experiments are the following: There is no significant difference between the results obtained with cosine similarity and Pearson correlation.", "This is because empirically, the means across dimensions of these word vectors are approximately zero, in which case cosine similarity and Pearson correlation are
approximately the same.", "Rank correlation coefficients tend to perform on par or better than cosine and Pearson on tasks and word vectors where there is a high proportion of non-normally distributed word vectors (over 90%).", "This makes sense because it is precisely in the non-normal cases where Pearson correlation fails.", "When word vectors seem mostly normal, our analysis does not tell us definitively whether cosine similarity or rank correlation should perform better, and indeed we see that cosine and Pearson perform on par or better than Spearman and Kendall.", "In the second set of experiments, we use the datasets from the sentence-level Semantic Textual Similarity shared task series 2012-2016 ( Agirre et al., 2012, 2013, 2014, 2015, 2016; Cer et al., 2017).", "The success metric for these experiments is the Pearson correlation between the human-annotated sentence similarity scores and the scores generated by the algorithm.", "Again, this use of Pearson correlation as an evaluation criterion is completely unrelated to its use as a similarity measure between sentence embeddings.", "Note that the dataset for the STS13 SMT subtask is no longer publicly available, so the mean Pearson correlations reported in our experiments involving this task have been re-calculated accordingly.", "For these experiments we use averaged word vectors as a sentence representation for various task N COS PRS SPR KEN APSG l o V e STS12 .01 52.1 52.0 53.4 52.6 53.8 STS13 .00 49.6 49.6 56.2 56.7 55.9 STS14 .00 54.6 54.5 63.2 63.0 63.0 STS15 .00 56.1 56.0 64.5 65.3 64.2 STS16 .00 51.4 51.4 62.1 63.7 60.8 f a s t T e x t STS12 .01 58.3 58.3 60.2 59.0 58.4 STS13 .01 57.9 58.0 65.1 65.3 61.8 STS14 .00 64.9 65.0 70.1 69.6 68.5 STS15 .00 67.6 67.6 74.4 74.6 72.7 STS16 .00 64.3 64.3 73.0 73.5 70.7 w o r d 2v ec STS12 .95 51.6 51.6 51.7 53.1 45.3 STS13 .94 58.2 58.3 57.9 58.2 57.2 STS14 .96 65.6 65.6 65.5 65.6 64.1 STS15 .96 67.5 67.5 67.3 68.3 66.5 STS16 .96 64.7 64.7 64.6 65.6 63.9 Table 2: Mean Pearson correlation on STS tasks for methods using combinations of word vectors and similarity metrics.", "We report these results in Table 2, and the full sig-nificance analysis for each subtask in Table 4.", "We also compare the top performing combination of averaged word vectors and correlation coefficient against several popular approaches from the literature that use cosine similarity: BoW with ELMo embeddings ( Peters et al., 2018), Skip-Thought (Kiros et al., 2015), InferSent (Conneau et al., 2017), Universal Sentence Encoder with DAN and Transformer (Cer et al., 2018), and STN multitask embeddings (Subramanian et al., 2018).", "These results are presented in Table 3.", "Our observations for the sentence-level experiments are as follows: The conclusions from the word-level tasks continue to hold and are even more pronounced: in particular, cosine and Pearson are essentially equivalent, and the increase in performance of rank-based correlation coefficients over cosine similarity on non-normal sentence vectors is quite dramatic.", "Finally, the fraction of non-normal word vectors used in sentence-level tasks is consistent with the results reported for the word-level tasks in Table 1.", "However, we observe the following curious phenomenon for fastText.", "While there is no evidence against normality for the majority of fastText vectors, perhaps surprisingly, when we average them to represent sentences, such sentence embeddings are almost entirely non-normal (Table 2).", "Empirically we observe that many high-frequency 
words or stopwords have prominently non-normal fastText vectors.", "Although stopwords constitute only a small fraction of the entire vocabulary, they are very likely to occur in any given sentence, thus rendering most sentence embeddings non-normal as well.", "While it's tempting to invoke the Central Limit Theorem (at least for longer sentences), under our formalism, averaging word vectors corresponds to averaging scalar random variables used to represent words, which are neither independent nor identically distributed.", "In other words, there are no easy guarantees of normality for such sentence vectors.", "In this work, we investigate statistical correlation coefficients as measures for semantic textual similarity", "and make the following contributions: We show that in practice, for commonly used word vectors, cosine similarity is equivalent to the Pearson correlation coefficient, motivating an alternative statistical view of word vectors as opposed to the geometric view, which is more prevalent in the literature.", "We illustrate via a concrete example the power and benefits of using elementary statistics to analyse word vectors.", "We characterise when Pearson correlation is applied inappropriately and show that these conditions hold for some word vectors but not others, providing a basis for deciding whether or not cosine similarity is a reasonable choice for measuring semantic similarity.", "We demonstrate that when Pearson correlation is not appropriate, non-parametric rank correlation coefficients, which are known to be more robust to various departures from normality, can be used as similarity measures to significantly improve performance on word- and sentence-level STS tasks.", "Finally, we show in particular that sentence representations consisting of averaged word vectors, when compared by rank correlation, can easily rival much more complicated representations compared by cosine similarity.", "We hope that these contributions will inspire others to carefully investigate and understand alternative measures of similarity.", "This is particularly important in the realm of sentence representations, where there are many more complex ways of constructing sentence representations from word embeddings besides the simple averaging procedure tested here.", "It is worth exploring whether a more subtle application of rank correlation could help push these more complex sentence representations to even better performance on STS tasks.", "A final and fascinating direction of future work is to explain the non-normality of certain types of word vectors (and in particular the presence of outliers) by analysing their training procedures.", "Preliminary investigations suggest that unsupervised objectives based on the distributional hypothesis are probably not to blame, as word vectors trained without relying on the distributional hypothesis, such as those of Wieting et al. (2015), still exhibit non-normality to some degree.", "[Table 4: Pearson correlations between human sentence similarity score and a generated score, comparing SPR against COS for GloVe, fastText, and word2vec on the STS12-16 subtasks, with 95% BCa confidence intervals for the differences.]", "The actual causes remain to be determined.", "We believe that understanding the reasons for these empirically-observed characteristics of textual embeddings would be a significant step forwards in our overall understanding of these crucial building blocks for data-driven natural language processing.", "We would like to thank Dan Busbridge for his useful comments and suggestions." ]
[ "abstain", "abstain", "method", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "abstain", "result", "abstain", "result", "objective", "abstain", "result", "result", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "objective", "other", "abstain", "abstain", "other", "method", "other", "method", "abstain", "method", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "method", "abstain", "other", "other", "other", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "method", "method", "other", "objective", "other", "abstain", "other", "other", "method", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "result", "other", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "other" ]
[ "Writing is, by nature, a strategic, adaptive, and more importantly, an iterative process.", "A crucial part of writing is editing and revising the text.", "Previous works on text revision have focused on defining edit intention taxonomies within a single domain or developing computational models with a single level of edit granularity, such as sentence-level edits, which differ from human's revision cycles.", "This work describes ITERATER: the first large-scale, multi-domain, edit-intention annotated corpus of iteratively revised text.", "In particular, ITERATER is collected based on a new framework to comprehensively model the iterative text revisions that generalize to various domains of formal writing, edit intentions, revision depths, and granularities.", "When we incorporate our annotated edit intentions, both generative and edit-based text revision models significantly improve automatic evaluations.", "1 Through our work, we better understand the text revision process, making vital connections between edit intentions and writing quality, enabling the creation of diverse corpora to support computational modeling of iterative text revisions.", "Writing is a complex and effortful cognitive task, where writers balance and orchestrate three distinct cognitive processes: planning, translation, and revising (Flower and Hayes, 1980).", "These processes can be hierarchical and recursive and can occur at any moment during writing.", "This work focuses on text revision as an essential part of writing (Scar-damalia, 1986).", "Revising text is a strategic, and adaptive process.", "It enables writers to deliberate over and organize their thoughts, find a better line of argument, learn afresh, and discover what was This research was performed when Wanyu Du was interning at Grammarly.", "Each comment was annotated by three different annotators, which achieved high inter-annotator agreement.", "The proposed annotation {process approach} CLARITY is also language and domain independent{, nevertheless, it was currently applied for Brazilian Portuguese} MEANING-CHANGED .", "Each comment was annotated by three different annotators, {which and} COHERENCE achieved high inter-annotator agreement.", "The {new} MEANING-CHANGED proposed annotation approach is also language and {domain independent, nevertheless, it was currentlydomain-independent (although it has been} CLARITY applied for Brazilian Por-tuguese{)} FLUENCY .", "Each comment was annotated by three different annotators {,} FLUENCY and achieved high inter-annotator agreement.", "The {new} COHERENCE proposed annotation approach is also language and domain-independent {(although it has been applied nevertheless it is currently customized} COHERENCE for Brazilian Portuguese {)} FLUENCY .", "not known before (Sommers, 1980).", "Specifically, text revision involves identifying discrepancies between intended and instantiated text, deciding what edits to make, and how to make those desired edits (Faigley and Witte, 1981; Fitzgerald, 1987; Bridwell, 1980).", "Text revision is an iterative process.", "Human writers are unable to simultaneously comprehend multiple demands and constraints of the task when producing well-written texts (Flower, 1980; Collins and Gentner, 1980; Vaughan and McDonald, 1986) for instance, expressing ideas, covering the content, following linguistic norms and discourse conventions of written prose, etc.", "Thus, they turn towards making successive iterations of revisions to reduce the number of considerations at each time.", 
"Previous works on iterative text revision have three major limitations: (1) simplifying the task to an noniterative \"original-to-final\" text paraphras-3573 ing; (2) focusing largely on sentence-level editing (Faruqui et al., 2018; Botha et al., 2018; Ito et al., 2019; Faltings et al., 2021); (3) developing editing taxonomies within individual domains (e.g. Wikipedia articles, academic writings) (Yang et al., 2017; Zhang et al., 2017; Anthonio et al., 2020).", "These limitations make their proposed text editing taxonomies, datasets, and models lose their generalizability and practicality.", "We present ITERATE R an annotated dataset for ITERA tive TE xt Revision that consists of 31,631 iterative document revisions with sentence-level and paragraph-level edits across multiple domains of formally human-written text, including Wikipedia 2 , ArXiv 3 and Wikinews.", "4 Table 1 shows a sample ArXiv document in ITERATER, that underwent iterative revisions.", "Our dataset includes 4K manually annotated and 196K automatically annotated edit intentions based on a sound taxonomy we developed, and is generally applicable across multiple domains and granularities (See Table 2).", "Note that ITERATER is currently only intended to support formal writing revisions, as iterative revisions are more prevalent in formal rather than informal writings (e.g. tweets, chit-chats) 5 .", "Our contributions are as follows: formulate the iterative text revision task in a more comprehensive way, capturing greater real-world challenges such as successive revisions, multi-granularity edits, and domain shifts.", "collect and release a large, multi-domain Iterative Text Revision dataset: ITERATER, which contains 31K document revisions from Wikipedia, ArXiv and Wikinews, and 4K edit actions with high-quality edit intention annotations.", "analyze how text quality evolves across iterations and how it is affected by different kinds of edits.", "show that incorporating the annotated edit-intentions is advantageous for text revision systems to generate better-revised documents.", "Edit Intention Identification.", "Identification of edit intentions is an integral part of the iterative text revision task.", "Prior works have studied the categorization of different types of edit actions to help understand why editors do what they do 2 https://www.wikipedia.org/ 3 https://arxiv.org/ 4 https://www.wikinews.org/ 5 Further extension to less formal writings (e.g. 
blogs, emails) will be discussed in the future.]", "and how effective their actions are (Yang et al., 2017; Zhang et al., 2017; Ito et al., 2019).", "However, these works do not further explore how to leverage edit intentions to generate better-revised documents.", "Moreover, some of their proposed edit intention taxonomies are constructed with a focus on specific domains of writing, such as Wikipedia articles (Anthonio et al., 2020; Bhat et al., 2020; Faltings et al., 2021) or academic essays (Zhang et al., 2017).", "As a result, their ability to generalize to other domains remains an open question.", "Non-iterative Text Revision Models.", "Some prior works (Faruqui et al., 2018; Botha et al., 2018; Ito et al., 2019; Faltings et al., 2021) simplify the text revision task to a single-pass \"original-to-final\" sentence-to-sentence generation task.", "However, it is very challenging to conduct multiple perfect edits at once.", "For example, adding transition words or reordering sentences may be required to further improve the document quality.", "Therefore, single-pass sentence-to-sentence text revision models are not sufficient to deal with real-world challenges of text revision tasks.", "In this work, we explore the performance of text revision models in multiple iterations and multiple granularities.", "Iterative Text Revision Datasets.", "While some prior works have constructed iterative text revision datasets, they are limited to singular writing domains, such as Wikipedia-style articles (Anthonio et al., 2020), academic essays (Zhang et al., 2017) or news articles (Spangher and May, 2021).", "In this work, we develop a unified taxonomy to analyze the characteristics of iterative text revision behaviors across different domains and collect large scale text revisions of human writings from multiple domains.", "The differences between ITERATER and the prior datasets are summarized in Table 2.",
"[Table 3: Statistics of the ITERATER dataset by revision depth (1-5) and domain, where #D indicates the number of document revisions ($R_t$) and #E the number of annotated edit actions; in total, ITERATER-FULL contains 11,443 (ArXiv), 10,674 (Wikipedia), and 9,514 (Wikinews) document revisions, and ITERATER-HUMAN contains 178, 179, and 202, respectively.]", "Edit Action.", "An edit action $a_k$ is a local change applied to a certain text object, where $k$ is the index of the current edit action.", "The local changes include: insert, delete and modify.", "The text objects include: token, phrase, sentence, and paragraph.", "This work defines local changes applied to tokens or phrases as sentence-level edits, local changes applied to sentences as paragraph-level edits, and local changes applied to paragraphs as document-level edits.", "Edit Intention.", "An edit intention $e_k$ reflects the revising goal of the editor when making a certain edit action.", "In this work, we assume each edit action $a_k$ will only be labeled with one edit intention $e_k$.", "We further describe our edit intention taxonomy in Table 4 and §4.2.1.", "Document Revision.", "A document revision is created when an editor saves changes for the current document (Yang et al., 2016, 2017).", "One revision $R_t$ is aligned with a pair of documents $(D_{t-1}, D_t)$ and contains $K_t$ edit actions, where $t$ indicates the version of the document and $K_t \geq 1$.", "A revision with $K_t$ edit actions will correspondingly have $K_t$ edit intentions: $(D_{t-1}, D_t) \Rightarrow R_t = \{(a_k^t, e_k^t)\}_{k=1}^{K_t}$ (1). We define $t$ as the revision depth.", "Iterative Text Revision.", "Given a source text $D_{t-1}$, iterative text revision is the task of generating revisions $D_t = g(D_{t-1})$ at depth $t$ until the quality evaluator or the stop criteria end the process (2), [Footnote 6: In this work, we define phrase as text pieces which contain more than one token and only appear within a sentence.]", "where $g(D)$ is a text revision system and $f(D)$ is a quality evaluator of the revised text.", "The quality evaluator $f(D)$ can be automatic systems or manual judgements which measure the quality of the revised text.", "The stop criteria $\{s_i\}$ are a set of conditions that determine whether to continue revising or not.", "In this work, we simply set them as revision depth equal to 10, and edit distance between $D_{t-1}$ and $D_t$ equal to 0 (§6.2).", "We will include other criteria which measure the overall quality, content preservation, fluency, coherence and readability of the revised text in future works.", "Domains.", "We select three domains (Wikipedia articles, academic papers, and news articles) to cover different human writing goals, formats, revision patterns, and quality standards.", "The three domains consist of formally written texts, typically edited by multiple authors.", "We describe why and how we collect text revision from each domain below: Scientific Papers.", "Scientific articles are written in a rigorous, logical manner.", "Authors generally highlight and revise their hypotheses, experimental results, and research insights in this domain.", "We collect paper abstracts submitted at different timestamps (i.e., version labels) from ArXiv.", "Wikipedia Articles.", "Encyclopedic articles are written in a formal, coherent manner, where editors typically focus on improving the clarity and
structure of articles to make people easily understand all kinds of factual and abstract encyclopedic information.", "[Table 4: Edit-intention taxonomy with Description, Example, and Counts (Ratio) for each intention; e.g., FLUENCY: fix grammatical errors in the text.]", "We collect revision histories of the main contents of Wikipedia articles.", "News Articles.", "News articles are generally written in a precise and condensed way.", "News editors emphasize improving the clarity and readability of news articles to keep people updated on rapidly changing news events.", "We collect revision histories of news content from Wikinews.", "Raw Data Processing.", "We first collect all raw documents, then sort each document version according to its timestamp in ascending order.", "For each document $D$, we pair two consecutive versions as one revision $(D_{t-1}, D_t) \Rightarrow R_t$, where $t$ is the revision depth.", "For each sampled document-revision $R_t$, we extract its full edit actions using latexdiff.", "We provide both the paragraph-level and sentence-level revisions, where the latter is constructed by applying a sentence segmentation tool and aligning each sentence to each revision.", "For each revision pair, we have: the revision type, the document id, the revision depth, an original phrase and a revised phrase, respectively.", "The detailed processing of raw text is described in Appendix A. In summary, we collect 31,631 document revisions with 196,987 edit actions, and maintain a relatively balanced distribution across three domains, as shown in Table 3.", "We call this large-scale dataset ITERATER-FULL-RAW.", "To better understand the human revision process, we sample 559 document revisions from ITERATER-FULL-RAW, consisting of 4,018 human edit actions.", "We refer to this small-scale unannotated dataset as ITERATER-HUMAN-RAW.", "In §4.2.2, we then use Amazon Mechanical Turk (AMT) to crowdsource edit intention annotations for each edit action according to our proposed edit-intention taxonomy (§4.2.1).", "We refer to this small-scale annotated dataset as ITERATER-HUMAN.", "We then scale these manual annotations to ITERATER-FULL-RAW by training edit intention prediction models on ITERATER-HUMAN, and automatically label ITERATER-FULL-RAW to construct ITERATER-FULL (§4.2.3).", "For manual annotations, we propose a new edit intention taxonomy in ITERATER (Table 4), in order to comprehensively model the iterative text revision process.", "Our taxonomy builds on prior literature (Rathjens, 1985; Harris, 2017).", "At the highest level, we categorize the edit intentions into ones that change the meaning or the information contained in the text (MEANING-CHANGED), and ones that preserve these characteristics (NON-MEANING-CHANGED).", "Since our goal is to understand edit intentions to improve the quality of writing, we focus on categorizing edits in the latter category further into four sub-categories: FLUENCY, CLARITY, COHERENCE and STYLE.", "Our proposed taxonomy of edit intentions is generally applicable to multiple domains, edit-action granularities (sentence-level and paragraph-level), and revision depths. [Footnote 10: We provide our annotation instructions in Appendix C.]",
"[Table 5: Edit intention classifier performance on the test split of ITERATER-HUMAN: CLARITY precision 0.75, recall 0.63, F1 0.69; FLUENCY 0.74, 0.86, 0.80; COHERENCE 0.29, 0.36, 0.32; STYLE 1.00, 0.07, 0.13; MEANING-CHANGED 0.44, 0.69, 0.53.]", "We also propose the OTHER category for edits that cannot be labeled using the above taxonomy.", "Since edit intention annotation is a challenging task, we design strict qualification tests to select 11 qualified AMT annotators (details in Appendix B).", "To further improve the annotation quality, we ask another group of expert linguists (English L1, bachelor's or higher degree in Linguistics) to reannotate the edits which do not have a majority vote among the AMT workers.", "Finally, we take the majority vote among 3 human annotations (either from AMT workers or from expert linguists) as the final edit intention labels.", "This represents the ITERATER-HUMAN dataset.", "We release both the final majority vote and the three raw human annotations per edit action as part of the dataset.", "To scale up the annotation, we train an edit-intention classifier to annotate ITERATER-FULL-RAW and construct the ITERATER-FULL dataset.", "We split the ITERATER-HUMAN dataset into 3,254/400/364 training, validation and test pairs.", "The edit intention classifier is a RoBERTa-based (Liu et al., 2020) multi-class classifier that predicts an intent given the original and the revised text for each edit action.", "Table 5 shows its performance on the test set.", "The Fluency and Clarity edit intentions are easy to predict with F1 scores of 0.8 and 0.69, respectively, while Style and Coherence edit intentions are harder to predict with F1 scores of 0.13 and 0.32, respectively, largely due to the limited occurrence of Style and Coherence intents in the training data (Table 4).", "Edit Intention Distributions.", "The iterative edit intention distributions in three domains are demonstrated in Figure 1.", "Across all three domains, authors tend to make the majority of edits at revision depth 1.", "However, the number of edits rapidly decreases at revision depth 2, and few edits are made at revision depths 3 and 4.", "We find that CLARITY is one of the most frequent edit intentions across all domains, indicating that authors focus on improving readability across all domains.", "For ArXiv, MEANING-CHANGED edits are also among the most frequent edits, which indicates that authors also focus on updating the contents of their abstracts to share new research insights or update existing ones.", "Meanwhile, ArXiv also covers many FLUENCY and COHERENCE edits; collecting edits from scientific papers and suggesting meaningful revisions would be an important future application of our dataset.", "For Wikipedia, we find that FLUENCY, COHERENCE, and MEANING-CHANGED edits roughly share a similar frequency, which indicates Wikipedia articles have more complex revision patterns than ArXiv and news articles.", "For Wikinews, FLUENCY edits are equally emphasized, indicating that improving the grammatical correctness of the news articles is just as important.", "Inter-Annotator Agreement.", "We measure inter-annotator agreement (IAA) using Fleiss' $\kappa$ (Fleiss, 1971).", "Table 6 shows the IAA across three domains.", "After the second round of reannotation by proficient linguists, Fleiss' $\kappa$ increases to 0.5014, which indicates moderate agreement among annotators.", "We further look at the raw
annotations where at least 1 out of 3 annotators assigns a different edit intention label.", "We find that the COHERENCE intention is the one that is the most likely to have a disagreement: 312 out of 393 COHERENCE annotations do not have consensus.", "[Footnote 11: Please refer to Appendix D for more training details.]", "Within those disagreements of the COHERENCE intention, 68.77% are considered to be CLARITY, and 11.96% are considered to be the FLUENCY intention.", "Annotators also often disagree on the CLARITY intention, where 1023 out of 1601 CLARITY intentions do not have a consensus.", "Among those disagreements of the CLARITY intention, 30.33% are considered to be COHERENCE, and 30.23% are considered to be STYLE.", "The above findings explain why the inter-annotator agreement scores are lower in Wikipedia and ArXiv.", "As shown in Figure 1, Wikipedia has many COHERENCE edits while ArXiv has many CLARITY edits.", "This explains the difficulty of the edit intention annotation task: it not only asks annotators to infer the edit intention from the full document context, but also requires annotators to have a wide range of domain-specific knowledge in scientific writings.", "To better understand how text revisions affect the overall quality of documents, we conduct both manual and automatic evaluations on a sampled set of document revisions.", "Evaluation Data.", "We sample two sets of text revisions for different evaluation purposes.", "The first set contains 21 iterative document revisions, consisting of 7 unique documents, each document having 3 document revisions from revision depth 1 to 3.", "The second set contains 120 text pairs, each associated with exactly one edit intention of FLUENCY, COHERENCE, CLARITY or STYLE.", "We validate the following research questions: RQ1 How do human revisions affect the text quality across revision depths?", "RQ2 How does text quality vary across edit intentions?", "Manual Evaluation Configuration.", "We hire a group of proficient linguists to evaluate the overall quality of the documents/sentences, where each revision is annotated by 3 linguists.", "For each revision, we randomly shuffle the original and revised texts, and ask the evaluators to select which one has better overall quality.", "They can choose one of the two texts, or neither.", "Then, we calculate the score for the overall quality of the human revisions as follows: -1 means the revised text has worse overall quality than the original text; 0 means the revised text does not show better overall quality than the original text, or the annotators cannot reach agreement; 1 means the revised text has better overall quality than the original text.", "Automatic Evaluation Configuration.", "We select four automatic metrics to measure the document quality on four different aspects: Syntactic Log-Odds Ratio (SLOR) (Kann et al., 2018) for text fluency evaluation, Entity Grid (EG) score (Lapata and Barzilay, 2005) for text coherence evaluation, Flesch-Kincaid Grade Level (FKGL) (Kincaid et al., 1975) for text readability evaluation and BLEURT score (Sellam et al., 2020) for content preservation evaluation.", "We describe the detailed justification of our metric selection in Appendix E. However, in our following experiments, we find that these existing automatic metrics correlate poorly with manual evaluations.", "RQ1: Iterative Revisions vs.
Quality.", "Table 7 shows the document quality changes at different revision depths.", "Generally, human revisions improve the overall quality of original documents, as indicated by the overall score at each revision depth.", "12 However, the overall quality keeps decreasing as the revision depth increases from 1 to 3, likely because it is more difficult for evaluators to grasp the 12 We further validate this observation in another set of 50 single document-revisions in Appendix F. 3578 t Overall BLEURT SLOR EG FKGL 1 0.4285 0.1982 -0.0985 -0.0132 -1.0718 2 0.4285 0.1368 -0.1025 -0.0295 -2.4973 3 0.1428 -0.0224 -0.0792 0.0278 1.8131 Table 7: Evaluation results for 21 iterative document revisions, where t indicates the revision depth.", "overall quality in the deeper revision depths in the pair-wise comparisons between the original and revised documents, because less NON-MEANINGCHANGED edits have been conducted in deeper revision depths.", "For automatic metrics, we find SLOR and EG are not well-aligned with human overall score, we further examine whether human revisions makes original documents less fluent and less coherent in the analysis of RQ2.", "RQ2: Edit Intentions vs. Quality.", "Table 8 shows how text quality varies across edit intentions.", "We find that FLUENCY and COHERENCE edits indeed improve the overall quality of original sentences according to human judgments.", "This finding suggests that SLOR and EG are not well-aligned with human judgements, and calls for the need to explore other effective automatic metrics to evaluate the fluency and coherence of revised texts.", "Besides, we observe that STYLE edits degrade the overall quality of original sentences.", "This observation also makes sense since STYLE edits reflect the writer's personal writing preferences (according to our edit intention taxonomy in Table 4), which not necessarily improve the readability, fluency or coherence of the text.", "To better understand the challenges of modeling the task of iterative text revisions, we train different types of text revision models using ITERATER.", "training the text revision models, we experiment with both edit-based and generative models.", "For the edit-based model, Model Dataset SARI BLEU R-L Avg.", "we use FELIX (Mallinson et al., 2020), and for the generative models, we use BART (Lewis et al., 2020) and PEGASUS (Zhang et al., 2020a).", "FELIX decomposes text revision into two sub-tasks: Tagging, which uses a pointer mechanism to select the subset of input tokens and their order; and Insertion, which uses a masked language model to fill in missing tokens in the output not present in the input.", "BART and PEGASUS are Transformer-based encoder-decoder models which are used in a wide range of downstream tasks such as natural language inference, question answering, and summarization.", "Training.", "We use four training configurations to evaluate whether edit intention information can help better model text revisions.", "The first configuration uses the pure revision pairs without edit intention annotations (ITERATER-HUMAN-RAW dataset).", "In the second configuration, we include the manually annotated edit intentions to the source text (ITERATER-HUMAN dataset).", "Similarly, for the third and fourth training configurations, we use ITERATER-FULL-RAW dataset (no edit intention information) and ITERATER-FULL dataset (auto-matically annotated labels, as described in 4.2.3, simply appended to the input text).", "We use these four configurations for all model architectures.", 
"Automatic Evaluation.", "Table 9 shows the results of the three models for our different training configurations.", "Following prior works (Malmi et al., 2019; Dong et al., 2019; Mallinson et al., 2020), we report SARI, BLEU, and ROUGE-L 3579 Human Revision Tie Model Revision Overall 83.33% 10.00% 6.67% Content 13.33% 70.00% 16.67% Fluency 50.00% 50.00% 0.00% Coherence 40.00% 56.67% 3.33% Readability 86.67% 10.00% 3.33% Table 10: Manual pair-wise comparison for 30 single document revisions without Meaning-changed edits.", "metrics, and include detailed breakdown of scores in Appendix H. It is noteworthy that the SARI score on the no-edit baseline is the lowest, which indicates the positive impact of revisions on document quality, as also corroborated by the human evaluations in 5.", "For both ITERATER-HUMAN and ITERATER-FULL datasets, we see that edit intention annotations help to improve the performance of both FELIX and PEGASUS .", "Also, both models perform better on the larger ITERATER-FULL dataset compared to the ITERATER-HUMAN dataset, showing that the additional data (and automatically-annotated annotations) are helpful.", "Manual Evaluation.", "Table 10 shows how the model revision affects the quality of the original document.", "We choose PEGASUS trained on ITERATER-FULL to generate revisions and compare with human revisions, as the model produces the best overall results 13 .", "There exists a big gap between the best-performing model revisions and human revisions, indicating the challenging nature of the modeling problem.", "Thus, while model revisions can achieve comparable performance with human revisions on fluency, coherence and meaning preservation, human revisions still outperform in terms of readability and overall quality.", "Table 11 demonstrates how model-generated text quality varies across revision depths.", "In the first two depths, human revisions win over model revisions with a ratio of 57.14%.", "However, in the last depth, model revisions stay similar with human revisions in a ratio of 57.15%.", "Upon review-13 We provide detailed manual evaluation configuration in Appendix G. 
Figure 2: Number of iterations made by humans and different text revision models.", "Upon reviewing the revisions at the last depth, we find a lot of MEANING-CHANGED edits in the human revisions.", "At the same time, the model revisions only make a few FLUENCY or CLARITY edits, which the human evaluators tend to judge as a tie.", "Iterativeness.", "We also compare the iterative ability of the two kinds of text revision models (the best-performing versions of FELIX and PEGASUS, both trained on ITERATER-FULL) against humans' iterative revisions.", "Figure 2 shows that while PEGASUS is able to finish iterating after 2.57 revisions on average, FELIX continues to make iterations until the maximum cutoff of 10 that we set for the experiment.", "In contrast, humans on average make 1.61 iterations per document.", "While FELIX is able to make meaningful revisions (as evidenced by the improvements in the SARI metric in Table 14), it lacks the ability to effectively evaluate the text quality of a given revision and decide whether or not to make further changes.", "PEGASUS, on the other hand, is able to pick up on these nuances of iterative revision, and learns to stop revising after a certain level of quality has been reached.", "Our work is a step toward understanding the complex process of iterative text revision from human-written texts.", "We collect, annotate, and release ITERATER: a novel, large-scale, domain-diverse, annotated dataset of human edit actions.", "Our research shows that different domains of text have different distributions of edit intentions, and that the general quality of the text improves over successive revisions.", "Computationally modeling the human revision process is still under-explored, yet our results indicate some interesting findings and potential directions.", "Despite the deliberate design of our dataset collection, ITERATER only includes formally written texts.", "We plan to extend it to diverse sets of revision texts, such as informally written blogs and less formal but communicative texts like emails, as well as to increase the size of the current dataset.", "For future research, we believe ITERATER can serve as a basis for corpus development and for computationally modeling iterative text revision.", "We collect all data from publicly available sources and respect the copyrights of the original document authors.", "During the data annotation process, all human annotators are anonymized to respect their privacy rights.", "We provide fair compensation to all human annotators: each annotator is paid more than the minimum wage, based on the number of annotations they conducted.", "Our work poses no harms that would fall disproportionately on marginalized or vulnerable populations.", "Our dataset does not contain any identity characteristics (e.g., gender, race, ethnicity), and does not carry the ethical implications of categorizing people.", "We thank all the linguistic expert annotators at Grammarly for annotating, evaluating, and providing feedback during our data annotation and evaluation process.", "We thank Courtney Napoles and Knar Hovakimyan at Grammarly for helping to coordinate the annotation resources.", "We also thank Yangfeng Ji at the University of Virginia and the anonymous reviewers for their helpful comments." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "objective", "abstain", "objective", "other", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "objective", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "result", "other", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "abstain", "objective", "method", "method", "abstain", "method", "abstain", "abstain", "other", "other", "other" ]
[ "Transformers (Vaswani et al., 2017) have gradually become a key component for many state-of-the-art natural language representation models.", "A recent Transformer based modelBERT (Devlin et al., 2018) achieved state-of-the-art results on various natural language processing tasks, including GLUE, SQuAD v1.1, and SQuAD v2.0.", "This model however is computationally prohibitive and has a huge number of parameters.", "In this work we revisit the architecture choices of BERT in efforts to obtain a lighter model.", "We focus on reducing the number of parameters yet our methods can be applied towards other objectives such FLOPs or latency.", "We show that much efficient light BERT models can be obtained by reducing algorithmically chosen correct architecture design dimensions rather than reducing the number of Transformer encoder layers.", "In particular, our schuBERT gives 6 .", "6% higher average accuracy on GLUE and SQuAD datasets as compared to BERT with three encoder layers while having the same number of parameters.", "Transformer (Vaswani et al., 2017) based models have achieved state-of-the-art performance for many natural language processing tasks (Dai and Le, 2015; Peters et al., 2018; Radford et al., 2018; Howard and Ruder, 2018).", "These include machine translation (Vaswani et al., 2017; Ott et al., 2018), question-answering tasks (Devlin et al., 2018), natural language inference (Bowman et al., 2015; Williams et al., 2017) and semantic role labeling (Strubell et al., 2018).", "A recent Transformer based model BERT (De-vlin et al., 2018) achieved state-of-the-art results on various natural language processing tasks including GLUE, SQuAD v1.1 and SQuAD v2.0.", "BERT's model architecture is a multi-layer bidirectional Transformer encoder based on the original implementation described in Vaswani et al. (2017).", "Following the seminal results obtained by the BERT model, several follow up studies explored methods for improving them further.", "XLNet (Yang et al., 2019) adds autoregressive capabilities to BERT, improving its quality, though at the cost of additional compute requirements.", "RoBERTa (Liu et al., 2019) modifies the training procedure of BERT and provides pre-training methods that significantly improve its performance.", "Two notable papers exploring the architecture design of the BERT are following.", "Michel et al. (2019) examines the importance of attention heads in BERT architecture, highlighting scenarios where attention heads may be pruned.", "The main objective of the paper is to provide techniques for pruning attention head, and as such the amount of experiments performed on BERT is limited to a single task (MNLI).", "ALBERT (Lan et al., 2019) proposes two methods for reducing the number of parameters in BERT.", "The first is via parameter sharing across layers, and the second is by factorizing the embedding layers.", "We note (this was mentioned in the conclusion section of the paper) that while these methods are efficient in reducing the number of parameters used by the model, they do not help in reducing its latency.", "These studies provide some advancement towards a more efficient architecture design for BERT but leave much to be explored.", "In this paper we take a broader approach examining multiple design choices.", "We parameterize each layer of BERT by five different dimensions, as opposed to Devlin et al. 
(2018), who parameterize a layer with two dimensions and suggest fixed values for the remaining three.", "We then (pre-)train multiple variants of BERT with different values chosen for these dimensions by applying a pruning-based architecture search technique that jointly optimizes the architecture of the model with the objective of minimizing both the pre-training loss and the number of model parameters.", "Our experiments result in the following findings: The ratio of the architecture design dimensions within a BERT encoder layer can be modified to obtain a layer with better performance.", "The Transformer design dimensions suggested in Vaswani et al. (2017) are suboptimal.", "When we aim to obtain a computationally lighter model, using a 'tall and narrow' architecture provides better performance than a 'wide and shallow' architecture.", "The fully-connected component applied to each token separately plays a much more significant role in the top layers as compared to the bottom layers.", "Following BERT's notation, we use ℓ to denote the number of encoder layers (i.e., Transformer blocks), h to denote the hidden size, and a to denote the number of self-attention heads.", "The BERT paper (Devlin et al., 2018) primarily reports results on two models: BERT-base (ℓ = 12, h = 768, a = 12) and BERT-large (ℓ = 24, h = 1024, a = 16).", "BERT-base has 108M parameters and BERT-large has 340M parameters.", "Though BERT-large achieves higher accuracy than BERT-base, due to its prohibitively large size it finds limited use in practice.", "Since BERT-base achieves higher accuracy than the previous state-of-the-art models (pre-OpenAI SOTA, BiLSTM+ELMo+Attn, and OpenAI GPT) on most of the benchmark datasets, it is widely used in practice.", "BERT-base and OpenAI GPT have the same number of model parameters.", "Given its broad adoption for NLP tasks, an immediate question is: can we reduce the size of BERT-base without incurring any significant loss in accuracy?", "The BERT paper (Devlin et al., 2018) provides an ablation study (Table 1) over the number of model parameters, varying the number of layers ℓ, the hidden size h, and the number of attention heads a.", "It can be observed that the accuracy decreases drastically when the number of encoder layers ℓ is reduced, and also when the number of attention heads is reduced.", "We ask the following question: are there any other design dimensions that can be reduced without incurring a huge loss in accuracy?", "As noted above, the three primary design dimensions of the BERT architecture are the number of encoder layers ℓ, the hidden size h, and the number of attention heads a.", "BERT's Transformer encoder layers are based on the original Transformer implementation described in Vaswani et al. (2017).", "Vaswani et al. 
(2017) fixed the dimensions of the key, query, and value in multi-head attention, and the filter dimension in the feed-forward networks, as functions of the hidden size and the number of attention heads.", "However, these are variable design dimensions and can be optimized.", "Moreover, the BERT architecture uses the same number of attention heads for all the encoder layers, and hence all the layers are identical.", "In this work, we jointly optimize all these design dimensions of the BERT architecture while allowing each encoder layer to have different design dimensions.", "In order to explore the parameter space efficiently, we chose to optimize the design dimensions in a pruning framework rather than launching a pretraining job for each of these choices.", "This allows a speedup of several orders of magnitude, which is crucial for obtaining meaningful conclusions.", "We parameterize the different dimensions one can modify and jointly optimize them with a mixed target of both accuracy and parameter reduction.", "We look at how the accuracy of BERT evolves on various downstream datasets like GLUE, SQuAD v1.1, and SQuAD v2.0 when we reduce the model size via an optimization procedure.", "There is a vast literature on pruning trained neural networks.", "From the classical works of LeCun et al. (1990) and Hassibi and Stork (1993) in the early '90s to the recent work of Han et al. (2015), pruning deep neural networks has received a lot of attention.", "There have been two orthogonal approaches to pruning networks: structured pruning (Li et al., 2016; Molchanov et al., 2016) and unstructured pruning (Anwar et al., 2017).", "Structured pruning gives a smaller architecture, whereas unstructured pruning gives sparse model parameters.", "In natural language processing, Murray and Chiang (2015) explored structured pruning in feed-forward language models.", "See et al. 
(2016) and Kim and Rush (2016) provided pruning approaches for machine translation.", "A closely related line of work is Neural Architecture Search (NAS).", "It aims to efficiently search the space of architectures (Pham et al., 2018; Liu et al., 2018; Singh et al., 2019).", "Quantization is another technique to reduce the model size.", "This is done by quantizing the model parameters to binary (Rastegari et al., 2016; Hubara et al., 2017), ternary (Zhu et al., 2016), or 4 or 8 bits per parameter (Han et al., 2015).", "The recently published DistilBERT (Sanh et al., 2019) shows that a BERT model with fewer layers can be efficiently pre-trained using knowledge distillation to give much higher accuracy than the same model pre-trained in the regular way.", "We note that the distillation technique is complementary to our work, and our schuBERTs can be pre-trained using distillation to boost their accuracy.", "The ablation study in Table 1, BERT (Devlin et al., 2018), and the works explained above (Michel et al., 2019; Lan et al., 2019) look at the problem of reducing the BERT model size by reducing one or another design dimension (number of encoder layers, hidden size, number of attention heads, or embedding size) in isolation and in a sub-optimal way.", "In this work, we address this problem comprehensively.", "In this section, we present the detailed architecture of the original BERT model and explain which of its design dimensions can be optimized.", "Figure 1 shows the BERT pre-training architecture.", "First, the tokenized inputs are embedded into a vector of dimension h through an embedding layer E.", "The embedded inputs pass through a sequence of encoder layers 1 to ℓ.", "Each encoder layer is identical in its architecture.", "The output of the last encoder layer is decoded using the same embedding layer E, and a softmax cross-entropy loss is computed on the masked tokens.", "A special token CLS from the last encoder layer is used to compute the next-sentence-prediction (NSP) loss.", "Figure 1: BERT pre-training. Table 2: Elements of BERT (values for BERT-base): number of encoder layers ℓ = 12; hidden size h = 768; number of self-attention heads a = 12; feed-forward dimension f = 4h; key-query dimension for attention k = h/a; value dimension for attention v = h/a. For further details of the loss corresponding to the masked tokens and the NSP loss, we refer the readers to the BERT paper (Devlin et al., 2018).", "We follow BERT's notation conventions and denote the number of encoder layers as ℓ, the hidden size as h, and the number of attention heads as a.", "Following the original Transformer implementation described in Vaswani et al. 
(2017), BERT sets the key-query dimension k for multi-head attention to h/a.", "Following the same Transformer implementation, it sets the value dimension v for multi-head attention equal to k, and the feed-forward filter size f equal to 4h.", "In total, there are three design dimensions in BERT (ℓ, h, and a); they are listed in Table 2.", "For BERT-base, the number of encoder layers ℓ is set to 12, the hidden size h is set to 768, and the number of attention heads a is set to 12.", "The other three dimensions f, k, v are functions of h and a.", "Further, each encoder layer of BERT is identical and uses the same values of a, f, k, v.", "First of all, BERT has no architectural constraint that requires all the encoder layers to be identical.", "This aspect of the design can be optimized, and in full generality it might result in highly non-identical layers.", "This implies that a generalized BERT will have a_1, a_2, ..., a_ℓ numbers of heads, f_1, f_2, ..., f_ℓ filter sizes in the feed-forward networks, and k_1, k_2, ..., k_ℓ key sizes and v_1, v_2, ..., v_ℓ value sizes in the attention heads, in layers 1, 2, ..., ℓ respectively (Table 3: Elements of schuBERT: ℓ; h; a_1, a_2, ..., a_ℓ; f_1, f_2, ..., f_ℓ; k_1, k_2, ..., k_ℓ; v_1, v_2, ..., v_ℓ).", "Table 3 lists all the design dimensions of BERT that can be optimized without changing the architecture.", "Note that we abuse the term architecture to refer to the entire BERT network and the layer operations, except for the sizes of the parameter matrices.", "In this work, our goal is to optimize (by pruning) all these dimensions to maximize accuracy for a given size of the model.", "We refer to the BERT with optimized dimensions as schuBERT (Size Constricted Hidden Unit BERT).", "Now, we show which parameter matrices are tied to each of these design dimensions.", "Each design dimension is tied to more than one parameter matrix.", "We explain this by providing a detailed view of an encoder cell of BERT.", "Figure 2 shows the architecture of an encoder layer of BERT.", "The notations in the figure have subscript 1, representing the first encoder layer.", "The input to an encoder layer is the hidden representation of a token, which is of dimension h.", "The input first goes through a multi-head attention cell.", "Note that the multi-head attention cell processes the hidden representations of all the tokens in a combined way.", "For simplicity, in Figure 2 we show only one hidden representation.", "The multi-head attention cell consists of three parameter tensors, namely the key K_1, the query Q_1, and the value V_1.", "K_1 is of size k_1 × a_1 × h.", "The key vector for each head of the attention is of dimension k_1, and a_1 represents the number of heads.", "The hidden representation of dimension h is projected onto the key tensor K_1 to get a_1 key vectors, each of dimension k_1.", "Similarly, the query tensor Q_1 is used to get a_1 query vectors, each of dimension k_1, for the a_1 heads of the multi-head attention cell.", "The value tensor V_1 is of dimension v_1 × a_1 × h.", "The hidden representation is projected onto the value tensor V_1 to get a_1 value vectors, each of dimension v_1.", "Note that k_1 and v_1 can be different.", "The inner products of the key and query vectors, after passing through a softmax layer, give the weights for combining the value vectors.", "For details of the multi-head attention cell, we refer the readers to Vaswani et al. 
(2017).", "In nutshell, using three parameter tensorsK 1 , Q 1 , V 1 , a multi-head attention cell transforms hidden representation of size h to a vector of dimension ( v 1 a 1 ) .", "This vector is projected back to the same dimension h through a proj matrix P 1 .", "Which is then added element-wise to the hidden representation that was input to the encoder cell and layer norm is applied on the addition.", "The output is passed sequentially through two fully-connected layers namely D 1 and G 1 .", "D 1 consists of a parameter matrix of dimension f 1 h and G 1 consists of a parameter matrix of dimension h f 1 .", "The output of G 1 is added element-wise to the input of D 1 and layer norm is applied to it.", "This is the output of the encoder cell and is input to the next encoder cell.", "The color coding in Figure 2 shows which vectors need to be of the same dimension.", "The hidden representation size h needs to be same throughout all the encoder layers.", "In a multi-head attention cell, in each head key and query vectors must have the same dimension.", "Therefore, key and query tensors, K 1 , Q 1 must be of the same size k 1 a 1 h .", "The value vector can be of different dimension v 1 .", "Therefore the value tensor V 1 should be of dimension v 1 a 1 h .", "Further, the filter size f 1 in the two fully-connected layers D 1 , G 1 is a variable and can take any integral value.", "subsequent improvements such as XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019), we set the WordPiece embedding size e equal to the hidden layer size h , i.e. e h .", "However, factorization of the embedding matrix can be incorporated as demonstrated in ALBERT (Lan et al., 2019).", "We optimize BERT design dimensions listed in Table 3 by pruning the original BERT base architecture.", "All the design dimensions are upper bounded by their original value in the BERT base as given in the Table 2.", "Since we keep the architecture same, that is we do not remove any layer, the design dimensions are lower bounded by one.", "For each design dimension that we seek to optimize, we introduce a prune-parameter vector of size equal to the original dimension.", "We take pre-trained original BERT base network, and multiply all the parameter tensors/matrices that are associated with the particular design dimension with the corresponding prune-parameter vector.", "For example, filter size of the feed-forward layer in the first encoder layer is f 1 = 3072 .", "To optimize f 1 , we introduce a prune-parameter vector f 1 R 3072 and initialize it with all ones.", "In the original BERT base, the two parameter matrices D 1 and G 1 are associated with the design dimension f 1 .", "We replace D 1 by diag( f 1 ) D 1 and G 1 by G 1 diag( f 1 ) in the BERT pre-trained model.", "Table 4 lists all the prune parameters.", "Table 5 lists all the parameter tensors/matrices for which design dimensions are optimized by multiplying prunable parameters on all the sides.", "key and query tensors K i , Q i for i { 1 , 2 , , (cid:96) } are multiplied on all the three sides with prunable parameters corresponding to key-vector, number of attention heads, and hidden size.", "Similarly multiplications are performed on value tensor V i with a different value-vector prunable parameter.", "proj tensor has same multiplication as value tensor.", "The two feed-forward matrices D i , G i have same multiplications.", "We denote the so obtained prunable tensors with tilde on their top.", "Note that we do not have prune parameters for pruning encoder 
layers.", "We find the optimal number of encoder layers (cid:96) by running experiments for different values of (cid:96) .", "Our approach is to optimally find which individual elements of prunable parameters { h , { a i , v i , k i , f i } i [ (cid:96) ] } can be set to zero while incurring minimal increase in the preh R h { f i R f } i =1 , 2 , ,(cid:96) { a i R a } i =1 , 2 , ,(cid:96) { k i R k } i =1 , 2 , ,(cid:96) { v i R v } i =1 , 2 , ,(cid:96) Table 4: Prunable parameters.", "K i K i [diag( k i )diag( a i )diag( h )] (cid:102) K i Q i Q i [diag( k i )diag( a i )diag( h )] (cid:102) Q i V i V i [diag( v i )diag( a i )diag( h )] (cid:101) V i P i P i [diag( h )diag( v i )diag( a i )] (cid:101) P i D i D i [diag( f i )diag( h )] (cid:102) D i G i G i [diag( h )diag( f i )] (cid:102) G i Table 5: Prunable BERT parameter matrices/tensors.", "training loss.", "After we have sparse prunable parameter vectors, we remove the corresponding rows/columns from the BERT parameter matrices { K i , Q i , V i , P i , D i , G i } i [ (cid:96) ] , and get a small-er/faster BERT model.", "Below we explain the algorithm to find the sparse prunable parameters.", "We start with the pre-trained BERT base trained on BooksCorpus ( 800 M words) and English Wikipedia ( 2500 M words) following the BERT pretraining procedure given in Devlin et al. (2018).", "Particularly, we minimize the loss given in Equation (1) to learn the optimal parameter tensors { K i , Q i , V i , P i , D i , G i } i [ (cid:96) ] and the embedding matrix E .", "Next, we introduce the prunable parameters given in Table 4 and initialize them with all ones.", "We create prunable BERT parameter matrices by multiplying the prunable parameters to the learned BERT parameter matrices, as given in Table 5.", "Then, we optimize the prunable parameters 's while fixing the learned parameters matrices as given in Equation 2.", "In addition to the MLM and NSP loss, we add sparsity inducing loss on the prunable parameters with a regularization coefficient .", "It is well known that (cid:96) 1 penalty induces sparsity in the parameters.", "Further, since our goal is to minimize the number of parameters, to account for the fact that each element of prune parameters when set to zero reduces different number of BERT parameters, we multiply the (cid:96) 1 loss terms with the cost terms 's.", "For example, a i is proportional to the number of model parameters that will be removed when an element of the prune parameter a i is set to zero.", "It is critical to incorporate 's.", "Their values are significantly different from each other.", "The values are 1 .", "0 , 0 .", "73 , 0 .", "093 , 0 .", "093 , 0 .", "0078 for a, h, k, v and f respectively.", "After training the prunable BERT model for a fixed number of steps, we truncate the smallest prune parameters to zero, and remove the corresponding rows/columns from the BERT parameter matrices { K i , Q i , V i , P i , D i , G i } i [ (cid:96) ] .", "Then we fine-tune the so obtained smaller schuBERT model.", "Algorithm 1 summarizes our approach.", "If we want to reduce the number of parameters by a fraction , we do so in T steps.", "In each step, we prune /T fraction of parameters, and at the end of the step we fine-tune the network and repeat these steps T times.", "Though we have explained the algorithm in terms of (cid:96) 1 penalty on the prunable parameters, in our experiments we tried alternative sparsity inducing penalties as well(cid:96) 0 regularization, and proximal gradient descent on prunable 
parameters.", "arg min { E, { K i ,Q i ,V i ,P i ,D i ,G i } i [ (cid:96) ] } L MLM+NSP ( E, { K i , Q i , V i , P i , D i , G i } i [ (cid:96) ] ) .", "(1) arg min { h , { ai , vi , ki , fi } i [ (cid:96) ] } L MLM+NSP ( E, { (cid:102) K i , (cid:102) Q i , (cid:101) V i , (cid:101) P i , (cid:102) D i , (cid:102) G i } i [ (cid:96) ] ) + { h (cid:107) h (cid:107)} + (cid:96) (cid:88) i =1 { a i (cid:107) a i (cid:107) + v i (cid:107) v i (cid:107) + k i (cid:107) k i (cid:107) + f i (cid:107) f i (cid:107)} .", "(2) 6 Experimental Results In this section, we present our experimental results.", "We apply Algorithm 1 on BERT base.", "For pre-training BERT base we use MXNET based gluon-nlp repository that uses the hyper-parameters suggested in the original BERT paper.", "Besides pretraining, our algorithm has three hyper-parameters: regularization coefficient , learning rate for prunable parameters, and the number of steps for regularizing prune parametersEquation (2).", "We run hyper-parameter optimization on these parameters to get the best results.", "For regularization loss (2), we use the same training data that we use for pre-training, BooksCorpus (800M words) and English Wikipedia (2,500M words).", "However, we run the regularization step for 1 / 1000 th steps as used for pre-training.", "We finet-une the pruned BERT Algorithm 1 Pruning Transformers Input: A Transformer model, minimization objective (FLOPs/Params/Latency), target fraction , number of iterations T .", "pretraining.", "We provide accuracy results for schuBERT on the following downstream tasksquestion answering datasetsSQuAD v1.1, SQuAD v2.0; and GLUE datasets MNLI, MRPC, SST-2 and RTE.", "For these downstream tasks, we use the fine-tuning hyper-parameters as suggested in the BERT paper.", "We create six schuBERTs by pruning one or all of the design dimensions.", "Accuracy of the downstream tasks on these schuBERTs are given in Tables 6-13.", "The BERT base has 108 million parameters.", "The schuBERT sizes 88 , 66 , 43 million are chosen to match the number of parameters in BERT with (cid:96) { 9 , 6 , 3 } layers.", "We use schuBERTx notation for x { h, f, a } to denote a schuBERT obtained by only pruning h -hidden size, f -filter size of feed-forward, a number of attention heads respectively.", "We use schuBERT-all to denote the case when all the design dimensionsh, f, a, k, v , except (cid:96) are pruned.", "We compare our results with original BERT base, and by varying its number of encoder layers (cid:96) { 12 , 9 , 6 , 3 } .", "We denote these results by BERT(cid:96) .", "Since ALBERT reduces parameters by factorizing the embedding matrix, we denote its results by model SQuAD v1.1 SQuAD v2.0 MNLI MRPC SST-2 RTE Avg BERT-base ( 108 M) 90 .", "ALBERTe .", "ALBERT provided results only for 88 million parameter model, not for any smaller models.", "Further, we also compare with the baseline case when all the design dimensions are pruned uniformly.", "We denote these results by BERT-all uniform.", "For 99 M model, Table 6, schuBERT-all beats the baseline BERT-all uniform by 0 .", "4% higher average accuracy and performs better than schuBERT-f/h/a .", "Moreover, the loss in performance in comparison to BERT base with 108 million parameters is only 0 .", "2% .", "Table 7 gives exact design dimensions for schuBERT-all with 99 million parameters.", "We see that number of hidden units remain same as in BERT base, h = 768 .", "Parameter reduction primarily comes from feed-forward layers.", "Moreover, filter size of 
feed-forward layer f has a clear increasing pattern across the layers.", "For the 88M model (Table 8), schuBERT-all again beats all the other models.", "It gives 1.1% higher average accuracy than BERT-ℓ with 9 layers.", "ALBERT-e performs better on the SQuAD datasets, but performs significantly worse on the MNLI and SST-2 datasets.", "Note that ALBERT's approach is complementary to ours, and it can be incorporated into our schuBERTs.", "schuBERT-a performs significantly worse than schuBERT-all, which implies that pruning only the number of attention heads, as recently done in Michel et al. (2019), is highly sub-optimal.", "Table 9 provides the exact design dimensions for schuBERT-all with 88 million parameters.", "As with the 99M model, the filter size of the feed-forward layer f has a clear increasing pattern across the layers.", "For the heavily pruned models (77M, 66M, 55M, and 43M), accuracy results are shown in Tables 10, 11, 12, and 13, respectively.", "For all these model sizes, schuBERT-h beats all the other models.", "For the 66M model, schuBERT-h gives 1.9% higher average accuracy than BERT-ℓ with 6 layers.", "For the 43M model, schuBERT-h gives 6.6% higher average accuracy than BERT-ℓ with 3 layers.", "That is, reducing the number of hidden units is far better than reducing the number of layers when creating a light BERT model.", "Ideally, we would expect schuBERT-all to perform better than schuBERT-h; the marginally worse performance of schuBERT-all can be attributed to the high complexity of pruning all the design dimensions together.", "Table 14 provides the best schuBERT architectures when the number of model parameters is restricted to different values.", "For smaller models, schuBERT-h outperforms all the other schuBERTs, including schuBERT-all.", "Note that our schuBERT architectures are smaller in size and also yield lower latency.", "Based on the experimental results described above, we provide the following insights on the design dimensions of the schuBERT architecture.", "The fully-connected component applied to each token separately plays a much more significant role in the top layers as compared to the bottom layers.", "Figure 3 shows the pattern of the feed-forward filter size across the encoder cells for various schuBERT-all models.", "In each of them, the filter size follows an increasing pattern, with a min-max ratio ranging from 1.5 to 4, as opposed to taking the same value across all the layers.", "Tall and Narrow BERT.", "When we aim to obtain a computationally lighter model, using a 'tall and narrow' architecture provides better performance than a 'wide and shallow' architecture.", "Our results in Tables 8, 11, and 13 demonstrate that schuBERT with ℓ = 12 encoder layers significantly outperforms BERT with ℓ ∈ {9, 6, 3} layers for the same number of parameters.", "Expansive Multi-head Attention.", "The ratio of the design dimensions within a BERT encoder layer can be modified to obtain a better-performing layer architecture.", "The Transformer design dimensions suggested in Vaswani et al. (2017) are sub-optimal.", "Following the original Transformer architecture described in Vaswani et al. (2017), BERT and other Transformer-based models set the key-query dimension k and the value dimension v for multi-head attention to k = v = h/a, where h is the size of the hidden representation and a is the number of attention heads.", "Also, following the same architecture (Vaswani et al., 2017), BERT sets the feed-forward filter size to f = 4h.", "Although there is no restriction on using different output dimensions 
k, v and a different filter size f without changing the behaviour of the attention mechanism, we are not aware of any study questioning this 'default value' of k = v = h/a and f = 4h.", "Our schuBERT architectures for various model sizes, given in Table 14, show that for smaller models k, v should be much larger than h/a.", "For the 43M schuBERT model, h/a = 25.3, whereas k = v = 64.", "Also, f should be much larger than 4h.", "For the same 43M schuBERT model, 4h = 936 whereas f = 3072.", "Table 13 shows that the 43M schuBERT (ℓ = 12, h = 304, a = 12, k = v = 64, f = 3072) significantly outperforms BERT-ℓ (ℓ = 3, h = 768, a = 12, k = v = h/a, f = 4h)." ]
[ "abstain", "abstain", "abstain", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "objective", "result", "objective", "objective", "objective", "method", "other", "other", "other", "other", "other", "abstain", "other", "other", "method", "other", "other", "other", "other", "other", "method", "objective", "other", "method", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "other", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "other", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Abstract Contextual word representations derived from large-scale neural language models are successful across a diverse set of NLP tasks, suggesting that they encode useful and transferable features of language.", "To shed light on the linguistic knowledge they capture, we study the representations produced by several recent pretrained contextualizers (variants of ELMo, the OpenAI transformer language model, and BERT) with a suite of sixteen diverse probing tasks.", "We find that linear models trained on top of frozen contextual representations are competitive with state-of-the-art task-specific models in many cases, but fail on tasks requiring fine-grained linguistic knowledge (e.g., conjunct identification).", "To investigate the transferability of contextual word representations, we quantify differences in the transferability of individual layers within contextualizers, especially between recurrent neural networks (RNNs) and transformers.", "For instance, higher layers of RNNs are more task-specific, while transformer layers do not exhibit the same monotonic trend.", "In addition, to better understand what makes contextual word representations transferable, we compare language model pretraining with eleven supervised pretraining tasks.", "For any given task, pretraining on a closely related task yields better performance than language model pretraining (which is better on average) when the pretraining dataset is fixed.", "However, language model pretraining on more data gives the best results.", "Pretrained word representations (Mikolov et al., 2013; Pennington et al., 2014) are a key component of state-of-the-art neural NLP models.", "Traditionally, these word vectors are statica single * Work done while at the Allen Institute for Artificial Intelligence.", "vector is assigned to each word.", "Recent work has explored contextual word representations (hence-forth: CWR s), which assign each word a vector that is a function of the entire input sequence; this enables them to model the use of words in context.", "CWR s are typically the outputs of a neural network (which we call a contextualizer ) trained on tasks with large datasets, such as machine translation (McCann et al., 2017) and language modeling (Peters et al., 2018a).", "CWR s are extraordinarily effectiveusing them in place of traditional static word vectors within the latest models leads to large gains across a variety of NLP tasks.", "The broad success of CWR s indicates that they encode useful, transferable features of language.", "However, their linguistic knowledge and transferability are not yet well understood.", "Recent work has explored the linguistic knowledge captured by language models and neural machine translation systems, but these studies often focus on a single phenomenon, e.g., knowledge of hierarchical syntax (Blevins et al., 2018) or morphology (Belinkov et al., 2017a).", "We extend prior work by studying CWR s with a diverse set of sixteen probing tasks designed to assess a wide array of phenomena, such as coreference, knowledge of semantic relations, and entity information, among others.", "The result is a broader view of the linguistic knowledge encoded within CWR s.", "With respect to transferability, pretraining contextualizers on the language modeling task has had the most empirical success, but we can also consider pretraining contextualizers with other supervised objectives and probing their linguistic knowledge.", "We examine how the pretraining task affects the linguistic knowledge learned, 
considering twelve pretraining tasks and assessing transferability to nine target tasks.", "Better understanding the linguistic knowledge and transferability of CWRs is necessary for their principled enhancement through new encoder architectures and pretraining tasks that build upon their strengths or alleviate their weaknesses (Linzen, 2018).", "This paper asks and answers: 1. What features of language do these vectors capture, and what do they miss?", "(Section 4) 2. How and why does transferability vary across representation layers in contextualizers?", "(Section 5) 3. How does the choice of pretraining task affect the vectors' learned linguistic knowledge and transferability?", "(Section 6)", "We use probing models (Shi et al., 2016b; Adi et al., 2017; Hupkes et al., 2018; Belinkov and Glass, 2019) to analyze the linguistic information within CWRs.", "Concretely, we generate features for words from pretrained contextualizers and train a model to make predictions from those features alone (Figure 1).", "If a simple model can be trained to predict linguistic information about a word (e.g., its part-of-speech tag) or a pair of words (e.g., their semantic relation) from the CWR(s) alone, we can reasonably conclude that the CWR(s) encode this information.", "Our analysis reveals interesting insights such as: 1. Linear models trained on top of frozen CWRs are competitive with state-of-the-art task-specific models in many cases, but fail on tasks requiring fine-grained linguistic knowledge.", "In these cases, we show that task-trained contextual features greatly help with encoding the requisite knowledge.", "2. The first-layer output of long short-term memory (LSTM) recurrent neural networks is consistently the most transferable, whereas it is the middle layers for transformers.", "3. Higher layers in LSTMs are more task-specific (and thus less general), while the transformer layers do not exhibit this same monotonic increase in task-specificity.", "4. 
Language model pretraining yields representations that are more transferable in general than eleven other candidate pretraining tasks, though pretraining on related tasks yields the strongest results for individual end tasks.", "We construct a suite of sixteen diverse English probing tasks and use it to better understand the linguistic knowledge contained within CWRs.", "In contrast to previous studies that analyze the properties and task performance of sentence embeddings (Adi et al., 2017; Conneau et al., 2018, inter alia), we specifically focus on understanding the CWRs of individual words or pairs of words.", "We release this analysis toolkit to support future work in probing the contents of representations.", "See Appendix A for details about the task setup.", "The majority of past work in probing the internal representations of neural models has examined various token labeling tasks, where a decision is made independently for each token (Belinkov et al., 2017a,b; Blevins et al., 2018, inter alia).", "We synthesize these disparate studies and build upon them by proposing additional probing tasks.", "The part-of-speech tagging (POS) task assesses whether CWRs capture basic syntax.", "We experiment with two standard datasets: the Penn Treebank (PTB; Marcus et al., 1993) and the Universal Dependencies English Web Treebank (UD-EWT; Silveira et al., 2014).", "The CCG supertagging (CCG) task assesses the vectors' fine-grained information about the syntactic roles of words in context.", "It is considered 'almost parsing' (Bangalore and Joshi, 1999), since a sequence of supertags maps a sentence to a small set of possible parses.", "We use CCGbank (Hockenmaier and Steedman, 2007), a conversion of the PTB into CCG derivations.", "The syntactic constituency ancestor tagging tasks are designed to probe the vectors' knowledge of hierarchical syntax.", "For a given word, the probing model is trained to predict the constituent label of one of its ancestors (parent, grandparent, or great-grandparent) in the phrase-structure tree. (Toolkit: http://nelsonliu.me/papers/contextual-repr-analysis. Figure 2: Annotated sentences from the STREUSLE 4.0 corpus, used in the preposition supersense disambiguation task.)", "In the semantic tagging task (ST), tokens are assigned labels that reflect their semantic role in context.", "These semantic tags assess lexical semantics, and they abstract over redundant POS distinctions and disambiguate useful cases within POS tags.", "We use the dataset of Bjerva et al. (2016); the tagset has since been developed as part of the Parallel Meaning Bank (Abzianidze et al., 2017).", "Preposition supersense disambiguation is the task of classifying a preposition's lexical semantic contribution (the function; PS-fxn) and the semantic role or relation it mediates (the role; PS-role).", "This task is a specialized kind of word sense disambiguation, and examines one facet of lexical semantic knowledge.", "In contrast to the tagging tasks above, the model is trained and evaluated on single-token prepositions (rather than making a decision for every token in a sequence).", "We use the STREUSLE 4.0 corpus (Schneider et al., 2018); example sentences appear in Figure 2. The event factuality (EF) task involves labeling phrases with the factuality of the events they describe (Saurí and Pustejovsky, 2009, 2012; de Marneffe et al., 2012).", "For instance, in the following example reproduced from Rudinger et al. 
(2018), (1a) conveys that the leaving didn't happen, while the superficially similar (1b) does not.", "We use the Universal Decompositional Semantics It Happened v2 dataset (Rudinger et al., 2018), and the model is trained to predict a (non)factuality value in the range [-3, 3].", "Unlike the tagging tasks above, this task is treated as a regression problem, where a prediction is made only for tokens corresponding to events (rather than for every token in a sequence).", "Performance is measured using Pearson correlation (r); we report (r × 100) so that the metrics for all tasks fall between 0 and 100.", "Several of our probing tasks involve segmentation using BIO or IO tags.", "Here the model is trained to predict labels from only a single word's CWR.", "Syntactic chunking (Chunk) tests whether CWRs contain notions of spans and boundaries; the task is to segment text into shallow constituent chunks.", "We use the CoNLL 2000 shared task dataset (Tjong Kim Sang and Buchholz, 2000).", "Named entity recognition (NER) examines whether CWRs encode information about entity types.", "We use the CoNLL 2003 shared task dataset (Tjong Kim Sang and De Meulder, 2003).", "Grammatical error detection (GED) is the task of identifying tokens which need to be edited in order to produce a grammatically correct sentence.", "Given that CWRs are extracted from models trained on large amounts of grammatical text, this task assesses whether embeddings encode features that indicate anomalies in their input (in this case, ungrammaticality).", "We use the First Certificate in English dataset (Yannakoudakis et al., 2011), converted into a sequence-labeling format by Rei and Yannakoudakis (2016).", "The conjunct identification (Conj) task challenges the model to identify the tokens that comprise the conjuncts in a coordination construction.", "Doing so requires highly specific syntactic knowledge.", "The data comes from the coordination-annotated PTB of Ficler and Goldberg (2016).", "We also design probing tasks that examine whether relationships between words are encoded in CWRs.", "In these tasks, given a word pair (w1, w2), we input [w1, w2, w1 ⊙ w2] into the probing model; it is trained to predict information about the relation between the tokens (Belinkov, 2018).", "We distinguish between arc prediction and arc classification tasks.", "Arc prediction is a binary classification task, where the model is trained to identify whether a relation exists between two tokens.", "Arc classification is a multiclass classification task, where the model is provided with two tokens that are linked via some relationship and trained to identify how they are related.", "For example, in the syntactic dependency arc prediction task, the model is given the representations of two tokens (w_a, w_b) and trained to predict whether the sentence's syntactic dependency parse contains a dependency arc with w_a as the head and w_b as the modifier.", "The syntactic dependency arc classification task presents the model with the representations of two tokens (w_head, w_mod), where w_mod is the modifier of w_head, and the model is trained to predict the type of syntactic relation that links them (the label on that dependency arc).", "We use the PTB (converted to UD) and the UD-EWT.", "Similarly, semantic dependency arc prediction trains the model to predict whether two tokens are connected by a semantic dependency arc, while the semantic dependency arc classification task trains models to classify the semantic relations between 
tokens.", "We use the dataset from the SemEval 2015 shared task (Oepen et al., 2015) with the DELPH-IN MRS-Derived Semantic Dependencies (DM) target representation.", "The syntactic and semantic dependency arc prediction and classification tasks are closely related to state-of-the-art models for semantic and syntactic dependency parsing, which score pairs of CWR s to make head attachment and arc labeling decisions (Dozat and Manning, 2016, 2018).", "To generate negative examples for the dependency arc prediction tasks, we take each positive example ( w head , w mod ) and generate a new negative example ( w rand , w mod ) .", "w rand is a random token in the sentence that is not the head of w mod .", "Thus, the datasets used in these tasks are balanced.", "We also consider a coreference arc prediction task, where the model is trained to predict whether two entities corefer from their CWR s.", "We use the dataset from the CoNLL 2012 shared task (Prad-han et al., 2012).", "To generate negative examples, we follow a similar procedure as the dependency arc prediction tasks: given a positive example ( w a , w b ) , where w b occurs after w a and the two tokens share a coreference cluster, we create a negative example ( w random entity , w b ) , where w random entity is a token that occurs before w b and belongs to a different coreference cluster.", "Probing Model We use a linear model as our probing model; limiting its capacity enables us to focus on what information can be easily extracted from CWR s.", "See Appendix B for probing model training hyperparameters and other details.", "Contextualizers We study six publicly-available models for contextualized word representation in English.", "ELMo (Peters et al., 2018a) concatenates the output of two contextualizers independently trained on the bidirectional language modeling (biLM) task.", "ELMo (original) uses a 2-layer LSTM for contextualization.", "We also study two variations from Peters et al. 
(2018b): ELMo (4-layer) uses a 4-layer LSTM, and ELMo (transformer) uses a 6-layer transformer (Vaswani et al., 2017).", "Each of these models is trained on 800M tokens of sentence-shuffled newswire text (the 1 Billion Word Benchmark; Chelba et al., 2014).", "The OpenAI transformer (Radford et al., 2018) is a left-to-right 12-layer transformer language model trained on 800M tokens of contiguous text from over 7,000 unique unpublished books (BookCorpus; Zhu et al., 2015).", "BERT (Devlin et al., 2018) uses a bidirectional transformer jointly trained on a masked language modeling task and a next sentence prediction task.", "The model is trained on BookCorpus and the English Wikipedia, a total of approximately 3300M tokens.", "We study BERT (base, cased), which uses a 12-layer transformer, and BERT (large, cased), which uses a 24-layer transformer.", "To better understand the linguistic knowledge captured by pretrained contextualizers, we analyze each of their layers with our set of probing tasks.", "These contextualizers differ in many respects, and it is outside the scope of this work to control for all differences between them.", "We focus on probing the models that are available to us, leaving a more systematic comparison of training regimes and model architectures to future work.", "Our probing models are trained on the representations produced by the individual layers of each contextualizer.", "We also compare to a linear probing model trained on noncontextual vectors (300-dimensional GloVe trained on the cased Common Crawl; Pennington et al., 2014) to assess the gains from contextualization.", "With just a linear model, we can readily extract much of the information needed for high performance on various NLP tasks.", "In all cases, CWRs perform significantly better than the noncontextual baseline.", "Indeed, we often see probing models rivaling or exceeding the performance of (often carefully tuned and task-specific) state-of-the-art models.", "In particular, the linear probing model surpasses the published state of the art for grammatical error detection and preposition supersense identification (both role and function).", "Comparing the ELMo-based contextualizers, we see that ELMo (4-layer) and ELMo (original) are essentially even, though both recurrent models outperform ELMo (transformer).", "We also see that the OpenAI transformer significantly underperforms the ELMo models and BERT.", "Given that it is also the only model trained in a unidirectional (left-to-right) fashion, this reaffirms that bidirectionality is a crucial component of the highest-quality contextualizers (Devlin et al., 2018).", "In addition, the OpenAI transformer is the only model trained on lowercased text, which hinders its performance on tasks like NER.", "BERT significantly improves over the ELMo and OpenAI models.", "See Appendix C for references to the previous state of the art (without pretraining).", "For brevity, in this section we omit probing tasks that cannot be compared to prior work.", "See Appendix D for pretrained contextualizer performance for all layers and all tasks.", "Our probing task results indicate that current methods for CWR do not capture much transferable information about entities and coreference phenomena in their input (e.g., the NER results in Table 1 and the coreference arc prediction results in Appendix D).", "To alleviate this weakness, future work could augment pretrained contextualizers with explicit entity representations (Ji et al., 2017; Yang et al., 2017; 
Bosselut et al., 2017).", "Probing Failures While probing models are at or near state-of-the-art performance across a number of tasks, they also do not perform as well on several others, including NER, grammatical error detection, and conjunct identification.", "This may occur because (1) the CWR simply does not encode the pertinent information or any predictive correlates, or (2) the probing model does not have the capacity necessary to extract the information or predictive correlates from the vector.", "In the former case, learning task-specific contextual features might be necessary for encoding the requisite task-specific information into the CWR s.", "Learning task-specific contextual features with a contextual probing model also helps with (2), but we would expect the results to be comparable to increasing the probing model's capacity.", "To better understand the failures of our probing model, we experiment with (1) a contextual probing model that uses a task-trained LSTM (unidi-rectional, 200 hidden units) before the linear output layer (thus adding task-specific contextualization) or (2) replacing the linear probing model with a multilayer perceptron (MLP; adding more parameters to the probing model: a single 1024d hidden layer activated by ReLU).", "These alternate probing models have nearly the same number of parameters (LSTM + linear has slightly fewer).", "We also compare to a full-featured model to Probing Model NER GED Conj GGParent Linear 82.85 29.37 38.72 67.50 MLP (1024d) 87.19 47.45 55.09 78.80 LSTM (200d) + Linear 88.08 48.90 78.21 84.96 BiLSTM (512d) + MLP (1024d) 90.05 48.34 87.07 90.38 Table 2: Comparison of different probing models trained on ELMo (original); best-performing probing model is bolded.", "estimate an upper bound on performance for our probing setup.", "In this model, the CWR s are inputs to a 2-layer BiLSTM with 512 hidden units, and the output is fed into a MLP with a single 1024-dimensional hidden layer activated by a ReLU to predict a label.", "A similar model, augmented with a conditional random field (CRF; Lafferty et al., 2001), achieved state-of-the-art results on the CoNLL 2003 NER dataset (Peters et al., 2018a).", "We remove the CRF, since other probing models have no global context.", "For this experiment, we focus on the ELMo (original) pretrained contextualizer.", "Table 2 presents the performance of the best layer within each alternative probing model on the two tasks with the largest gap between the linear probing model and state-of-the-art methods: NER and grammatical error detection.", "We also include great-grandparent prediction and conjunct identification, two tasks that require highly specific syntactic knowledge.", "In all cases, we see that adding more parameters (either by replacing the linear model with a MLP, or using a contextual probing model) leads to significant gains over the linear probing model.", "On NER and grammatical error detection, we observe very similar performance between the MLP and LSTM + Linear modelsthis indicates that the probing model simply needed more capacity to extract the necessary information from the CWR s.", "On conjunct identification and great-grandparent prediction, two tasks that probe syntactic knowledge unlikely to be encoded in CWR s, adding parameters as a task-trained component of our probing model leads to large gains over simply adding parameters to the probing model.", "This indicates that the pretrained contextualizers do not capture the information necessary for the task, since such 
"This analysis also reveals insights about contextualizer fine-tuning, which seeks to specialize the CWRs for an end task (Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2018).", "Our results confirm that task-trained contextualization is important when the end task requires specific information that may not be captured by the pretraining task (§4).", "However, such end-task-specific contextualization can come from either fine-tuning CWRs or using fixed output features as inputs to a task-trained contextualizer; Peters et al. (2019) begin to explore when each approach should be applied.", "We quantify the transferability of CWRs by how well they can do on the range of probing tasks: representations that are more transferable will perform better than alternatives across tasks.", "When analyzing the representations produced by each layer of pretrained contextualizers, we observe marked patterns in layerwise transferability (Figure 3).", "The first layer of contextualization in recurrent models (original and 4-layer ELMo) is consistently the most transferable, even outperforming a scalar mix of layers on most tasks (see Appendix D for scalar mix results).", "Schuster et al. (2019) see the same trend in English dependency parsing.", "By contrast, transformer-based contextualizers have no single most-transferable layer; the best-performing layer for each task varies, and is usually near the middle.", "Accordingly, a scalar mix of transformer layers outperforms the best individual layer on most tasks (see Appendix D).", "Pretraining encourages the model to encode pretraining-task-specific information; it learns transferable features incidentally.", "We hypothesize that this is an inherent trade-off: since these models use fixed-sized vector representations, task-specificity comes at the cost of generality and transferability.", "To investigate the task-specificity of the representations generated by each contextualizer layer, we assess how informative each layer of representation is for the pretraining task, essentially treating it as a probe.", "We focus on the ELMo-based models, since the authors have released code for training their contextualizers.", "Furthermore, the ELMo-based models facilitate a controlled comparison: they only differ in the contextualizer architecture used.", "We evaluate how well CWR features perform the pretraining task, bidirectional language modeling.", "Specifically, we take the pretrained representations for each layer and relearn the language model softmax classifiers used to predict the next and previous token.", "The ELMo models are trained on the Billion Word Benchmark, so we retrain the softmax classifier on similar data to mitigate any possible effects from domain shift.", "We split the held-out portion of the Billion Word Benchmark into train (80%, 6.2M tokens) and evaluation (20%, 1.6M tokens) sets and use this data to retrain and evaluate the softmax classifiers.", "We expect that biLM perplexity will be lower when training the softmax classifiers on representations from layers that capture more information about the pretraining task.", "Figure 4 presents the performance of softmax classifiers trained to perform the bidirectional language modeling task, given just the CWRs as input.", "We notice that higher layers in recurrent models consistently achieve lower perplexities.", "Interestingly, we see that layers 1 and 2 in the 4-layer ELMo
model have very similar performancethis warrants further exploration.", "On the other hand, the layers of the ELMo (transformer) model do not exhibit such a monotonic increase.", "While the topmost layer is best (which we expected, since this is the vector originally fed into a softmax classifier during pretraining), the middle layers show varying performance.", "Across all models, the representations that are better-suited for language modeling are also those that exhibit worse probing task performance (Figure 3), indicating that contextualizer layers trade off between encoding general and task-specific features.", "These results also reveal a difference in the layerwise behavior of LSTMs and transformers; moving up the LSTM layers yields more task-specific representations, but the same does not hold for transformers.", "Better understanding the differences between transformers and LSTMs is an active area of research (Chen et al., 2018; Tang et al., 2018), and we leave further exploration of these observations to future work.", "These observations motivate the gradual unfreezing method of Howard and Ruder (2018), where the model layers are progressively unfrozen (starting from the final layer) during the fine-tuning process.", "Given our observation that higher-level LSTM layers are less general (and more pretraining task-specific), they likely have to be fine-tuned a bit more in order to make them appropriately task specific.", "Meanwhile, the base layer of the LSTM already learns highly transferable features, and may not benefit from fine-tuning.", "Successful pretrained contextualizers have used self-supervised tasks such as bidirectional language modeling (Peters et al., 2018a) and next sentence prediction (Devlin et al., 2018), which enable the use of large, unannotated text corpora.", "However, contextualizers can also be pretrained on explicitly supervised objectives, as done in pretrained sentence embedding methods (Con-neau et al., 2017).", "To better understand how the choice of pretraining task affects the linguistic knowledge within and transferability of CWR s, we compare pretraining on a range of different explicitly-supervised tasks with bidirectional language model pretraining.", "To ensure a controlled comparison of different pretraining tasks, we fix the contextualizer's architecture and pretraining dataset.", "All of our contextualizers use the ELMo (original) architecture, and the training data from each of the pretraining tasks is taken from the PTB.", "Each of the (identi-cal) models thus see the same tokens, but the supervision signal differs.", "5 We compare to (1) a noncontextual baseline (GloVe) to assess the effect of contextualization, (2) a randomly-initialized, untrained ELMo (original) baseline to measure the effect of pretraining, and (3) the ELMo (original) model pretrained on the Billion Word Benchmark to examine the effect of training the bidirectional language model on more data.", "Table 3 presents the average target task performance of each layer in contextualizers pretrained on twelve different tasks (biLM and the eleven tasks from 2 with PTB annotations).", "Bidirectional language modeling pretraining is the most effective on average.", "However, the settings that achieve the highest performance for individual target tasks often involve transferring between related tasks (not shown in Table 3; see Appendix E).", "For example, when probing CWR s on 5 We omit the OpenAI transformer and BERT from this comparison, since code for pretraining these 
contextualizers is not publicly available.", "the syntactic dependency arc classification (EWT) task, we see the largest gains from pretraining on the task itself, but with a different dataset (PTB).", "However, pretraining on syntactic dependency arc prediction (PTB), CCG supertagging, chunking, the ancestor prediction tasks, and semantic dependency arc classification all give better performance than bidirectional language model pretraining.", "Although related task transfer is beneficial, we naturally see stronger results from training on more data (the ELMo original BiLM trained on the Billion Word Benchmark).", "This indicates that the transferability of pretrained CWR s relies on pretraining on large corpora, emphasizing the utility and importance of self-supervised pretraining.", "Furthermore, layer 0 of the BiLM is the highest-performing single layer among PTB-pretrained contextualizers.", "This observation suggests that lexical information is the source of the language model's initial generalizability, since layer 0 is the output of a character-level convolutional neural network with no token-level contextual information.", "Methodologically, our work is most similar to Shi et al. (2016b), Adi et al. (2017), and Hupkes et al. (2018), who use the internal representations of neural models to predict properties of interest.", "Conneau et al. (2018) construct probing tasks to study the linguistic properties of sentence embedding methods.", "We focus on contextual word representations, which have achieved state-of-the-art results on a variety of tasks, and examine a broader range of linguistic knowledge.", "In contemporaneous work, Tenney et al. (2019) evaluate CoVe (McCann et al., 2017), ELMo (Pe-ters et al., 2018a), the OpenAI Transformer (Rad-ford et al., 2018), and BERT (Devlin et al., 2018) on a variety of sub-sentence linguistic analysis tasks.", "Their results also suggest that the aforementioned pretrained models for contextualized word representation encode stronger notions of syntax than higher-level semantics.", "They also find that using a scalar mix of output layers is particularly effective in deep transformer-based models, aligned with our own probing results and our observation that transformers tend to encode transferable features in their intermediate layers.", "Furthermore, they find that ELMo's performance cannot be explained by a model with access to only local context, indicating that ELMo encodes linguistic features from distant tokens.", "Several other papers have examined how architecture design and choice of pretraining task affect the quality of learned CWR s.", "Peters et al. (2018b) study how the choice of neural architecture influences the end-task performance and qualitative properties of CWR s derived from bidirectional language models (ELMo).", "Bowman et al. 
(2018) compare a variety of pretraining tasks and explore the the impact of multitask learning.", "Prior work has employed a variety of other methods to study the learned representations in neural models, such as directly examining the activations of individual neurons (Karpathy et al., 2015; Li et al., 2015; Shi et al., 2016a, inter alia ), ablating components of the model and dataset (Kuncoro et al., 2017; Gaddy et al., 2018; Khandelwal et al., 2018), or interpreting attention mechanisms (Bahdanau et al., 2015); see Belinkov and Glass (2019) for a recent survey.", "One particularly relevant line of work involves the construction of synthetic tasks that a model can only solve if it captures a particular phenomenon (Linzen et al., 2016; Jumelet and Hupkes, 2018; Wilcox et al., 2018; Futrell and Levy, 2019, inter alia ).", "Zhang and Bowman (2018) compare the syntactic knowledge of language models and neural machine translation systems.", "We widen the range of pretraining tasks and target probing model tasks to gain a more complete picture.", "We also focus on a stronger contextualizer architecture, ELMo (origi-nal), that has produced state-of-the-art results.", "Several studies have sought to intrinsically evaluate noncontextual word representations with word similarity tasks, such as analogies (Mikolov et al., 2013).", "These methods differ from our approach in that they require no extra parameters and directly assess the vectors, while our probing models must be trained.", "In this regard, our method is similar to QVEC (Tsvetkov et al., 2015).", "We study the linguistic knowledge and transferability of contextualized word representations with a suite of sixteen diverse probing tasks.", "The features generated by pretrained contextualizers are sufficient for high performance on a broad set of tasks.", "For tasks that require specific information not captured by the contextual word representation, we show that learning task-specific contextual features helps to encode the requisite knowledge.", "In addition, our analysis of patterns in the transferability of contextualizer layers shows that the lowest layer of LSTMs encodes the most transferable features, while transformers' middle layers are most transferable.", "We find that higher layers in LSTMs are more task-specific (and thus less gen-eral), while transformer layers do not exhibit this same monotonic increase in task-specificity.", "Prior work has suggested that higher-level contextualizer layers may be expressly encoding higher-level semantic information.", "Instead, it seems likely that certain high-level semantic phenomena are incidentally useful for the contextualizer's pretraining task, leading to their presence in higher layers.", "Lastly, we find that bidirectional language model pretraining yields representations that are more transferable in general than eleven other candidate pretraining tasks.", "We thank Johannes Bjerva for sharing the semantic tagging dataset used in Bjerva et al. (2016).", "We also thank the members of the Noah's ARK group at the University of Washington, the researchers at the Allen Institute for Artificial Intelligence, and the anonymous reviewers for their valuable feedback.", "NL is supported by a Washington Research Foundation Fellowship and a Barry M. Goldwater Scholarship.", "YB is supported by the Harvard Mind, Brain, and Behavior Initiative." ]
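To make the layerwise task-specificity analysis above concrete (relearning only the language-model softmax on top of a frozen contextualizer layer and comparing held-out perplexities), here is a minimal sketch. It assumes per-layer representations and next-token targets have already been extracted from the corpus; the backward (previous-token) classifier is symmetric, and all names are illustrative.

```python
import math
import torch
import torch.nn as nn

def relearn_lm_softmax(train_reps, train_next, eval_reps, eval_next,
                       vocab_size, epochs=5, lr=1e-3):
    """Fit only a softmax classifier that predicts the next token from frozen
    layer representations; return held-out perplexity. Lower perplexity
    suggests the layer is more specific to the pretraining task."""
    clf = nn.Linear(train_reps.size(-1), vocab_size)
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(clf(train_reps), train_next).backward()
        opt.step()
    with torch.no_grad():                       # evaluate on held-out data
        return math.exp(loss_fn(clf(eval_reps), eval_next).item())
```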
[ "abstain", "method", "result", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "abstain", "abstain", "objective", "objective", "objective", "method", "method", "method", "result", "result", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "method", "other", "abstain", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "objective", "abstain", "method", "abstain", "result", "result", "result", "abstain", "abstain", "result", "other", "other", "other", "other" ]
[ "Despite the success of sequence-to-sequence (seq2seq) models in semantic parsing, recent work has shown that they fail in compositional generalization , i.e., the ability to generalize to new structures built of components observed during training.", "In this work, we posit that a span-based parser should lead to better compositional generalization.", "We propose SPANBASEDSP, a parser that predicts a span tree over an input utterance, explicitly encoding how partial programs compose over spans in the input.", "SPANBASEDSP extends Pasupat et al. (2019) to be comparable to seq2seq models by", "(i) training from programs, without access to gold trees, treating trees as latent variables,", "(ii) parsing a class of non-projective trees through an extension to standard CKY.", "On the GEOQUERY , SCAN and CLOSURE datasets, SPANBASEDSP performs similarly to strong seq2seq baselines on random splits, but dramatically improves performance compared to baselines on splits that require compositional generalization: from 61.0 to 88.9 average accuracy.", "The most dominant approach in recent years for semantic parsing, the task of mapping a natural language utterance to an executable program, has been based on sequence-to-sequence (seq2seq) models (Jia and Liang, 2016; Dong and Lapata, 2016; Wang et al., 2020, inter alia ).", "In these models, the output program is decoded step-by-step (autoregressively), using an attention mechanism that softly ties output tokens to the utterance.", "Despite the success of seq2seq models, Finegan-Dollak et al. (2018), Keysers et al. (2020), and Herzig and Berant (2019) recently demonstrated that such models fail at compositional generalization , that is, they do not generalize to program structures that were not seen at training time.", "For example, a model that observes at training time the questions What states border China? and What is the largest state? fails to generalize to questions such as What states border the largest state? .", "This is manifested in large performance drops on data splits designed to measure compositional generalization ( compositional splits ), and is in contrast to the generalization abilities of humans (Fodor and Pylyshyn, 1988).", "In this work, we posit that the poor generalization of seq2seq models is due to the fact that the input utterance and output program are only tied softly through attention.", "We revisit a more traditional approach for semantic parsing (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Liang et al., 2011), where partial programs are predicted over short spans in the utterance, and are composed to build the program for the entire utterance.", "Such an explicit inductive bias for compositionality should encourage compositional generalization.", "Specifically, we propose to introduce such inductive bias via a span-based parser (Stern et al., 2017; Pasupat et al., 2019), equipped with the advantages of modern neural architectures.", "Our model, SPANBASEDSP, predicts for every span in the input a category , which is either a constant from the underlying knowledge-base, a composition category, or a null category.", "Given the category predictions for all spans, we can construct a tree over the input utterance and deterministically compute the output program.", "For example, in Figure 1, the category for the tree node covering the span 'New York borders ?' is the composition category join , indicating the composition of the predicate next_to_1 with the entity stateid('new york').",
"Categories are predicted for each span independently, resulting in a very simple training procedure.", "CKY is used at inference time to find the best span tree , which is a tree with a category predicted at every node.", "The output program is then computed deterministically from this tree. [Figure 1: a span tree whose join nodes compose, bottom-up, next_to_1, next_to_1(NY), state(next_to_1(NY)), loc_2(state(next_to_1(NY))), and finally capital(loc_2(state(next_to_1(NY)))).]", "We enhance the applicability of span-based semantic parsers (Pasupat et al., 2019) in terms of both supervision and expressivity , by overcoming two technical challenges.", "First, we do not use gold trees as supervision, only programs with no explicit decomposition over the input utterance.", "To train with latent trees, we use a hard-EM approach, where we search for the best tree under the current model corresponding to the gold program, and update the model based on this tree.", "Second, some gold trees are non-projective, and cannot be parsed with a binary grammar.", "Thus, we extend the grammar of CKY to capture a class of non-projective structures that are common in semantic parsing.", "This leads to a model that is comparable and competitive with the prevailing seq2seq approach.", "We evaluate our approach on three datasets, and find that SPANBASEDSP performs similarly to strong seq2seq baselines on standard i.i.d. (random) splits, but dramatically improves performance on compositional splits, by 32.9, 34.6 and 13.5 absolute accuracy points on GEOQUERY (Zelle and Mooney, 1996), CLOSURE (Bahdanau et al., 2019), and SCAN (Lake and Baroni, 2018) respectively.", "Our code and data are available at https://github.com/jonathanherzig/span-based-sp .", "We define span-based semantic parsing as follows.", "Given a training set \{(x_i, z_i)\}_{i=1}^{M}, where x_i is an utterance and z_i is the corresponding program, our goal is to learn a model that maps a new utterance x to a span tree T (defined below), such that program(T) = z.", "The deterministic function program(\cdot) maps span trees to programs.", "Span trees A span tree T is a tree (see Figure 1) where, similar to constituency trees, each node covers a span (i, j) with tokens x_{i:j} = (x_i, x_{i+1}, \ldots, x_j).", "A span tree can be viewed as a mapping from every span (i, j) to a single category c \in C, where categories describe how the meaning of a node is derived from the meaning of its children.", "A category c is one of the following: \Phi : a set of domain-specific categories representing domain constants, including entities and predicates.", "E.g., in Figure 1, capital , state , loc_2 and next_to_1 are binary predicates, and stateid('new york') is an entity.", "join : a category for a node whose meaning is derived from the meaning of its two children.", "At most one of the children's categories can be the \emptyset category.", "\emptyset : a category for", "(i) a node that does not affect the meaning of the utterance.", "For example, in Figure 1, the nodes that cover 'What is the' and '?' are tagged by \emptyset ;", "(ii) spans that do not correspond to constituents (tree nodes).", "Overall, the category set is C = \Phi \cup \{\emptyset, \mathrm{join}\}.", "We also define the terminal category set \Phi^+ = \Phi \cup \{\emptyset\}, corresponding to categories that appear directly over the utterance.",
"Computing programs for span trees Given a mapping from spans to categories specifying a span tree T, we use the function program(\cdot) to find the program for T.", "Concretely, program(T) iterates over the nodes in T bottom-up, and generates a program z_{i:j} for each node covering the span (i, j).", "The program z_{i:j} is computed deterministically.", "For a node with a category c \in \Phi, z_{i:j} = c.", "For a join node over the span (i, j), we determine z_{i:j} by composing the programs of its children, z_{i:s} and z_{s:j}, where s is the split point.", "As in Combinatory Categorial Grammar (Steedman, 2000), composition is simply function application, where a domain-specific type system is used to determine which child is the function and which is the argument (along with the exact argument position for predicates with multiple arguments).", "If the category of one of the children is \emptyset, the program for z_{i:j} is copied from the other child.", "E.g., in Figure 1, the span (8, 9), where z_{8:9} = stateid('new york'), combines with the span (10, 11), where z_{10:11} = next_to_1.", "As z_{10:11} is a binary predicate that takes an argument of type state, and z_{8:9} is an entity of type state, the output program is z_{8:11} = next_to_1(stateid('new york')).", "If no combination is possible according to the type system, the execution of program(T) fails (§3.2).",
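The program(·) computation just described is a small bottom-up recursion. The sketch below assumes a binarized tree and leaves the domain-specific type system as a stub; `apply_types` and the `Node` layout are hypothetical placeholders, not code from this work.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    category: str                        # a domain constant, "join", or "NULL"
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def apply_types(f, a):
    """Stub for the domain-specific type system: decides which argument is the
    function, returns the composed program, or raises on a type clash."""
    raise NotImplementedError

def program(node: Node):
    """Compute z_{i:j} bottom-up (sketch)."""
    if node.category != "join":
        return None if node.category == "NULL" else node.category  # constant
    z_left, z_right = program(node.left), program(node.right)
    if z_left is None:                   # a NULL child: copy the sibling
        return z_right
    if z_right is None:
        return z_left
    return apply_types(z_left, z_right)  # function application, e.g.
                                         # next_to_1 + NY -> next_to_1(NY)
```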
"Unlike seq2seq models, computing programs with span trees is explicitly compositional.", "Our main hypothesis is that this strong inductive bias should improve compositional generalization.", "Span-based parsing has had success in both syntactic (Stern et al., 2017; Kitaev and Klein, 2018) and semantic parsing (Pasupat et al., 2019).", "The intuition is that modern sequence encoders are powerful, and thus we can predict a category for every span independently , reducing the role of global structure.", "This leads to simple and fast training.", "Specifically, our parser is based on a model p_\theta(T[i,j] = c), parameterized by \theta, that provides for every span (i, j) a distribution over categories c \in C.", "Due to the above independence assumption, the log-likelihood of a tree T is defined as: \log p_\theta(T) = \sum_{i<j} \log p_\theta(T[i,j]), (1) where, similar to Pasupat et al. (2019), the sum is over all spans i < j and not only over constituents.", "We next describe the model p_\theta(T[i,j]) and its training, assuming we have access to gold span trees at training time (§3.1).", "We will later (§3.3) remove this assumption, and describe a CKY-based inference procedure (§3.2) that finds for every training example (x, z) the (approximately) most probable span tree T_{train} such that program(T_{train}) = z.", "We use T_{train} as a replacement for the gold tree.", "Last, we present an extension of our model that covers a class of span trees that are non-projective (§3.4).", "We describe the architecture and training procedure of our model (SPANBASEDSP), assuming we are given for every utterance x a gold tree T, for which program(T) = z.", "Similar to Pasupat et al. (2019), we minimize the negative log-likelihood -\log p_\theta(T) (Eq. 1) for the gold tree T.", "The loss decomposes over spans into cross-entropy terms for every span (i, j).", "This effectively results in a multi-class classification problem, where for every span x_{i:j} we predict a category c \in C.", "Training in this setup is trivial and does not require any structured inference.", "Concretely, the architecture of SPANBASEDSP is based on a BERT-base encoder (Devlin et al., 2019) that yields a contextual representation h_i \in \mathbb{R}^{h_{dim}} for each token x_i in the input utterance.", "We represent each span (i, j) by concatenating its start and end representations [h_i; h_j], and apply a 1-hidden-layer network to produce a real-valued score s(x_{i:j}, c) for a span (i, j) and category c: s(x_{i:j}, c) = [W_2 \, \mathrm{relu}(W_1 [h_i; h_j])]_{\mathrm{ind}(c)}, (2) where W_1 \in \mathbb{R}^{250 \times 2 h_{dim}}, W_2 \in \mathbb{R}^{|C| \times 250}, and \mathrm{ind}(c) is the index of the category c.", "We take a softmax to produce the probabilities: p_\theta(T[i,j] = c) = \frac{\exp[s(x_{i:j}, c)]}{\sum_{c'} \exp[s(x_{i:j}, c')]}, (3) and train the model with a cross-entropy loss averaged over all spans, as mentioned above.",
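Eqs. 2-3 translate almost line-for-line into code. A minimal sketch, assuming `h` holds the per-token encoder states and `gold` maps spans to category indices; the names and training loop are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpanScorer(nn.Module):
    """s(x_{i:j}, c) = [W2 relu(W1 [h_i; h_j])]_{ind(c)}  (Eq. 2)."""
    def __init__(self, h_dim: int, num_categories: int, hidden: int = 250):
        super().__init__()
        self.w1 = nn.Linear(2 * h_dim, hidden)
        self.w2 = nn.Linear(hidden, num_categories)

    def forward(self, h, i, j):
        span_rep = torch.cat([h[i], h[j]], dim=-1)      # [h_i; h_j]
        return self.w2(torch.relu(self.w1(span_rep)))   # scores over C

def training_loss(scorer, h, gold):
    """Cross-entropy averaged over all spans i < j; softmax of Eq. 3 is
    implicit in CrossEntropyLoss."""
    ce = nn.CrossEntropyLoss()
    losses = [ce(scorer(h, i, j).unsqueeze(0), torch.tensor([gold[(i, j)]]))
              for i in range(len(h)) for j in range(i + 1, len(h))]
    return torch.stack(losses).mean()
```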
"While we assume span independence at training time, at test time we must output a valid span tree.", "We now describe an approximate K-best CKY algorithm that searches for the K most probable trees under p_\theta(T), and returns the highest-scoring one that is semantically valid , i.e., one that can be mapped to a program (footnote 1: the requirement that trees are semantically valid is what prevents exact search).", "As we elaborate below, some trees cannot be mapped to a program, due to violations of the type system.", "We start by re-writing our objective function, as proposed in Pasupat et al. (2019).", "Given our definition for p_\theta(T[i,j] = c), the log-likelihood is: \log p_\theta(T) = \sum_{i<j} \log p_\theta(T[i,j]) = \sum_{i<j} \big[ s(x_{i:j}, T[i,j]) - \log \sum_{c'} \exp[s(x_{i:j}, c')] \big].", "We shift the scoring function s(\cdot) for each span, such that the score for the \emptyset category is zero: s'(x_{i:j}, \cdot) := s(x_{i:j}, \cdot) - s(x_{i:j}, \emptyset).", "Because softmax is shift-invariant, we can replace s(\cdot) with s'(\cdot) and preserve correctness.", "This is motivated by the fact that \emptyset nodes, such as the one covering 'What is the' in Figure 1, do not affect the semantics of the utterance.", "By shifting scores such that s'(x_{i:j}, \emptyset) = 0 for all spans, \emptyset nodes do not affect the overall tree score.", "Spans that do not correspond to tree nodes are labeled by \emptyset and also do not affect the tree score.", "Furthermore, as \sum_{i<j} \log \sum_{c'} \exp[s'(x_{i:j}, c')] does not depend on T at all, maximizing \log p_\theta(T) is equivalent to maximizing the tree score: S(T) := \sum_{i<j} s'(x_{i:j}, T[i,j]).", "This scoring function can be maximized using CKY (Cocke, 1969; Kasami, 1965; Younger, 1967).", "We now propose a grammar, which imposes further restrictions on the space of possible output trees at inference time.", "We use a small grammar G = (N, \Phi^+, R, S), where N = \{S, \mathrm{join}\} is the set of non-terminals, \Phi^+ is the set of terminals (defined in §2), R is a set of four rules detailed in Figure 2, and S is a special start symbol.", "The four grammar rules impose the following constraints on the set of possible output trees:", "(a) a join or S node can have at most one \emptyset child, as explained in §2;", "(b) \emptyset nodes, which carry no semantics, combine with semantic elements on their left;", "(c) except at the root, where they combine with elements on their right.", "Imposing such a consistent tree structure is useful for training SPANBASEDSP when predicted trees are used for training (§3.3).", "The grammar G can generate trees that are not semantically valid.", "For example, we could generate the program capital(placeid('mount mckinley')), which is semantically vacuous.", "We use a domain-specific type system and assign the score S(T) = -\infty to every tree that yields a semantically invalid program.", "This global factor prevents exact inference, and thus we perform K-best parsing, keeping the top-K (K = 5) best trees for every span (i, j) and non-terminal.", "Alg. 1 summarizes CKY inference; it outputs \pi(i, j, X), the maximal score for a tree with non-terminal root X over the span (i, j).", "In Lines 1-3 we initialize the parse chart, going over all spans, setting \pi(i, j, \mathrm{join}) to the top-K highest-scoring domain constants (\Phi), and fixing the score for \emptyset to be zero.", "We then perform the typical CKY recursion to find the top-K trees that can be constructed through composition (Line 6), merge them with the domain constants found during initialization (Line 7), and keep the overall top-K trees.", "Once inference is done, we retrieve the top-K trees from \pi(1, |x|, S), iterate over them in descending score order, and return the first tree T that is semantically valid.",
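A deliberately simplified, 1-best version of the chart recursion in Alg. 1 may help fix ideas. The real algorithm keeps top-K candidates per cell, tracks the ∅ and S categories separately, and discards semantically invalid trees; none of that is shown here, and the two score functions are assumed inputs.

```python
def cky_best(shifted_join, constant_score, n):
    """1-best CKY over shifted scores with the binary rule join := join join.
    shifted_join(i, j): s'(x_{i:j}, join); constant_score(i, j): best score of
    labeling (i, j) directly with a domain constant. Spans use fencepost
    indices, so (i, j) covers tokens x_i .. x_{j-1}."""
    chart, back = {}, {}
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length
            best, split = constant_score(i, j), None   # terminal option
            for s in range(i + 1, j):                  # composition option
                cand = shifted_join(i, j) + chart[(i, s)] + chart[(s, j)]
                if cand > best:
                    best, split = cand, s
            chart[(i, j)], back[(i, j)] = best, split
    return chart[(0, n)], back   # backpointers reconstruct the span tree
```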
"We now remove the assumption of access to gold trees at training time, in line with standard supervised semantic parsing, where only the gold program z is given, without its decomposition over x.", "This can be viewed as a weakly-supervised setting, where the correct span tree is a discrete latent variable.", "In this setup, our goal is to maximize \log p_\theta(z|x) = \log \sum_{T: \mathrm{program}(T)=z} p_\theta(T) \geq \log \max_{T: \mathrm{program}(T)=z} p_\theta(T).", "Because marginalizing over trees is intractable, we take a hard-EM approach (Liang et al., 2017; Min et al., 2019), and replace the sum over trees with an argmax.", "More concretely, to approximately solve the argmax and find the highest-scoring tree, T_{train}, we employ a constrained version of Alg. 1 that prunes out trees that cannot generate z.", "We first remove all predictions of constants that do not appear in z by setting their score to -\infty: \forall c \in \Phi \setminus \mathrm{const}(z), \; \forall i, j: s'(x_{i:j}, c) := -\infty, where \mathrm{const}(z) is the set of domain constants appearing in z.", "Second, we allow a composition of two nodes covering spans (i, s) and (s, j) only if their sub-programs z_{i:s} and z_{s:j} can compose according to z.", "For instance, in Figure 1, a span with the sub-program capital can only compose with a span with the sub-program loc_2(\cdot).", "After running this constrained CKY procedure, we return the highest-scoring tree that yields the correct program, T_{train}, if one is found.", "We then treat the span structure of T_{train} as labels for training the parameters of SPANBASEDSP.", "Past work on weakly-supervised semantic parsing often used maximum marginal likelihood, especially when training from denotations only (Guu et al., 2017).", "In this work, we found hard-EM to be simple and sufficient, since we are given the program z, which provides a rich signal for guiding search in the space of latent trees.", "Exact match features The challenge of weakly-supervised parsing is that SPANBASEDSP must learn both how to map language phrases to constants and how the span tree is structured.", "To alleviate the language-to-constant problem, we add an exact match feature, based on a small lexicon, indicating whether a phrase in x matches the language description of a category c.", "These features are considered in SPANBASEDSP when some phrase matches a category from \Phi, updating the score s(x_{i:j}, c) to be: [W_2 \, \mathrm{relu}(W_1 [h_i; h_j])]_{\mathrm{ind}(c)} + \lambda \cdot \phi(x_{i:j}, c), where \phi(x_{i:j}, c) is an indicator that returns 1 if c \in \mathrm{lexicon}[x_{i:j}] and 0 otherwise, and \lambda is a hyper-parameter that sets the feature's importance.", "We use two types of \mathrm{lexicon}[\cdot] functions.", "In the first, the lexicon is created automatically to map the names of entities (not predicates), as they appear in \Phi, to their corresponding constant (e.g., lexicon['new york'] = stateid('new york')).", "This endows SPANBASEDSP with a copying mechanism , similar to seq2seq models, for predicting entities unseen during training.", "In the second lexicon, we manually add no more than two examples of language phrases for each constant in \Phi.", "E.g., for the predicate next_to_1, we update the lexicon to include lexicon['border'] = lexicon['borders'] = next_to_1.", "This requires minimal manual work (if no language phrases are available), but is done only once, and is common in semantic parsing (Zettlemoyer and Collins, 2005; Wang et al., 2015; Liang et al., 2017).",
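For the hard-EM search of §3.3 and the exact-match feature above, the operational steps are a score mask and an additive bonus. A sketch under an assumed data layout (a dict from spans to per-category scores); none of these names come from the paper's code.

```python
NEG_INF = float("-inf")

def mask_scores(span_scores, gold_constants):
    """Hard-EM pruning: for all c in Phi \\ const(z), set s'(x_{i:j}, c) := -inf,
    so the constrained CKY can only build trees that yield the gold program z."""
    for (i, j), scores in span_scores.items():
        for c in list(scores):
            if c not in ("join", "NULL") and c not in gold_constants:
                scores[c] = NEG_INF
    return span_scores

def exact_match_bonus(score, phrase, c, lexicon, weight):
    """Lexicon feature: score + lambda * phi(x_{i:j}, c), where phi = 1 iff c is
    listed for the phrase (e.g. lexicon['borders'] = {'next_to_1'})."""
    return score + weight * (c in lexicon.get(phrase, ()))
```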
"Our span-based parser assumes composition can only be done for adjacent spans that together form a contiguous span.", "However, this assumption does not always hold (Liang et al., 2011).", "For example, in Figure 3, while the predicate pop_1 should combine with the predicate state, the spans they align to ( people and state , respectively) are not contiguous, as they are separated by most , which contributes the semantics of a superlative. [Figure 3: a non-projective span tree whose join nodes compose largest_one(pop_1(state(all))).]", "In constituency parsing, such non-projective structures are treated by adding rules to the grammar G (Maier et al., 2012; Corro, 2020; Stanojević and Steedman, 2020).", "We identify one specific class of non-projective structures that is frequent in semantic parsing (Figure 3), and expand the grammar G and the CKY algorithm to support this structure.", "Specifically, we add the ternary grammar rule join := join join join.", "During CKY, when calculating the top-K trees for a span (i, j) (Line 6 in Alg. 1), we also consider the following top-K scores for the non-terminal join: \max_{s_1 \in i..(j-2), \; s_2 \in (s_1+1)..(j-1)} [\, s'(x_{i:j}, \mathrm{join}) + \pi(i, s_1, \mathrm{join}) + \pi(s_1{+}1, s_2, \mathrm{join}) + \pi(s_2{+}1, j, \mathrm{join}) \,], that is, the span is split into three parts.", "The score of the sub-tree is then the sum of the score of the root added to the scores of the three children.", "To compute the program for such ternary nodes, we again use our type system, where we first compose the programs of the two outer spans (i, s_1) and (s_2+1, j), and then compose the resulting program with the program corresponding to the span (s_1+1, s_2).", "Supporting ternary nodes in the tree increases the time complexity of CKY from O(n^3) to O(n^4) for our implementation (footnote 2: Corro (2020) show an O(n^3) algorithm for this type of non-projective structure).", "4 Experiments and Results We now present our experimental evaluation, which demonstrates the advantage of span-based parsing for compositional generalization.", "We compare to baseline models over two types of data splits:", "(a) IID split , where the training and test sets are sampled from the same distribution, and", "(b) compositional split , where the test set includes structures that are unseen at training time.", "Details on the experimental setup are given in Appendix A. 4.1 Datasets We evaluate on the following datasets (Table 1).", "GEOQUERY Contains 880 questions about US geography (Zelle and Mooney, 1996), using the FunQL formalism (Kate et al., 2005).", "For the IID split, we use the standard train/test split, randomly sampling 10% of the training set for development.", "We additionally use two compositional splits, based on program templates ( TEMPLATE ) and on program lengths ( LENGTH ).", "For the compositional TEMPLATE split, we use the procedure from Finegan-Dollak et al. (2018) and split the 880 examples by templates.", "A template is created by anonymizing entities in the program to their type (both stateid('new york') and stateid('utah') are anonymized to STATE).", "We then split into train/development/test sets, such that all examples that share a template are assigned to the same set.", "We also verify that the sizes of these sets are as close as possible to those of the IID split.", "For the compositional LENGTH split, we sort the dataset by program token length and take the longest 280 examples to be the test set.", "We then randomly split the shortest 600 examples between the train and development sets, taking 10% of the 600 examples for the latter.", "CLEVR and CLOSURE CLEVR (Johnson et al., 2017) contains synthetic questions, created using 80 templates, over synthetic images with multiple objects of different shapes, colors, materials and sizes (example in Fig. 4 in the Appendix).", "The recent CLOSURE dataset (Bahdanau et al., 2019) includes seven new question templates that are created by combining referring expressions of various types from CLEVR in new ways.", "We use the semantic parsing version of these datasets, where each image is described by a scene (knowledge-base) that holds the attributes and positional relations of all objects.", "We use programs in the DSL version from Mao et al. (2019).", "For our experiments, we take 5K examples from the original CLEVR training set and treat them as our development set.", "We use the other 695K examples as training data for our baselines.", "Importantly, we only use 10K training examples for SPANBASEDSP to reduce training time.", "We then create an IID split where we test on the CLEVR original development set (test scenes are not publicly available).", "We additionally define the CLOSURE split, which tests compositional generalization, where we test on CLOSURE.", "SCAN-SP SCAN (Lake and Baroni, 2018) contains natural language navigation commands that are mapped to action sequences ( x and y in Fig. 5 in the Appendix).", "As SCAN lacks programs, we automatically translate the input to programs ( z in Fig. 5) to create the semantic parsing version of SCAN, denoted SCAN-SP (more details are given in Appendix B).", "We experiment with the random SIMPLE split from Lake and Baroni (2018) as our IID split.", "We further use the primitive right ( RIGHT ) and primitive around right ( AROUNDRIGHT ) compositional splits from Loula et al. (2018).",
"For each split, we randomly assign 20% of the training set to the development set.
Table 2: Denotation accuracies for all models, including SPANBASEDSP ablations (dev / test per split).
Model            | SCAN-SP IID | SCAN-SP RIGHT | SCAN-SP AROUNDRIGHT | CLEVR IID   | CLOSURE     | GEOQUERY IID | GEOQUERY TEMPLATE | GEOQUERY LENGTH
SEQ2SEQ          | 100 / 99.9  | 100 / 11.6    | 100 / 0.0           | 100 / 100   | 100 / 59.5  | 83.3 / 78.5  | 71.6 / 46.0       | 86.7 / 24.3
+ELMo            | 100 / 100   | 100 / 54.9    | 100 / 41.6          | 100 / 100   | 100 / 64.2  | 83.3 / 79.3  | 83.3 / 50.0       | 86.7 / 25.7
BERT2SEQ         | 99.9 / 100  | 99.9 / 77.7   | 99.9 / 95.3         | 100 / 100   | 100 / 56.4  | 88.3 / 81.1  | 85.0 / 49.6       | 90.0 / 26.1
GRAMMAR          | 100 / 100   | 100 / 0.0     | 100 / 4.2           | 100 / 100   | 100 / 51.3  | 78.3 / 72.1  | 76.7 / 54.0       | 81.7 / 24.6
BART             | 100 / 100   | 100 / 50.5    | 100 / 100           | 100 / 100   | 100 / 51.5  | 93.3 / 87.1  | 86.7 / 67.0       | 90.0 / 19.3
END2END          | -           | -             | -                   | 99.9 / 99.8 | 99.9 / 63.3 | -            | -                 | -
SPANBASEDSP      | 100 / 100   | 100 / 100     | 100 / 100           | 97.0 / 96.7 | 98.9 / 98.8 | 88.3 / 86.1  | 93.3 / 82.2       | 95.0 / 63.6
 -lexicon        | 100 / 100   | 100 / 100     | 100 / 100           | 99.4 / 99.3 | 98.5 / 88.6 | 88.3 / 78.9  | 86.7 / 65.9       | 90.0 / 41.4
 -non-projective | -           | -             | -                   | -           | -           | 85.0 / 80.0  | 90.0 / 80.2       | 93.3 / 59.3
 +gold trees     | 100 / 100   | 100 / 100     | 100 / 100           | 100 / 96.8  | 100 / 96.7  | 91.2 / 86.4  | 100 / 81.8        | 96.7 / 68.6", "SEQ2SEQ Similar to Finegan-Dollak et al. (2018), our baseline parser is a standard seq2seq model (Jia and Liang, 2016) that encodes the utterance x with a BiLSTM encoder over pre-trained GloVe (Pennington et al., 2014) or ELMo (Peters et al., 2018) embeddings, and decodes the program with an attention-based LSTM decoder (Bahdanau et al., 2015) assisted by a copying mechanism for handling entities unseen during training time (Gu et al., 2016).", "BERT2SEQ Same as SEQ2SEQ, but we replace the BiLSTM encoder with BERT-base, which is identical to the encoder of SPANBASEDSP.", "GRAMMAR Grammar-based decoding has been shown to improve performance on IID splits (Krishnamurthy et al., 2017; Yin and Neubig, 2017).", "Because decoding is constrained by the grammar, the model outputs only valid programs, which could potentially improve performance on compositional splits.", "We use the grammar from Wong and Mooney (2007) for GEOQUERY, and write grammars for SCAN-SP and CLEVR + CLOSURE.", "The model architecture is identical to SEQ2SEQ.", "BART We additionally experiment with BART-base (Lewis et al., 2020), a seq2seq model pre-trained as a denoising autoencoder.", "END2END Semantic parsers generate a program that is executed to retrieve an answer.", "However, other end-to-end models directly predict the answer from the context without an executor, where the context can be an image (Hudson and Manning, 2018; Perez et al., 2018), a table (Herzig et al., 2020), etc.", "Because CLEVR and CLOSURE have a closed set of 28 possible answers and a short context (the scene), they are a good fit for end-to-end approaches.", "To check whether end-to-end models generalize compositionally, we implement the following model.", "We use BERT-base to encode the concatenation of the input x to a representation of all objects in the scene.", "Each scene object is represented by adding learned embeddings of all of its attributes: shape, material, size, color, and relative positional rank (from left to right, and from front to back).", "We fine-tune the model on the training set using cross-entropy loss, where the [CLS] token is used to predict the answer.", "Table 2 shows denotation accuracies for all baselines (top part) and our SPANBASEDSP model (middle part).", "For SPANBASEDSP, we also ablate the use of the manually constructed lexicon (§3.3) and the non-projective extension to CKY (§3.4), which is relevant only for GEOQUERY, where non-projective structures are more frequent.",
"The table shows that all baselines generalize well on the IID split, but suffer from a large accuracy drop on the compositional splits (except BERT2S EQ and BART on AROUNDRIGHT ).", "For instance, on the compositional CLOSURE split, all baselines achieve accuracy in the range of 51 .", "3 64 .", "2 , while performing perfectly on the IID split.", "Conversely, SPANBASEDSP performs almost identically on both splits.", "SPANBASEDSP attains near-perfect performance on all SCAN-SP and CLEVR splits, despite training on only 10 K examples from CLEVR compared to 695 K training examples for the baselines (70x less data).", "On GEOQUERY , SPANBASEDSP performs similarly to other semantic parsers on the IID split (Dong and Lapata, 2016), and loses just 4 points on the compositional TEMPLATE split.", "On the LENGTH split, SPANBASEDSP yields an accuracy of 63.6, substantially outperforming all baselines by more than 37 accuracy points.", "Our ablations show that the lexicon is crucial for GEOQUERY , which has a small training set.", "In this setting, learning the mapping from language phrases to predicates is challenging.", "Ablating nonprojective parsing also hurts performance for GEOQUERY , and leads to a reduction of 2-6 points for all of the splits.", "We now analyze whether trees learned by SPANBASEDSP are similar to gold trees.", "For this analysis we semi-automatically annotate our datasets with gold trees.", "We do this by manually creating a domain-specific lexicon for each dataset (extending the lexicon from 3.3), mapping domain constants to possible phrases in the input utterances.", "We then, for each example, traverse the program tree (rather than the span tree) bottom-up and annotate join and categories for spans in the utterance, aided by manually-written domain-specific rules.", "In cases where the annotation is ambiguous, e.g., examples with more than two instances of a specific domain constant, we do not produce a gold tree.", "We manage to annotate 100%/94.9%/95.9% of the examples in SCAN-SP/ GEOQUERY / CLEVR + CLOSURE respectively in this manner.", "We verify the correctness of our annotation by training SPANBASEDSP from our annotated gold trees (bottom part of Table 2).", "The results shows that training from these gold trees leads to similar performance as training only from programs.", "We then train SPANBASEDSP from gold programs, as explained in 3.3, and calculate F 1 test scores, comparing the predicted span trees to the gold ones.", "F 1 is computed between the two sets of labeled spans, taking into account both the spans and their categories, but excluding spans with the category that do not contribute to the semantics.", "Table 3 shows that for GEOQUERY the trees SPANBASEDSP predicts are similar to the gold Dataset Split F 1 SCAN-SPIID 100 RIGHT 100 AROUNDRIGHT 100 CLEVRIID 70.6 CLOSURE 70.6 GEOQUERY IID 94.7 TEMPLATE 91.6 LENGTH 93.7 Table 3: F 1 scores on the test set w.r.t to the semiautomatically annotated gold trees.", "trees (with 94.7, 91.6 and 93.7 F 1 scores for the IID , TEMPLATE and LENGTH splits respectively), and in SCAN-SP we predict perfect trees.", "On CLEVR, we get a lower F 1 score of 70.6 for both the IID and CLOSURE splits.", "However, when manually inspecting predicted trees on the IID split, we notice that predicted trees that are not identical to gold trees, are actually correct.", "This happens in cases where multiple gold trees are possible.", "For instance, in Figure 4 (in the Appendix), the span x 13:15 = matte block ? 
can be either parsed as [matte [block ?]], as in the figure, or [[matte block]", "?].", "This phenomena is common in CLEVR and CLOSURE, as span trees tend to be deep, and thus have more ambiguity.", "Our approach assumes a one-to-one mapping between domain constants and their manifestation as phrases in language.", "This leads to strong results on compositional generalization, but hurts the flexibil-ity that is sometimes necessary in semantic parsing.", "For example, in some cases predicates do not align explicitly to a phrase in the utterance or appear several times in the program but only once in the utterance (Berant et al., 2013; Pasupat and Liang, 2015).", "This is evident in text-to-SQL parsing, where an utterance such as What is the minimum, and maximum age of all singers from France? is mapped to SELECT min(age) , max(age) FROM singer WHERE country='France' .", "Here, the constant age is mentioned only once in language (but twice in the program), and country is not mentioned at all.", "Thus, our approach is more suitable for formalisms where there is tighter alignment between the natural and formal language.", "In addition, while we handle a class of nonprojective trees (3.4), there are other nonprojective structures that SPANBASEDSP can not parse.", "Until the neural era, semantic parsers used a lexicon and composition rules to predict partial programs for spans and compose them until a full program is predicted, and typically scored with a log-linear model given features over the utterance and the program (Zettlemoyer and Collins, 2005; Liang et al., 2011).", "In this work, we use a similar compositional approach, but take advantage of powerful span representations based on modern neural architectures.", "The most similar work to ours is by Pasupat et al. (2019), who presented a neural span-based semantic parser.", "While they focused on training using projective gold trees (having more supervision and less expressivity than seq2seq models) and testing on i.i.d examples, we handle non-projective trees, given only program supervision, rather than trees.", "More importantly, we show that this approach leads to dramatic gains in compositional generalization compared to autoregressive parsers.", "In recent years, work on compositional generalization in semantic parsing mainly focused on the poor performance of parsers in compositional splits (Finegan-Dollak et al., 2018), creating new datasets that require compositional generalization (Keysers et al., 2020; Lake and Baroni, 2018; Bahdanau et al., 2019), and proposing specialized architectures mainly for the SCAN task (Lake, 2019; Nye et al., 2020; Gordon et al., 2020; Liu et al., 2020; Gupta and Lewis, 2018).", "In this work we present a general-purpose architecture for semantic parsing that incorporates an inductive bias towards compositional generalization.", "Finally, concurrently to us, Shaw et al. 
(2020) induced a synchronous grammar over program and utterance pairs and used it to introduce a compositional bias, showing certain improvements over compositional splits.", "Seq2seq models have become unprecedentedly popular in semantic parsing but struggle to generalize to unobserved structures.", "In this work, we show that our span-based parser, SPANBASEDSP, that precisely describes how meaning is composed over the input utterance leads to dramatic improvements in compositional generalization.", "In future work, we plan to investigate ways to introduce the explicit compositional bias, inherent to SPANBASEDSP, directly into seq2seq models.", "We thank Ben Bogin, Nitish Gupta, Matt Gardner and the anonymous reviewers for their constructive feedback, useful comments and suggestions.", "This work was completed in partial fulfillment for the PhD degree of the first author, which was also supported by a Google PhD fellowship.", "This research was partially supported by The Yandex Initiative for Machine Learning, and the European Research Council (ERC) under the European Union Horizons 2020 research and innovation programme.", "(grant ERC DELPHI 802800)." ]
[ "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "abstain", "result", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "other", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "other", "method", "abstain", "other", "objective", "abstain", "abstain", "result", "objective", "other", "other", "other", "other" ]
[ "Multilingual topic models enable document analysis across languages through coherent multilingual summaries of the data.", "However, there is no standard and effective metric to evaluate the quality of multilingual topics.", "We introduce a new intrinsic evaluation of multilingual topic models that correlates well with human judgments of multilingual topic coherence as well as performance in downstream applications.", "Importantly, we also study evaluation for low-resource languages.", "Because standard metrics fail to accurately measure topic quality when robust external resources are unavailable, we propose an adaptation model that improves the accuracy and reliability of these metrics in low-resource settings.", "Topic models provide a high-level view of the main themes of a document collection (Boyd-Graber et al., 2017).", "Document collections, however, are often not in a single language, driving the development of multilingual topic models.", "These models discover topics that are consistent across languages, providing useful tools for multilingual text analysis (Vulic et al., 2015), such as detecting cultural differences (Gutirrez et al., 2016) and bilingual dictionary extraction (Liu et al., 2015).", "Monolingual topic models can be evaluated through likelihood (Wallach et al., 2009b) or coherence (Newman et al., 2010), but topic model evaluation is not well understood in multilingual settings.", "Our contributions are two-fold.", "We introduce an improved intrinsic evaluation metric for multilingual topic models, called Crosslingual Normalized Pointwise Mutual Information ( CNPMI , Section 2).", "We explore the behaviors of CNPMI at both the model and topic levels with six language pairs and varying model specifications.", "This metric correlates well with human judgments and crosslingual classification results (Sections 5 and 6).", "We also focus on evaluation in low-resource languages, which lack large parallel corpora, dictionaries, and other tools that are often used in learning and evaluating topic models.", "To adapt CNPMI to these settings, we create a coherence estimator (Section 3) that extrapolates statistics derived from antiquated, specialized texts like the Bible: often the only resource available for many languages.", "A multilingual topic contains one topic for each language.", "For a multilingual topic to be meaningful to humans (Figure 1), the meanings should be consistent across the languages, in addition to coherent within each language ( i.e. 
, all words in a topic are related).", "This section describes our approach to evaluating the quality of multilingual topics.", "After defining the multilingual topic model, we describe topic model evaluation, extending standard monolingual approaches to multilingual settings.", "Probabilistic topic models associate each document in a corpus with a distribution over latent topics, while each topic is associated with a distribution over words in the vocabulary.", "The most widely used topic model, latent Dirichlet allocation (Blei et al., 2003, LDA ), can be extended to connect languages.", "These extensions require additional knowledge to link languages together.", "One common encoding of multilingual knowledge is document links (indicators that documents are parallel or comparable), used in polylingual topic models (Mimno et al., 2009; Ni et al., 2009).", "In these models, each document d indexes a tuple of parallel/comparable language-specific documents, d^{(\ell)}, and the language-specific views of a document share the document-topic distribution \theta_d. The generative story for the document-links model is:
for each topic k and each language \ell do
    draw a distribution over words \phi_k^{(\ell)} \sim \mathrm{Dirichlet}(\beta);
for each document tuple d = (d^{(1)}, \ldots, d^{(L)}) do
    draw a distribution over topics \theta_d \sim \mathrm{Dirichlet}(\alpha);
    for each language \ell = 1, \ldots, L do
        for each token t \in d^{(\ell)} do
            draw a topic z_n \sim \theta_d;
            draw a word w_n \sim \phi_{z_n}^{(\ell)};
Alternatively, word translations (Jagarlamudi and Daumé III, 2010), concept links (Gutiérrez et al., 2016; Yang et al., 2017), and multi-level priors (Krstovski et al., 2016) can also provide multilingual knowledge. Since the polylingual topic model is the most common approach for building multilingual topic models (Vulić et al., 2013, 2015; Liu et al., 2015; Krstovski and Smith, 2016), our study will focus on this model. 2.2 Monolingual Evaluation Most automatic topic model evaluation metrics use co-occurrence statistics of word pairs from a reference corpus to evaluate topic coherence, assuming that coherent topics contain words that often appear together (Newman et al., 2010). The most successful (Lau et al., 2014) is normalized pointwise mutual information (Bouma, 2009, NPMI ). NPMI compares the joint probability of words appearing together, \Pr(w_i, w_j), to their probability assuming independence, \Pr(w_i)\Pr(w_j), normalized by the joint probability: \mathrm{NPMI}(w_i, w_j) = \frac{\log \frac{\Pr(w_i, w_j)}{\Pr(w_i)\Pr(w_j)}}{-\log \Pr(w_i, w_j)}. (1) The word probabilities are calculated from a reference corpus, R, typically a large corpus such as Wikipedia that can provide meaningful co-occurrence patterns that are independent of the target dataset. The quality of topic k is the average NPMI of all word pairs (w_i, w_j) in the topic: \mathrm{NPMI}_k = \frac{1}{\binom{C}{2}} \sum_{i \in W(\phi_k, C)} \sum_{j \neq i} \mathrm{NPMI}(w_i, w_j), (2) where W(\phi_k, C) are the C most probable words in the topic-word distribution \phi_k (the number of words is the topic's cardinality ). Higher NPMI_k means the topic's top words are more coupled.
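Eqs. 1-2 reduce to a few lines once document-level (co-)occurrence counts over the reference corpus R are in hand. A sketch, with the count structures (doc_freq, pair_freq) assumed to be precomputed; the zero convention for unseen pairs follows the text's treatment of missing co-occurrences.

```python
import math
from itertools import combinations

def npmi(p_ij, p_i, p_j):
    """Eq. 1: PMI normalized by -log of the joint probability (0 < p_ij < 1)."""
    if p_ij == 0.0:
        return 0.0  # convention for pairs unseen in the reference corpus
    return math.log(p_ij / (p_i * p_j)) / (-math.log(p_ij))

def topic_npmi(top_words, doc_freq, pair_freq, num_docs):
    """Eq. 2: average NPMI over all C-choose-2 pairs of a topic's top-C words,
    with probabilities estimated as document frequencies in R."""
    scores = [npmi(pair_freq.get(frozenset(p), 0) / num_docs,
                   doc_freq[p[0]] / num_docs,
                   doc_freq[p[1]] / num_docs)
              for p in combinations(top_words, 2)]
    return sum(scores) / len(scores)
```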
Topic 7 is monolingually incoherent, so it is a low quality topic even if it contains word translations. 2.3 Existing Multilingual Evaluations While automatic evaluation has been well-studied for monolingual topic models, there are no robust evaluations for multilingual topic models. We first consider two straightforward metrics that could be used for multilingual evaluation, both with limitations. We then propose an extension of NPMI that addresses these limitations. Internal Coherence. A simple adaptation of NPMI is to calculate the monolingual NPMI score for each language independently and take the average. We refer to this as internal NPMI (INPMI), as it evaluates coherence within a language. However, this metric does not consider whether the topic is coherent across languages, that is, whether a language-specific word distribution $\phi_k^{(\ell_1)}$ is related to the corresponding distribution in another language, $\phi_k^{(\ell_2)}$. Crosslingual Consistency. Another straightforward measurement is Matching Translation Accuracy (Boyd-Graber and Blei, 2009, MTA), which counts the number of word translations in a topic between two languages using a bilingual dictionary. This metric can measure whether a topic is well-aligned across languages literally, but cannot capture non-literal, more holistic similarities across languages. 2.4 New Metric: Crosslingual NPMI We extend NPMI to multilingual models, with a metric we call crosslingual normalized pointwise mutual information (CNPMI). This metric will be the focus of our experiments. A multilingually coherent topic means that if $w_{i,\ell_1}$ in language $\ell_1$ and $w_{j,\ell_2}$ in language $\ell_2$ are in the same topic, they should appear in similar contexts in comparable or parallel corpora $R^{(\ell_1, \ell_2)}$. [Figure 2: The coherence estimator takes multilingual topics and features extracted from them (e.g., EN-RO topics with Wikipedia CNPMI as the training target, and EN-AM topics at estimation time), then outputs an estimated topic coherence via a linear regressor $h$, with target $y = h(x)$ for features $x$.] Our adaptation of NPMI is based on the same principles as the monolingual version, but focuses on the co-occurrences of bilingual word pairs. Given a bilingual word pair $(w_{i,\ell_1}, w_{j,\ell_2})$, the co-occurrence of this word pair is the event where word $w_{i,\ell_1}$ appears in a document in language $\ell_1$ and the word $w_{j,\ell_2}$ appears in a comparable or parallel document in language $\ell_2$. The co-occurrence probability of each bilingual word pair is: $\Pr(w_{i,\ell_1}, w_{j,\ell_2}) \triangleq \frac{\left|\left\{ d : w_{i,\ell_1} \in d^{(\ell_1)},\ w_{j,\ell_2} \in d^{(\ell_2)} \right\}\right|}{\left| R^{(\ell_1, \ell_2)} \right|}$, (3) where $d = (d^{(\ell_1)}, d^{(\ell_2)})$ is a pair of parallel/comparable documents in the reference corpus $R^{(\ell_1, \ell_2)}$. When one or both words in a bilingual pair do not appear in the reference corpus, the co-occurrence score is zero. Similar to monolingual settings, CNPMI for a bilingual topic $k$ is the average of the NPMI scores of all $C^2$ bilingual word pairs, $\mathrm{CNPMI}(\ell_1, \ell_2, k) = \frac{\sum_{i,j}^{C} \mathrm{NPMI}(w_{i,\ell_1}, w_{j,\ell_2})}{C^2}$. (4) It is straightforward to generalize CNPMI from a language pair to multiple languages by averaging $\mathrm{CNPMI}(\ell_i, \ell_j, k)$ over all language pairs $(\ell_i, \ell_j)$. 3 Adapting to Low-Resource Languages CNPMI needs a reference corpus for co-occurrence statistics. Wikipedia, which has good coverage of topics and vocabularies, is a common choice (Lau and Baldwin, 2016).
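As a concrete illustration of Eqs. (1), (3), and (4), the following is a minimal Python sketch of a document-level CNPMI computation. It is not the authors' implementation: the function name, the representation of paired reference documents as sets of word types, and the handling of degenerate pairs are assumptions made for the example.

```python
import math
from itertools import product

def cnpmi(topic_l1, topic_l2, paired_docs):
    """CNPMI (Eq. 4) for one bilingual topic.

    topic_l1, topic_l2: the C most probable topic words in each language.
    paired_docs: list of (doc_l1, doc_l2) pairs from the reference corpus
                 R^(l1,l2); each document is a set of word types.
    """
    n = len(paired_docs)
    total = 0.0
    for w1, w2 in product(topic_l1, topic_l2):
        # Document-level occurrence and co-occurrence counts (Eq. 3).
        c1 = sum(1 for d1, _ in paired_docs if w1 in d1)
        c2 = sum(1 for _, d2 in paired_docs if w2 in d2)
        c12 = sum(1 for d1, d2 in paired_docs if w1 in d1 and w2 in d2)
        if c12 == 0 or c12 == n:
            # Zero co-occurrence contributes a score of zero; the
            # full-co-occurrence case is skipped since NPMI's
            # normalizer -log Pr(w1, w2) would be zero there.
            continue
        p1, p2, p12 = c1 / n, c2 / n, c12 / n
        # NPMI (Eq. 1): PMI normalized by -log of the joint probability.
        total += math.log(p12 / (p1 * p2)) / -math.log(p12)
    return total / (len(topic_l1) * len(topic_l2))  # average over C^2 pairs
```

Counting co-occurrence at the document-pair level, rather than within a sliding window, is what lets the metric work with comparable (not word-aligned) reference corpora.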
Unfortunately, Wikipedia is often unavailable or not large enough for low-resource languages. It only covers 282 languages (https://meta.wikimedia.org/wiki/List_of_Wikipedias), and only 249 languages have more than 1,000 pages; many of these pages are short or unlinked to a high-resource language. Since CNPMI requires comparable documents, the usable reference corpus is defined by paired documents. Another option for a parallel reference corpus is the Bible (Resnik et al., 1999), which is available in most world languages (the Bible is available in 2,530 languages); however, it is small and archaic. It is good at evaluating topics such as family and religion, but not modern topics like biology and the Internet. Without reference co-occurrence statistics relevant to these topics, CNPMI will fail to judge topic coherence; it must give the ambiguous answer of zero. Such a score could mean a totally incoherent topic where each word pair never appears together (Topic 6 in Figure 1), or an unjudgeable topic (Topic 5). Our goal is to obtain a reliable estimation of topic coherence for low-resource languages when the Bible is the only reference. We propose a model that can correct the drawbacks of a Bible-derived CNPMI. While we assume bilingual topics paired with English, our approach can be applied to any high-resource/low-resource language pair. We take Wikipedia's CNPMI from high-resource languages as accurate estimations. We then build a coherence estimator on topics from high-resource languages, with the Wikipedia CNPMI as the target output. We use linear regression with the features below. Given a topic in a low-resource language, the estimator produces an estimated coherence (Figure 2). 3.1 Estimator Features The key to the estimator is to find features that capture whether we should trust the Bible. For generality, we focus on features independent of the available resources other than the Bible. This section describes the features, which we split into four groups. Base Features (BASE) Our base features include information we can collect from the Bible and the topic model: cardinality $C$, CNPMI and INPMI, MTA, and topic word coverage (TWC), which counts the percentage of topic words in a topic that appear in a reference corpus. Crosslingual Gap (GAP) A low CNPMI score could indicate a topic pair where each language has a monolingually coherent topic but the two are not about the same theme (Topic 6 in Figure 1). Thus, we add two features to capture this information using the Bible: mismatch coefficients (MC) and internal comparison coefficients (ICC): $\mathrm{MC}(\ell_1; \ell_2, k) = \frac{\mathrm{CNPMI}(\ell_1, \ell_2, k)}{\mathrm{INPMI}(\ell_1, k) + \varepsilon}$, (5) $\mathrm{ICC}(\ell_1, \ell_2, k) = \frac{\mathrm{INPMI}(\ell_1, k) + \varepsilon}{\mathrm{INPMI}(\ell_2, k) + \varepsilon}$, (6) where $\varepsilon$ is a smoothing factor ($\varepsilon = 0.001$ in our experiments). [Figure 3: As the estimator adds additional feature sets (BASE, BASE+GAP, BASE+GAP+ERA, BASE+GAP+ERA+DRIFT), the estimated topic coherence scores (solid lines) approach the Wikipedia CNPMI (dashed lines). Example feature values: Topic 1 has CNPMI = 0.007, cardinality = 40, TWC(EN) = 0.58, TWC(AM) = 0.5, MTA = 0.0, word era mean 1823 (std 21), WS mean 0.204 (std 0.208), MC(EN) = 0.691, MC(AM) = 0.757, ICC(EN, AM) = 1.095; Topic 8 has CNPMI = 0.081, cardinality = 5, TWC(EN) = 0.6, TWC(AM) = 1.0, MTA = 0.0, word era mean 1923 (std 60), WS mean 0.212 (std 0.224), MC(EN) = 2.371, MC(AM) = 0.462, ICC(EN, AM) = 0.195.]
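The two GAP features in Eqs. (5) and (6) are simple ratios; a sketch follows, under the assumption that CNPMI and INPMI scores have already been computed (the function names are hypothetical):

```python
EPS = 0.001  # the smoothing factor epsilon from Eqs. (5) and (6)

def mismatch_coefficient(cnpmi_k, inpmi_l1_k):
    """MC (Eq. 5): a topic's crosslingual coherence relative to its
    internal (monolingual) coherence in language l1."""
    return cnpmi_k / (inpmi_l1_k + EPS)

def internal_comparison_coefficient(inpmi_l1_k, inpmi_l2_k):
    """ICC (Eq. 6): ratio of the two languages' internal coherences;
    values near 1 indicate comparably coherent sides."""
    return (inpmi_l1_k + EPS) / (inpmi_l2_k + EPS)
```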
MC recognizes the gap between crosslingual and monolingual coherence, so a higher MC score indicates a gap between coherence within and across languages. Similarly, ICC compares monolingual coherence to tell if both languages are coherent: the closer ICC is to 1, the more comparable internal coherence the two languages have. Word Era (ERA) Because the Bible's vocabulary is unable to evaluate modern topics, we must tell the model what the modern words are. The word era features are the earliest usage year for each word in a topic. We use both the mean and standard deviation as features. Meaning Drift (DRIFT). The meaning of a word can expand and drift over time. For example, in the Bible, web appears in Isaiah 59:5: “They hatch cockatrice' eggs, and weave the spider's web.”", "The word web could be evaluated correctly in an animal topic.", "For modern topics, however, the Bible fails to capture modern meanings of web, as in Topic 5 (Figure 1).", "To address this meaning drift, we use a method similar to Hamilton et al. (2016).", "For each English word, we calculate the context vector from the Bible and from Wikipedia with a window size of five and compute the cosine similarity between them as word similarity.", "Similar context vectors mean that the usage in the Bible is consistent with Wikipedia.", "We calculate word similarities for all the English topic words in a topic and use the average and standard deviation as features.", "In Figure 3, Topic 1 is coherent while Topic 8 is not.", "From left to right, we incrementally add new feature sets, and show how the estimated topic coherence scores (dashed lines) approach the ideal CNPMI (dotted lines).", "When only using the BASE features, the estimator gives a higher prediction to Topic 8 than to Topic", "1. Their low MTA and TWC prevent accurate evaluations.", "Adding GAP does not help much.", "However, ICC(EN, AM, k = 1) is much smaller, which might indicate a large gap in internal coherence between the two languages.", "Adding ERA makes the estimated scores flip between the two topics.", "[Table 1: Number of document pairs in the training and reference datasets and number of dictionary entries for each language pair. EN-RO: training 1,272, Wikipedia 8,126, Bible 1,189, Wiktionary 29,836; EN-SV: 3,378, 9,067, 1,189, 42,953; EN-AM: 421, 1,581, 1,189, 1,091; EN-TL: 542, 4,166, 1,189, 10,970; EN-TR: 874, 5,524, 1,189, 16,853; EN-ZH: 874, 10,000, 1,189, 22,946.]", "Topic 1 has a word era of 1823, much older than Topic 8's word era of 1923, indicating that Topic 8 includes modern words the Bible lacks (e.g., computer).", "Using all the features, the estimator gives more accurate topic coherence evaluations.", "We experiment on six languages (Table 1) from three corpora: Romanian (RO) and Swedish (SV) from EuroParl as representative of well-studied and rich-resource languages (Koehn, 2005); Amharic (AM) and Tagalog (TL) from collected news, as low-resource languages (Huang et al., 2002a,b); and Chinese (ZH) and Turkish (TR) from TED Talks 2013 (Tiedemann, 2012), adding language variety to our experiments.", "Each language is paired with English as a bilingual corpus.", "Typical preprocessing methods (stemming, stop word removal, etc.
) are often unavailable for low-resource languages.", "For a meaningful comparison across languages, we do not apply any stemming or lemmatization strategies to any language, including English, beyond removing digits and symbols.", "However, we remove words that appear in more than 30% of documents for each language.", "Each language pair is separately trained using the MALLET (McCallum, 2002) implementation of the polylingual topic model.", "Each experiment runs five Gibbs sampling chains with 1,000 iterations per chain and twenty topics.", "The hyperparameters are set to the default values ($\alpha = 0.1$, $\beta = 0.01$), and are optimized every 50 iterations in MALLET using slice sampling (Wallach et al., 2009a).", "We use Wikipedia and the Bible as reference corpora for calculating co-occurrence statistics.", "Different numbers of Wikipedia articles are available for each language pair (Table 1), while the Bible contains a complete set of 1,189 chapters for all of its translations (Christodoulopoulos and Steedman, 2015).", "[Figure 4: A sample crowdsourcing task: an English topic (rights, government, newspaper, country, justice, democratic) is shown next to a topic in another language whose words are glossed as press, free, newspaper, right, journalists, people, and system, with the question: Are these two groups of words talking about the same thing?]", "We use Wiktionary as the dictionary to calculate MTA.", "In addition to experimenting on Wikipedia-based CNPMI, we also re-evaluate the topics' Bible coherence using our estimator.", "In the following experiments, we use an AdaBoost regressor with linear regression as the coherence estimator (Friedman, 2002; Collins et al., 2000).", "The estimator takes a topic and a low-quality CNPMI score as input and outputs (hopefully) an improved CNPMI score.", "To make our testing scenario more realistic, we treat one language as our estimator's test language and train on multilingual topics from the other languages.", "We use three-fold cross-validation over languages to select the best hyperparameters, including the learning rate and loss function in AdaBoost.R2 (Drucker, 1997).", "We first study CNPMI at the topic level: does a particular topic make sense?", "An effective evaluation should be consistent with human judgment of the topics (Chang et al., 2009).", "In this section, we measure gold-standard human interpretability of multilingual topics to establish which automatic measures of topic interpretability work best.", "Following monolingual coherence evaluations (Lau et al., 2014), we present topic pairs to bilingual CrowdFlower users.", "Each task is a topic pair with the top ten topic words ($C = 10$) for each language.", "We ask if both languages' top words in a multilingual topic are talking about the same concept (Figure 4), and users make a judgment on a three-point scale: coherent (2 points), somewhat coherent (1 point), and incoherent (0 points).", "To ensure the users have adequate language competency, we insert several topics that are easily identifiable as incoherent as a qualification test.", "We randomly select sixty topics from each language pair (360 topics total), and each topic is judged by five users.", "We take the average of the judgment points and calculate Pearson correlations with the proposed evaluation metrics (Table 2).", "NPMI-based scores are separately calculated from each reference corpus.", "CNPMI (the extended metric) has higher correlations with human judgments than INPMI (the naive adaptation of monolingual NPMI), while MTA (matching translation accuracy) correlations are comparable to CNPMI.", "Unsurprisingly, when using Wikipedia as the reference, the correlations
are usually higher than when using the Bible.", "The Bible's archaic content limits its ability to estimate human judgments in modern corpora (Section 3).", "Next, we compare CNPMI to two baselines: INPMI and MTA.", "As expected, CNPMI outperforms INPMI overall, regardless of reference corpus, because INPMI only considers monolingual coherence.", "[Figure 5: Topic 1 (EN-ZH): design, film, artist, image, beautiful, paired with Chinese words glossed as works, art, film, artist, and visual (MTA = 0.08, CNPMI = 0.37, INPMI = 0.40). Topic 2 (EN-TL): Russia, Noriega, pope, court, years, paired with Russia, pamahalaan (government), Noriega, pope, eroplano (plane) (MTA = 0.12, CNPMI = 0.16, INPMI = 0.20). Caption: MTA fails to capture semantically related words (Topic 1) and only looks at translation pairs regardless of internal coherence (Topic 2).] MTA has higher correlations than CNPMI", "scores from the Bible, because the Bible fails to give accurate estimates due to limited topic coverage.", "MTA, on the other hand, only depends on dictionaries, which are more comprehensive than the Bible.", "It is also possible that users are judging coherence based on translations across a topic pair, rather than the overall coherence, which would closely correlate with MTA.", "The Bible, by itself, produces CNPMI values that do not correlate well with human judgments (Table 2).", "After training an estimator (Section 4.2), we calculate Pearson's correlation between Wikipedia's CNPMI and the estimated topic coherence score (Table 3).", "A higher correlation with Wikipedia's CNPMI means more accurate coherence.", "As a baseline, Bible-based CNPMI without adaptation has negative and near-zero correlations with Wikipedia; it does not capture coherence.", "After training the estimator, the correlations become stronger, indicating the estimated scores are closer to Wikipedia's CNPMI.", "We analyze MTA from two aspects, the inability to capture semantically related non-translation topic words and insensitivity to cardinality, to show why MTA is not an ideal measurement, even though it correlates well with human judgments.", "Semantics We take two examples with EN-ZH (Topic 1) and EN-TL (Topic 2) in Figure 5.", "Topic 1 has fewer translation pairs than Topic 2, which leads to a lower MTA score for Topic", "1. However, all words in Topic 1 talk about art, while it is hard to interpret Topic", "2. 
Wikipedia CNPMI scores reveal (as a side note, normally one would not estimate CNPMI on rich-resource languages using low-resource languages;", "for completeness, however, we also include these situations)", "that Topic 1 is more coherent.", "Because our experiments are on datasets with little divergence between the themes discussed across languages, this is uncommon for us but could appear in noisier datasets.", "Cardinality Increasing cardinality diminishes a topic's coherence (Lau and Baldwin, 2016).", "We vary the cardinality of topics from ten to fifty at intervals of ten (Figure 6).", "As cardinality increases, more low-probability and irrelevant words appear in the topic, which lowers CNPMI scores.", "However, MTA stays stable or increases with increasing cardinality.", "Thus, MTA fails to fulfill a critical property of topic model evaluation.", "Finally, MTA requires a comprehensive multilingual dictionary, which may be unavailable for low-resource languages.", "Additionally, most languages often only have one dictionary, which makes it problematic to use the same resource (a language's single multilingual dictionary) for both training and evaluating models that use a dictionary to build multilingual topics (Hu et al., 2014).", "Given these concerns, we continue the paper's focus on CNPMI as a data-driven alternative to MTA.", "However, for many applications MTA may suffice as a simple, adequate evaluation metric.", "While the previous section looked at individual topics, we also care about how well CNPMI characterizes the quality of models through an average of a model's constituent topics.", "Adding more knowledge to multilingual topic models improves topics (Hu et al., 2014), so an effective evaluation should reflect this improvement as knowledge is added to the model.", "For polylingual topic models, this knowledge takes the form of the number of linked documents.", "We start by experimenting with no multilingual knowledge: no document pairs share a topic distribution $\theta_d$ (but the documents are in the collection as unlinked documents).", "We then increase the number of document pairs that share $\theta_d$ from 20% of the corpus to 100%.", "Fixing the topic cardinality at ten, CNPMI captures the improvements in models (Figure 7) through a higher coherence score.", "Topic models are often used as a feature extraction technique for downstream machine learning", "applications, and topic model evaluations should reflect whether these features are useful (Ramage et al., 2009).", "For each model, we apply a document classifier trained on the model parameters to test whether CNPMI is consistent with classification accuracy.", "Specifically, we want our classifier to transfer information from training on one language to testing on another (Smet et al., 2011; Heyman et al., 2016).", "We train a classifier on one language's documents, where each document's feature vector is the document-topic distribution $\theta_d$.", "We apply this to TED Talks, where each document is labeled with multiple categories.", "We choose the most frequent seven categories across the corpus as labels (design, global issues, art, science, technology, business, and culture), and only have labeled documents on one side of a bilingual topic model.", "CNPMI has very strong correlations with classification results, though using the Bible as the reference corpus gives slightly lower correlation, with higher variance, than Wikipedia (Figure 8).", "In Section 5.3, we improve Bible-based CNPMI scores for individual topics.", "Here, we show the estimator also improves model-level coherence.", "We apply the estimator on the models created
in Section 6.2 and calculate the correlation between estimated scores and Wikipedia's CNPMI (Table 4).", "The coherence estimator substantially improves scores except for Turkish: the correlation is better before applying the estimator (0.911).", "We suspect a lack of overlap between the topics of Turkish and the languages other than Chinese is to blame (Figure 9); the features used by the estimator do not generalize well to other kinds of features; training on more language pairs would hopefully solve this [Figure 6: Increasing cardinality of topic pairs makes it harder to judge the coherence. Three panels plot scores against cardinality (10 to 50) for the six language pairs: CNPMI with the Wikipedia reference, CNPMI with the Bible reference, and MTA with Wiktionary.]", "issue.", "Turkish is also morphologically rich, and our preprocessing completely ignores morphology.", "One challenge with low-resource languages is that even if Wikipedia is available, it may have too few documents to accurately calculate coherence.", "As a final analysis, we examine how the reliability of CNPMI degrades with a smaller reference corpus.", "We randomly sample 20% to 100% of document pairs from the reference corpora and evaluate the polylingual topic model with all document links (Figure 10), again fixing the cardinality at 10.", "CNPMI remains stable with fewer reference documents, as long as the number of reference documents is sufficiently large.", "If there are too few reference documents (for example, 20% of Amharic Wikipedia is only 316 documents), then CNPMI degrades.", "Topic Coherence Many coherence metrics based on co-occurrence statistics have been proposed besides NPMI.", "Similar metrics, such as asymmetrical word pair metrics (Mimno et al., 2011) and combinations of existing measurements (Lau et al., 2014; Röder et al., 2015), correlate well with human judgments.", "NPMI has been the current gold standard for evaluation and improvement of monolingual topic models (Pecina, 2010; Newman et al., 2011).", "External Tasks Another approach is to use a model for predictive tasks: the better the results are on external tasks, the better a topic model is assumed to be.", "A common task is held-out likelihood (Wallach et al., 2009b; Jagarlamudi and Daumé III, 2010; Fukumasu et al., 2012), but as Chang et al.
(2009) show, this does not always reflect human interpretability.", "Other specific tasks have also been used, such as bilingual dictionary extraction (Liu et al., 2015; Ma and Nasukawa, 2017), cultural difference detection (Gutiérrez et al., 2016), and crosslingual document clustering (Vulić et al., 2015).", "Representation Learning Topic models are one example of a broad class of techniques for learning representations of documents (Bengio et al., 2013).", "Other approaches learn representations at the word (Klementiev et al., 2012; Vyas and Carpuat, 2016), paragraph (Mogadala and Rettinger, 2016), or corpus level (Søgaard et al., 2015).", "However, neural representation learning approaches are often data hungry and not adaptable to low-resource languages.", "The approaches here could help improve the evaluation of all multilingual representation learning algorithms (Schnabel et al., 2015).", "We have provided a comprehensive analysis of topic model evaluation in multilingual settings, including for low-resource languages.", "While evaluation is an important area of topic model research, no previous work has studied evaluation of multilingual topic models.", "Our work provided two primary contributions to this area: a new intrinsic evaluation metric, CNPMI, as well as a model for adapting this metric to low-resource languages without large reference corpora.", "As the first study on evaluation for multilingual topic models, there is still room for improvement and further applications.", "For example, human judgment is more difficult to measure than in monolingual settings, and it is still an open question how to design a reliable and accurate survey for multilingual quality judgments.", "As a measurement of multilingual coherence, we plan to extend CNPMI to high-dimensional representations, e.g., multilingual word embeddings, particularly in low-resource languages (Ruder et al., 2017).", "We thank the anonymous reviewers for their insightful and constructive comments.", "Hao has been supported under subcontract to Raytheon BBN Technologies, by DARPA award HR0011-15-C-0113.", "Boyd-Graber and Paul were supported by NSF grant IIS-1564275.", "Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsors." ]
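To make the adaptation model of Section 3 concrete, the following sketches a leave-one-language-out training loop for the coherence estimator, using scikit-learn's AdaBoost.R2 implementation with a linear base regressor as in the paper's setup. The data layout and parameter values are illustrative assumptions, not the authors' exact configuration (note also that older scikit-learn versions name the estimator argument base_estimator).

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.linear_model import LinearRegression

def estimate_coherence(features_by_pair, wiki_cnpmi_by_pair, test_pair):
    """Train on high-resource language pairs (Wikipedia CNPMI as target),
    then predict coherence for the held-out (low-resource) test pair.

    features_by_pair: dict mapping a pair key (e.g., 'EN-RO') to an
        (n_topics, n_features) array of BASE/GAP/ERA/DRIFT features.
    wiki_cnpmi_by_pair: dict mapping the same keys to target score arrays.
    """
    train_pairs = [p for p in features_by_pair if p != test_pair]
    X = np.vstack([features_by_pair[p] for p in train_pairs])
    y = np.concatenate([wiki_cnpmi_by_pair[p] for p in train_pairs])
    # Learning rate and loss would be chosen by cross-validation over
    # the training languages, per the experimental setup above.
    model = AdaBoostRegressor(estimator=LinearRegression(),
                              n_estimators=50,
                              learning_rate=1.0,
                              loss="linear")
    model.fit(X, y)
    return model.predict(features_by_pair[test_pair])
```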
[ "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "objective", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "Visual modifications to text are often used to obfuscate offensive comments in social media (e.g., !d10t) or as a writing style (1337 in leet speak), among other scenarios.", "We consider this as a new type of adversarial attack in NLP, a setting to which humans are very robust, as our experiments with both simple and more difficult visual perturbations demonstrate.", "We investigate the impact of visual adversarial attacks on current NLP systems on character-, word-, and sentence-level tasks, showing that both neural and nonneural models are, in contrast to humans, extremely sensitive to such attacks, suffering performance decreases of up to 82%.", "We then explore three shielding methodsvisual character embeddings, adversarial training, and rule-based recoverywhich substantially improve the robustness of the models.", "However, the shielding methods still fall behind performances achieved in non-attack scenarios, which demonstrates the difficulty of dealing with visual attacks.", "For humans, visual similarity can play a decisive role for assessing the meaning of characters.", "Some evidence for these are: the frequent swapping of similar looking characters in Internet slang or abusive comments, creative trademark logos, and attack scenarios such as domain name spoofing (see examples in Table 1).", "Recently, some NLP systems have exploited visual features to capture visual relationships among characters in compositional writing systems such as Chinese or Korean (Liu et al., 2017).", "However, in more general cases, current neural NLP systems have no built-in notion of visual character similarity.", "Rather, they either treat characters as discrete units forming a word or they represent characters by randomly initialized embeddings and Internet slang writing style n00b ,", "update them during trainingtypically in order to generate a character-based word representation that is robust to morphological variation or spelling mistakes ( Ma and Hovy, 2016).", "Intriguingly, this marked distinction between human and machine processing can be exploited as a blind spot of NLP systems.", "For example, spammers might send malicious emails or post toxic comments to online discussion forums ( Hosseini et al., 2017) by visually perturbing' the input text in such a way that it is still easily recoverable by humans.", "The issue of exposing and addressing the weaknesses of deep learning models to adversarial inputs , i.e., perturbed versions of original input samples, has recently received considerable attention.", "For instance, Goodfellow et al. (2015) showed that small perturbations in the pixels of an image can mislead a neural classifier to predict an incorrect label for the image.", "In NLP, Jia and Liang (2017) inserted grammatically correct but semantically irrelevant paragraphs to stories to fool neural reading comprehension models.", "Singh et al. 
(2018) showed significant drops in the performance of neural models for question answering when using simple paraphrases of the original questions.", "Unlike previous NLP attack scenarios, visual attacks, i.e., the exchange of characters in the input with visually similar alternatives, have the following 'advantages': 1) They do not require any linguistic knowledge beyond the character level, making the attacks straightforwardly applicable across languages, domains, and tasks.", "2) They are allegedly less damaging to human perception and understanding than, e.g., syntax errors or the insertion of negations (Hosseini et al., 2017).", "3) They do not require knowledge of the attacked model's parameters or loss function (Ebrahimi et al., 2018).", "In this work, we investigate to what extent recent state-of-the-art (SOTA) deep learning models are sensitive to visual attacks and explore various shielding techniques.", "Our contributions are: We introduce VIPER, a Visual Perturber that randomly replaces characters in the input with their visual nearest neighbors in a visual embedding space.", "We show that the performance of SOTA deep learning models substantially drops for various NLP tasks when attacked by VIPER.", "On individual tasks (e.g., Chunking) and attack scenarios, our observed drops are up to 82%.", "We show that, in contrast to NLP systems, humans are only mildly or not at all affected by visual perturbations.", "We explore three methods to shield from visual attacks, viz.", "visual character embeddings, adversarial training (Goodfellow et al., 2015), and rule-based recovery.", "We quantify to which degree and in which circumstances these are helpful.", "We point out that integrating visual knowledge with deep learning systems, as our visual character embeddings do, aims to make NLP models behave more like humans by taking cues directly from sensory information such as vision (code and data are available from https://github.com/UKPLab/naacl2019-like-humans-visual-attacks).", "2 Related Work Our work connects to two strands of literature: adversarial attacks and visually informed character embeddings.", "Adversarial Attacks are modifications to a classifier's input that are designed to fool the system into making an incorrect decision, while the original meaning is still understood by a human observer.", "Different forms of attacks have been studied in NLP and computer vision (CV), including at the character, syntactic, semantic and, in CV, the visual level.", "Ebrahimi et al. (2018) propose a character flipping algorithm to generate adversarial examples and use it to trick a character-level neural classifier.", "They show that the accuracy decreases significantly after a few manipulations if certain characters are swapped.", "Their character flipping approach requires very strong knowledge in the form of the attacked networks' gradients in a so-called white-box attack setup.", "Chen et al. (2018) find that reading comprehension systems often ignore important question terms, thus giving incorrect answers when these terms are replaced.", "Belinkov and Bisk (2018) show that neural machine translation systems break for all kinds of noise to which humans are robust, such as reordering characters in words, keyboard typos and spelling mistakes.", "Alzantot et al. (2018) replace words by synonyms to fool text classifiers.", "Iyyer et al. 
(2018) reorder sentences syntactically to generate adversarial examples.", "In contrast to those related works which perform attacks on the character level, our attacks allow perturbation of any character in a word while potentially minimizing impairment for humans.", "For example, the strongest attack in Belinkov and Bisk (2018) is random shuffling of all characters, which is much more difficult to restore for humans.", "To cope with adversarial attacks, adversarial training (Goodfellow et al., 2015) has been proposed as a standard remedy, in which training data is augmented with data that is similar to the data used to attack the neural classifiers.", "Rodriguez and Rojas-Galeano (2018) propose simple rule-based corrections to address a limited number of attacks, including obfuscation (e.g., idiots to !d10ts) and negation (e.g., idiots to NOT idiots).", "Most other approaches have been explored in the context of CV, such as adding a stability objective during training (Zheng et al., 2016) and distillation (Papernot et al., 2016).", "However, methods to increase robustness in CV have been shown to be less effective against more sophisticated attacks (Carlini and Wagner, 2017).", "Visual Character Embeddings were originally proposed to address large character vocabularies in 'compositional' languages like Chinese and Japanese.", "Shimada et al. (2016) and Dai and Cai (2017) employ a convolutional autoencoder to generate image-based character embeddings (ICE) for Japanese and Chinese text and show improvement on author and publisher identification tasks.", "Similarly, Liu et al. (2017) create ICEs from a CNN and show that ICEs carry more semantic content and are more suitable for rare characters.", "However, existing work on visual character embeddings has not used visual information to attack NLP systems or to shield them.", "To investigate the effects of visual attacks and propose methods for shielding, we introduce 1) a visual text perturber, 2) three character embedding spaces, and 3) methods for obtaining word embeddings from character embeddings, used as input representations in some of our experiments.", "Our visual perturber VIPER disturbs an input text in such a way that (ideally) it is still readable by humans but causes NLP systems to fail blatantly.", "We parametrize VIPER by a probability p and a character embedding space, CES (a CES may be any embedding space that can be used to identify the nearest neighbors of characters): for each character c in the input text a flip decision is made (i.i.d. Bernoulli distributed with probability p), and if a replacement takes place, one of up to 20 nearest neighbors in the CES is chosen.", "Thus, we denote VIPER as taking two arguments: VIPER = VIPER(p, CES). Note that VIPER is a black-box attacker as it does not require any knowledge of the attacked system.", "It would also be possible to design a more intelligent perturber that only disturbs content words (or hot words), similar to Ebrahimi et al. 
(2018), but this would increase the difficulty of realizing VIPER as a black-box attacker because different types of hot words may be relevant for different tasks.", "We consider three different character embedding spaces.", "The first is continuous, assigning each character a dense 576-dimensional representation, which allows, e.g., for computing cosine similarities between any two characters as well as nearest neighbors for each input character.", "The other two are discrete and merely used as arguments to VIPER.", "Thus, they are only required to specify nearest neighbors for standard input characters.", "For them, each character c in a selected range (e.g., the standard English alphabet a-zA-Z) is assigned a set", "of nearest neighbors, and all nearest neighbors are equidistant to c.", "All three CESs carry visual information, i.e., nearest neighbors are visually similar to the character in question.", "For practical reasons, we limit all our perturbations to the first 30k Unicode characters throughout.", "Image-based character embedding space (ICES) provides a continuous image-based character embedding (ICE) for each Unicode character.", "We retrieve a 24 × 24 image representation of the character (using Python's PIL library), then stack the rows of this matrix (with entries between 0 and 255) to form a 24 · 24 = 576-dimensional embedding vector.", "Description-based character embedding space (DCES) is based on the textual descriptions of Unicode characters.", "We first obtain descriptions of each character from the Unicode 11.0.0 final names list (e.g., latin small letter a for the character 'a').", "Then we determine a set of nearest neighbors by choosing all characters whose descriptions refer to the same letter in the same case, e.g., an alternative to latin small letter a is latin small letter a with grave, as it contains the keywords small and a.", "Easy character embedding space (ECES) provides manually selected simple visual perturbations.", "It contains exactly one nearest neighbor for each of the 52 characters a-zA-Z, chosen as a diacritic below or above the character (e.g., a variant of c with a diacritic for the character c).", "Differences between the CESs The three embedding spaces play different roles in our experiments.", "We use ICES as character representations in deep learning systems.", "DCES and ECES are used as input to VIPER to perturb our test data.", "ECES models a 'minimal perturbance with maximal impact' scenario: we assume that ECES perturbations do not or only minimally affect human perception but may still have a large impact upon NLP systems.", "Indeed, we could have chosen an even simpler embedding space, e.g., by considering visually identical characters in different alphabets, such as the Cyrillic 'а' (Unicode 1072) for a Latin 'a' (Unicode 97).", "(As an aside, we do not attack with ICES because we also shield with ICES, and this would be a (very unrealistic) white-box defense scenario.", "Besides, ICES is also more difficult to restore for humans (see below), making it less desirable for an attacker.)",
underlying character (such as L as a neighbor of A, or as a neighbor of", "i).", "In contrast, DCES sometimes has neighbors with considerable visual dissimilarity to the original character such as Cyrillic small letter i ( ) which rather resembles a mirror-inverted n.", "The overlap between ICES and DCES is modest: out of 20 neighbors, a character has on average only four to five common neighbors in ICES and DCES.", "Most neural NLP architectures encode text either on a character or word level.", "For the latter, word embeddings are needed.", "In this work, we use the ELMo architecture ( Peters et al., 2018) to obtain (contextualized) word embeddings based on characters, i.e., there exists no fixed vocabulary and there will be no (word-level) out-of-vocabulary issues due to perturbation.", "In the following, we outline our ELMo variant and a visual extension that includes visual signals from the input characters.", "SELMo: ELMo as proposed by Peters et al. (2018) first retrieves embeddings for every character in the input, which are learned as part of the network.", "ELMo then infers non-contextualized word embeddings by applying CNNs over all character embeddings in a word.", "Two layers of a deep bidirectional language model further process the word embeddings in their local sentential context and output contextualized word embeddings.", "We slightly extend ELMo to include character embeddings for the first 30k Unicode characters (instead of the default 256).", "We call this variant SELMo (Standard ELMo).", "It is worth pointing out that the learned character embeddings of SELMo carry almost no visual information, as illustrated in Table 2. That is, except for a few very standard cases, nearest neighbors of characters do not visually resemble the orginal characters, even when trained on the 1 billion word benchmark (Chelba et al., 2013).", "5 5 We believe SELMo nearest neighbors are more likely to be Chinese/Japanese/Korean (CJK) characters because these VELMo: To obtain a visually informed variant of ELMo, we replace learned character embeddings with the ICEs and keep the character embeddings fixed during training.", "This means that during training, the ELMo model learns to utilize visual features of the input, thus potentially being more robust against visual attacks.", "We call this variant VELMo (Visually-informed ELMo).", "To keep training times of SELMo and VELMo feasible, we use an output dimensionality of 512 instead of the original ELMo's 1024d output.", "Our detailed hyperparameter setup is given in A.1.", "We asked 6 human annotators, university employees and students with native or near-native English language skills, to recover the original underlying English sentences given some perturbed text (data taken from the POS tagging and Chunking tasks, see Table 4).", "We considered different conditions:", "(i) clean : VIPER ( 0 ; _ ) , i.e., no perturbation;", "(ii) VIPER ( p ; ICES ) for p = 0 : 2 ; 0 : 4 ; 0 : 6 ; 0 : 8 ;", "(iii) VIPER ( p ; DCES ) for p = 0 : 2 ; 0 : 4 ; 0 : 6 ; 0 : 8 ;", "(iv) easy : VIPER ( p ; ECES ) for p = 0 : 4 ; 0 : 8 .", "For each condition, we used 60-120 sentences, where at most 20 sentences of one condition were given to an annotator.", "Examples of selected conditions are shown in Table 3. 
Our rationale for including this recovery task is to test robustness of human perception under (our) visual perturbations.", "We focus on recovery instead of an extrinsic task such as POS because the latter would have required expert/trained annotators.", "We evaluate by measuring the normalized edit distance between the recovered sentence and the underlying original, averaged over all sequence pairs and all human annotators.", "We normalize by the maximum of the lengths of the two sequences.", "In our case, this metric can be interpreted as the fraction of characters that have been, on average, wrongly recovered by human annotators.", "We refer to the metric as error rate.", "Results are shown in Figure 1.", "In easy, there is almost no difference between perturbation levels p = 0.4 and p = 0.8, so we merge the two conditions.", "Humans make copy mistakes even when the input is not perturbed, as evidenced by a positive", "error rate in clean.", "Such mistakes are typically misspellings or the wrong type of quotation marks (e.g., straight vs. typographic). We observe a slightly higher error rate in easy than in clean. However, on average 75% of all sentences are (exactly) correctly recovered in easy, while this number is lower (72.5%) in clean. By chance, clean contains fewer sentences with quotation marks than easy, for which a copy mistake was more likely. This may explain easy's higher error rate. As we increase the perturbation level, the error rate increases consistently for DCES/ICES. It is noteworthy that DCES perturbations are easier to parse for humans than ICES perturbations. We think this is because DCES perturbations always retain a variant of the same character, while ICES may also disturb one character to another character (such as h to b). Another explanation is that ICES, unlike DCES and ECES, also disturbs numbers and punctuation. Numbers, especially, are more difficult to recover. However, even at the 80% disturbance level, humans can, on average, correctly recover at least 93% of all characters in the input text in all conditions. In summary, humans appear very good at understanding visual perturbations, and are almost perfectly robust to the easy perturbations of ECES. Since adversarial attacks should have minimal impact on humans (Szegedy et al., 2014), the good performance of humans especially on ECES and DCES makes these two spaces ideal candidates for attacks on NLP systems. 5 Computational Experiments We now evaluate the capabilities of SOTA neural network models to deal with visual attacks in four extrinsic evaluation tasks described in 5.1 and illustrated in Table 4. Hyperparameters of all our models are given in A.2. We first examine the robustness of all architectures to visual perturbations in 5.2 and then evaluate different shielding approaches in 5.3. 5.1 Tasks G2P: As our first task, we consider the character-level task of grapheme-to-phoneme (G2P) conversion. It consists of transcribing a character input stream into a phonetic representation. As our dataset, we choose the Combilex pronunciation dataset of American English (Richmond et al., 2009). [Table 4: NLP tasks considered in this work, along with (perturbed) examples and data split statistics. G2P: char-level, splits 5K/1K/1K; POS: word-level, example tags VBG PRP NN, splits 212K/44K/47K; Chunking: word-level, example tags B-VP B-NP I-NP, splits 212K/44K/47K; Toxic Comments: sent-level,
labels such as toxic, obscene, insult, splits 149K/10K/64K.] We frame G2P as a sequence tagging task. To do so, we first hard-align input and output sequences using a 1-0, 1-1, 1-2 alignment scheme (Schnober et al., 2016) in which an input character is matched with zero, one, or two output characters. Once this preprocessing is done, input and output sequences have equal lengths and we can apply a standard character-level BiLSTM to the aligned sequences (Reimers and Gurevych, 2017). POS & Chunking: We consider two word-level tasks. POS tagging associates each token with its corresponding word class (e.g., noun, adjective, verb). Chunking groups words into syntactic chunks such as noun and verb phrases (NP and VP), assigning a unique tag to each word, which encodes the position and type of the syntactic constituent, e.g., begin-noun-phrase (B-NP). We use the training, dev and test splits provided by the CoNLL-2000 shared task (Sang and Buchholz, 2000) and use the same BiLSTM architecture as above with SELMo/VELMo embeddings. Toxic comment (TC) classification: A very realistic use case for adversarial attacks is the toxic comment classification task. One could easily think of a scenario where a person with malicious intent explicitly aims to fool automated methods for detecting toxic comments or insults by obfuscating text with non-standard characters that are still human-readable. We conduct experiments on the TC classification task provided by Kaggle (https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/). It is a multi-label sentence classification task with six classes, i.e., toxic, severe toxic, obscene, threat, insult, identity hate. We use averaged SELMo/VELMo embeddings as input to an MLP. 5.2 VIPER attacks In Figure 2, we plot how various SOTA systems degrade as we perturb the test data using DCES. We do not only include our own systems, but also existing SOTA models: Marmot (Müller et al., 2013) and the Stanford POS tagger (SPT) (Manning et al., 2014). [Figure 2: Degradation of SOTA systems for different perturbation levels when attacked by VIPER(p, DCES); curves of $s^*(p)$ against $p$ for POS, Chunk, G2P, and TC. The colored regions show how the performance of other SOTA systems relates to ours (i.e., they all suffer from similar degradation).] Marmot is a feature-based POS tagger and is trained on our data splits. SPT is a bi-directional dependency network tagger that mostly employs lexical features. For SPT, we used the pretrained English model provided by the toolkit. Further, we include a FastText TC classifier which has achieved SOTA performance (https://www.kaggle.com/yekenot/pooled-gru-fasttext). We additionally experiment with word-level dependency embeddings for POS tagging and TC classification (Komninos and Manandhar, 2016). To compare the performance of different tasks, Figure 2 shows scores computed by: $s^*(p) = \frac{s(p)}{s(0)}$, where $p$ is the perturbation level and $s(p)$ is the score for each task at $p$, measured in edit distance for G2P, accuracy for POS tagging, micro-F1 for chunking, and AUCROC for TC classification. We invert the scores $g$ of G2P by $1/g$ since lower scores are better for edit distance. Thus, $s^*(0)$ is always 1 and $s^*(p)$ is the relative performance compared to the clean case of no perturbations. We see that all systems degrade considerably.
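The edit distance used for G2P scoring, and the error rate of the human recovery study (Levenshtein distance normalized by the maximum sequence length), can both be computed with a standard dynamic program. A self-contained sketch:

```python
def error_rate(original, recovered):
    """Normalized edit distance: Levenshtein distance divided by the
    maximum of the two sequence lengths (the human-study error rate)."""
    m, n = len(original), len(recovered)
    if max(m, n) == 0:
        return 0.0
    prev = list(range(n + 1))  # distances for the empty prefix
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if original[i - 1] == recovered[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / max(m, n)
```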
For example, all three POS taggers have a performance of below 60% of the clean score when 40% of the input characters are disturbed. Chunking degrades even more strongly, and G2P has the highest drop: a 10% perturbation level causes a 40% performance deterioration. This may be because G2P is a character-level task and the perturbation of a single character is analogous to perturbing a complete word in the word-level tasks. Finally, TC classification degrades least, i.e., only at p = 0.9 do we see a degradation of 30% relative to the clean score. These results appear to suggest that character-level tasks suffer the most from our VIPER attacks and sentence-level tasks the least. However, it is worthwhile pointing out that lower bounds for individual tasks may depend on the evaluation metric (e.g., AUCROC always yields 0.5 for majority class voting) as well as task-specific idiosyncrasies such as the size of the label space. We note that the degradation curves look virtually identical for both DCES and ECES perturbations (given in A.3). This is in stark contrast to human performance, where ECES was much easier to parse than DCES, indicating the discrepancies between human and machine text processing. 5.3 Shielding We study four forms of shielding against VIPER attacks: adversarial training (AT), visual character embeddings (CE), AT+CE, and rule-based recovery (RBR). For AT, we include visually perturbed data at train time. We do not augment the training data, but replace clean examples using VIPER in the same way as for the test data. Based on preliminary experiments with the G2P task, we apply VIPER to the training data using $p_{\mathrm{train}} = 0.2$. Higher levels of $p_{\mathrm{train}}$ did not appear to improve performance. For CE, we use fixed ICEs, either fed directly into a model (G2P) or via VELMo (all other tasks). For AT+CE, we combine adversarial training with visual embeddings. Finally, for RBR, we replace each non-standard character in the input stream with its nearest standard neighbor in ICES, where we define the standard character set as a-zA-Z plus punctuation. Rather than absolute scores, we report differences between the scores under one of the shielding treatments and the original scores: $\Delta_t := \hat{s}^*(p) - s^*(p)$, with $\hat{s}^*(p) := \hat{s}(p) / s(0)$, where $\hat{s}(p)$ is the score for each task using a form of shielding. The value $\Delta_t$ denotes the improvement of the scores from shielding method $t$ over the original scores without shielding. We normalize $\hat{s}(p)$ by the score $s(0)$ of the systems without shielding on clean data. We also note that our test perturbations are unseen during training for DCES; for ECES this would not make sense, because each character has only one nearest neighbor. In the following, we report results mostly for DCES and show the ECES results in A.3. We highlight marked differences between the results, however. All tasks typically profit considerably from AT (Figure 3 left). Chunking scores improve most; e.g., at p = 0.5, $\hat{s}^*$ is 17 percentage points (pp) higher than $s^*$. AT does not help for G2P in the DCES setting, but it does help for ECES (see A.3), where test perturbations may have been seen during training. We conjecture that AT makes systems generally aware that the input can be broken in some way and forces them to shield against such situations, an effect similar to dropout.
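A minimal sketch of the RBR shielding step follows, assuming an ice_vectors lookup table (character to 576-dim ICE vector, e.g., built as in the ICE sketch above) and a standard character set of a-zA-Z plus punctuation; the structure is illustrative rather than the authors' implementation.

```python
import numpy as np

def rule_based_recovery(text, standard_chars, ice_vectors):
    """RBR: replace every non-standard character with its visually
    nearest standard character in the ICE space (cosine similarity).

    standard_chars: set, e.g. set(string.ascii_letters + string.punctuation).
    ice_vectors: dict mapping a character to its ICE vector.
    """
    std = [c for c in standard_chars if c in ice_vectors]
    mat = np.stack([ice_vectors[c] for c in std])
    mat = mat / (np.linalg.norm(mat, axis=1, keepdims=True) + 1e-9)
    out = []
    for ch in text:
        if ch in standard_chars or ch not in ice_vectors:
            out.append(ch)  # already standard, or no embedding known
        else:
            v = ice_vectors[ch]
            v = v / (np.linalg.norm(v) + 1e-9)
            out.append(std[int(np.argmax(mat @ v))])
    return "".join(out)
```

Precomputing the normalized matrix of standard-character vectors keeps the per-character lookup to a single matrix-vector product.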
However, such shielding appears more difficult in character-level tasks, where a missing token is considerably more damaging than in word- or sentence-level tasks. In Figure 3 (right), we observe that CE helps a lot for G2P, but much less so, particularly for POS and Chunking. We believe that for G2P, the visual character embeddings restore part of the input and thus have considerable effect. It is surprising, however, that visual embeddings have no positive effect for the two word-level tasks, and instead lead to small deteriorations. A possible explanation is that, as the character embeddings are fed into the ELMo architecture, their effect is dampened. Indeed, we performed a sanity check (see A.5) to test how (cosine) similar a word or sentence $w$ is to a perturbed version $\tilde{w}$ of $w$ under both SELMo and VELMo. We found that VELMo assigns consistently better similarities, but the overall gap is small. We observe that the combined effect of AT and CE (AT+CE, Figure 4 left) is always substantially better than either of the two alone. For instance, at p = 0.5, POS improves by about 20pp, while AT alone had an effect of only 12pp and the effect of CE was even negative. Thus, it appears that AT is able to kick-start the benefits of CE, especially in the case when they alone are not effective. RBR is excellent for ECES (see A.3). It has a small negative effect on clean data, meaning that there is some foreign material in English texts which gets corrupted by RBR, but for any p > 0 the performance under RBR is almost on the level of p = 0 for ECES. [Figure 3: AT (with ICES replacements) and CE tested on DCES-perturbed data; panels plot improvements against p for POS, Chunk, TC, and G2P. The colored regions show AT (with random replacements).] [Figure 4: AT+CE (with ICES replacements) and RBR on DCES-perturbed data, same layout. The colored regions show AT (with random replacements).] RBR is also consistently better
We discussed that the effect of AT may be similar to dropout. If so, AT with random rather than visual character replacements should be similarly effective. Indeed, the graphs in Figures 3 and 4 show that injecting random noise generally improves the robustness, but not to the level of visually informed AT (exemplarily for POS/TC). Error analysis. We analyze the cases in which our perturbation with VIPER changes the prediction to a wrong class. We perform our analysis for TC as it represents a real-world attack scenario. We define the toxicity level (TL) of a comment x for a class y 2 f 1 ;:::; 6 g and model q as TL ( x ) = (cid:229) y q ( x ; y ) ; e.g., a comment x which has been classified as insult (probability 0.8) and obscene (prob-ability 0.7) gets a TL of 1 : 5 . We consider a successful attack to decrease TL after perturbation. VIPER with DCES and p = 0 : 1 achieves a success rate of 24.1%i.e., roughly one fourth of the toxic comments receive a lower TL. In contrast, the Perturbation (CES) Perturbed Text Toxicity Level No Shielding AT CE AT + CE RBR clean u are a gigantic faggot 4.00 3.29 3.16 2.93 3.11 3.57 VIPER ( p ; ECES ) u are gigantic faggo 4.00 0.11 1.05 1.37 2.05 3.57 VIPER ( p ; DCES ) u are a igantic faggo 4.00 0.82 1.18 3.36 3.16 1.69 clean he is also a faggot . 4.00 3.26 3.68 2.81 2.14 3.79 VIPER ( p ; ECES ) e is also a faggot . 4.00 3.39 3.70 2.64 2.23 3.79 VIPER ( p ; DCES ) he is alo a fago . 4.00 0.48 1.48 1.55 1.78 0.40 Table 5: Two examples of toxic/non-toxic comments that show the effects of the different shielding methods. We report the averaged sum over the six toxicity classes, e.g., 4 : 00 is equal to a positive example in four classes. p = 0 : 1 . impact on non-toxic comments is smallTL increased in only 3.2% of the cases. Table 5 shows sample comments and their TL for different shielding and perturbation methods. As can be seen, perturbing specific words ( hot words for TC) substantially reduces the TL score of a non-shielded approach (e.g., from 3.29 to 0.11), while perturbing non-hot' words like he' has lit-tle effect. The shielding approaches help in these show-cased examples to various degrees and the shielding with AT + CE is more robust to stronger attacks (higher visual dissimilarity) than RBR. This illustrates that a malicious attacker may aim to increase the success rate of an attack by only perturbing offensive words (in the TC task). To test whether VIPER benefits from perturbing such hot words, we manually compiled a list of 20 hand-selected offensive words (see A.6) which we believe are indicators of toxic comments. We then analyzed how often a perturbation of a word from this list co-occurs with a successful attack. We observe that in 55% of successful attacks, a word from our list was among the perturbed words of the comment. As our list is only a small subset of all possible offensive words, the perturbation of hot words may have an even stronger effect. 7 Conclusion In this work, we considered visual modifications to text as a new type of adversarial attack in NLP and we showed that humans are able to reliably recover visually perturbed text. In a number of experiments on character-, word-, and sentence-level, we highlighted the fundamental differences between humans and state-of-the-art NLP systems, which sometimes blatantly fail under visual attack, showing that visual adversarial attacks can have maximum impact. 
This calls for models that have richer biases than current paradigm types do, which would allow them to bridge the gaps in information processing between humans and machines. We have explored one such bias, visual encoding, but our results suggest that further work on such shielding is necessary in the future. Our work is also important for system builders, such as those behind the toxic comment detection models deployed by, e.g., Facebook and Twitter, who regularly face visual attacks, and who might face even more such attacks once visual character perturbations are easier to insert than via the keyboard. From the opposite viewpoint, VIPER may help users retain privacy in online engagements and when trying to avoid censorship (Hiruncharoenvate et al., 2015) by suggesting visually similar spellings of words. Finally, our work shows that the 'brittleness' (Belinkov and Bisk, 2018) of NLP extends beyond MT and beyond word reordering or replacements, a recognition that we hope inspires others to investigate more ubiquitous shielding techniques. Acknowledgments We thank the reviewers for helpful feedback. This work has been supported by the German Research Foundation (DFG) funded research training group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES, GRK 1994/1), the DFG-funded projects QA-EduInf (GU 798/18-1, RI 803/12-1), DIP (GU 798/17-1), the German Federal Ministry of Education and Research (BMBF) under the promotional references 16DHL1040 (FAMULUS), and by the Hessian research excellence program Landes-Offensive zur Entwicklung Wissenschaftlich-ökonomischer Exzellenz (LOEWE) as part of the a!", "automated language instruction project (No. 521/17-03).", "We gratefully acknowledge support of NVIDIA Corp. with the donation of the Tesla K40 GPU used for this research.", "Calculations for this research were also conducted on the Lichtenberg high performance cluster of Technical University Darmstadt." ]
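The toxicity-level metric used in the error analysis above is simple to make concrete. Below is a minimal sketch, not taken from the paper's code: `predict_probs` is a hypothetical stand-in for a trained six-class toxic-comment classifier, and a successful attack is one that lowers TL(x) = Σ_y q(x, y) after perturbation.

```python
# A minimal sketch of the toxicity-level (TL) metric and attack success rate
# described above; `predict_probs` is a hypothetical stand-in for a trained
# six-class toxic-comment classifier returning per-class probabilities.
from typing import Callable, List

def toxicity_level(probs: List[float]) -> float:
    """TL(x) = sum over the six toxicity classes of q(x, y)."""
    return sum(probs)

def attack_success_rate(comments: List[str],
                        perturb: Callable[[str], str],
                        predict_probs: Callable[[str], List[float]]) -> float:
    """Fraction of comments whose TL decreases after perturbation."""
    successes = 0
    for x in comments:
        tl_clean = toxicity_level(predict_probs(x))
        tl_attacked = toxicity_level(predict_probs(perturb(x)))
        successes += tl_attacked < tl_clean
    return successes / len(comments)
```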
[ "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "objective", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain" ]
[ "Few-shot crosslingual transfer has been shown to outperform its zero-shot counterpart with pretrained encoders like multilingual BERT.", "Despite its growing popularity, little to no attention has been paid to standardizing and analyzing the design of few-shot experiments.", "In this work, we highlight a fundamental risk posed by this shortcoming, illustrating that the model exhibits a high degree of sensitivity to the selection of few shots.", "We conduct a large-scale experimental study on 40 sets of sampled few shots for six diverse NLP tasks across up to 40 languages.", "We provide an analysis of success and failure cases of few-shot transfer, which highlights the role of lexical features.", "Additionally, we show that a straightforward full model finetuning approach is quite effective for few-shot transfer, outperforming several state-of-the-art few-shot approaches.", "As a step towards standardizing few-shot crosslingual experimental designs, we make our sampled few shots publicly available.", "1 1 Introduction Multilingual pretrained encoders like multilingual BERT (mBERT; Devlin et al. (2019)) and XLM-R (Conneau et al., 2020) are the top performers in crosslingual tasks such as natural language inference (Conneau et al., 2018), document classification (Schwenk and Li, 2018; Artetxe and Schwenk, 2019), and argument mining (Toledo-Ronen et al., 2020).", "They enable transfer learning through language-agnostic representations in crosslingual setups (Hu et al., 2020).", "A widely explored transfer scenario is zero-shot crosslingual transfer (Pires et al., 2019; Conneau and Lample, 2019; Artetxe and Schwenk, 2019), * Equal contribution.", "where a pretrained encoder is finetuned on abundant task data in the source language (e.g., English) and then directly evaluated on target-language test data, achieving surprisingly good performance (Wu and Dredze, 2019; Hu et al., 2020).", "However, there is evidence that zero-shot performance reported in the literature has large variance and is often not reproducible (Keung et al., 2020a; Rios et al., 2020); the results in languages distant from English fall far short of those similar to English (Hu et al., 2020; Liang et al., 2020).", "Lauscher et al. 
"Lauscher et al. (2020) stress the importance of few-shot crosslingual transfer instead, where the encoder is first finetuned on a source language and then further finetuned with a small amount (10-100) of examples (few shots) of the target language.", "The few shots substantially improve model performance on the target language with negligible annotation costs (Garrette and Baldridge, 2013; Hedderich et al., 2020).", "In this work, however, we demonstrate that the gains from few-shot transfer exhibit a high degree of sensitivity to the selection of few shots.", "For example, different choices for the few shots can yield a performance variance of over 10% accuracy in a standard document classification task.", "Motivated by this, we propose to fix the few shots for fair comparisons between different crosslingual transfer methods, and provide a benchmark resembling the standard N-way K-shot few-shot learning configuration (Fei-Fei et al., 2006; Koch et al., 2015).", "We also evaluate and compare several state-of-the-art (SotA) few-shot finetuning techniques, in order to understand their performance and susceptibility to the variance related to few shots.", "We also demonstrate that the effectiveness of few-shot crosslingual transfer depends on the type of downstream task.", "For syntactic tasks such as named-entity recognition, the few shots can improve results by up to 20 F1 points.", "For challenging tasks like adversarial paraphrase identification, the few shots do not help and even sometimes lead to worse performance than zero-shot transfer.", "To understand these phenomena, we conduct additional in-depth analyses, and find that the models tend to utilize shallow lexical hints (Geirhos et al., 2020) in the target language, rather than leveraging abstract crosslingual semantic features learned from the source language.", "Our contributions:", "1) We show that few-shot crosslingual transfer is prone to large variations in task performance; this property hinders unbiased assessments of the effectiveness of different few-shot methods.", "2) To remedy this issue, we publish fixed and standardized few shots to support fair comparisons and reproducibility.", "3) We empirically verify that few-shot crosslingual transfer has different performance impact on structurally different tasks; we provide in-depth analyses concerning the source of performance gains.", "4) We analyze several SotA few-shot learning methods, and show that they underperform simple full model finetuning.", "We hope that our work will shed new light on the potential and current difficulties of few-shot learning in crosslingual setups.", "Zero-/Few-Shot Crosslingual Transfer.", "Multilingual pretrained encoders show strong zero-shot crosslingual transfer (ZS-XLT) ability in various NLP tasks (Pires et al., 2019; Hsu et al., 2019; Artetxe and Schwenk, 2019).", "In order to guide and measure the progress, standardized benchmarks like XTREME (Hu et al., 2020) and XGLUE (Liang et al., 2020) have been developed.",
"Recently, Lauscher et al. (2020) and Hedderich et al. (2020) extended the focus to few-shot crosslingual transfer (FS-XLT): They assume the availability of a handful of labeled examples in a target language, 2 which are used to further finetune a source-trained model.", "The extra few shots bring large performance gains at low annotation cost.", "In this work, we systematically analyze this recent FS-XLT scenario.", "FS-XLT resembles the intermediate-task transfer (STILT) approach (Phang et al., 2018; Pruksachatkun et al., 2020).", "In STILT, a pretrained encoder is finetuned on a resource-rich intermediate task [Footnote 2: According to Garrette and Baldridge (2013), it is possible to collect 100 POS-annotated sentences in two hours even for low-resource languages such as Malagasy.],", "and is then finetuned on a (resource-lean) target task.", "Likewise, FS-XLT focuses on transferring knowledge and general linguistic intelligence (Yogatama et al., 2019), although such transfer is between languages in the same task instead of between different tasks.", "Few-shot learning was first explored in computer vision (Miller et al., 2000; Fei-Fei et al., 2006; Koch et al., 2015); the aim there is to learn new concepts with only a few images.", "Methods like prototypical networks (Snell et al., 2017) and model-agnostic meta-learning (MAML; Finn et al. (2017)) have also been applied to many monolingual (typically English) NLP tasks such as relation classification (Han et al., 2018; Gao et al., 2019), named-entity recognition (Hou et al., 2020a), word sense disambiguation (Holla et al., 2020), and text classification (Yu et al., 2018; Yin, 2020; Yin et al., 2020; Bansal et al., 2020; Gupta et al., 2020).", "However, recent few-shot learning methods in computer vision consisting of two simple finetuning stages, first on base-class images and then on new-class few shots, have been shown to outperform MAML and achieve SotA scores (Wang et al., 2020; Chen et al., 2020; Tian et al., 2020; Dhillon et al., 2020).", "Inspired by this work, we compare various few-shot finetuning methods from computer vision in the context of FS-XLT.", "Task Performance Variance.", "Deep neural networks' performance on NLP tasks is bound to exhibit large variance.", "Reimers and Gurevych (2017) and Dror et al. (2019) stress the importance of reporting score distributions instead of a single score for fair(er) comparisons.", "Dodge et al. (2020), Mosbach et al. (2021), and Zhang et al. (2021) show that finetuning pretrained encoders with different random seeds yields performance with large variance.", "In this work, we examine a specific source of variance: We show that the choice of the few shots in crosslingual transfer learning also introduces large variance in performance; consequently, we offer standardized few shots for more controlled and fair comparisons.", "Following Lauscher et al. (2020) and Hedderich et al. (2020), our FS-XLT method comprises two stages.", "First, we conduct source-training: The pretrained mBERT is finetuned with abundant annotated data in the source language.",
"Similar to Hu et al. (2020) and Liang et al. (2020), and due to [Table 1 columns: Name, Metric, Task, |T|, TS, # of lang.]", "the abundant labeled data for many NLP tasks, we choose English as the source in our experiments.", "Directly evaluating the source-trained model after this stage corresponds to the widely studied ZS-XLT scenario.", "The second stage is target-adapting: The source-trained model from the previous stage is adapted to a target language using few shots.", "We discuss details of sampling the few shots in §4.", "The development set of the target language is used for model selection in this stage.", "We consider three types of tasks requiring varying degrees of semantic and syntactic knowledge transfer: sequence classification (CLS), named-entity recognition (NER), and part-of-speech tagging (POS), in up to 40 typologically diverse languages (cf. Appendix B).", "For the CLS tasks, we sample few shots from four multilingual datasets: news article classification (MLDoc; Schwenk and Li (2018)); Amazon review classification (MARC; Keung et al. (2020b)); natural language inference (XNLI; Conneau et al. (2018); Williams et al. (2018)); and crosslingual paraphrase adversaries from word scrambling (PAWSX; Zhang et al. (2019); Yang et al. (2019)).", "We use treebanks in Universal Dependencies (Nivre et al., 2020) for POS, and the WikiANN dataset (Pan et al., 2017; Rahimi et al., 2019) for NER.", "Table 1 reports key information about the datasets.", "We adopt the conventional few-shot sampling strategy (Fei-Fei et al., 2006; Koch et al., 2015; Snell et al., 2017), and conduct N-way K-shot sampling from the datasets; N is the number of classes and K refers to the number of shots per class.", "A group of N-way K-shot data is referred to as a bucket.", "We set N equal to the number of labels |T|.", "Following Wang et al. (2020), we sample 40 buckets for each target (i.e., non-English) language of a task to get a reliable estimation of model performance.",
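To make the bucket construction concrete, here is a minimal sketch (our own illustration, not the released benchmark code) of N-way K-shot sampling for a CLS task; the per-class grouping, disjoint slicing, and seed handling are assumptions consistent with the description above.

```python
# A sketch of N-way K-shot bucket sampling for classification tasks: each
# bucket holds K examples per class, and repeated slicing without replacement
# keeps buckets disjoint from each other.
import random
from collections import defaultdict

def sample_buckets(examples, num_buckets=40, k=4, seed=42):
    """examples: list of (text, label) pairs; returns a list of buckets."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for ex in examples:
        by_label[ex[1]].append(ex)
    for pool in by_label.values():
        rng.shuffle(pool)
    buckets = []
    for b in range(num_buckets):
        bucket = []
        for label, pool in by_label.items():
            chunk = pool[b * k : (b + 1) * k]  # disjoint slices across buckets
            if len(chunk) < k:
                raise ValueError(f"not enough examples for label {label}")
            bucket.extend(chunk)
        buckets.append(bucket)
    return buckets
```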
"CLS Tasks.", "For MLDoc and MARC, each language has a train/dev/test split.", "We sample the buckets without replacement from the training set of each target language, so that buckets are disjoint from each other.", "Target languages in XNLI and PAWSX only have dev/test splits.", "We sample the buckets from the dev set; the remaining data serves as a single new dev set for model selection during target-adapting.", "For all tasks, we use K ∈ {1, 2, 4, 8}.", "POS and NER.", "For the two structured prediction tasks, N-way K-shot is not well-defined, because each sentence contains one or more labeled tokens.", "We use a similar sampling principle as with CLS, where N is the size of the label set for each language and task, but K is set to the minimum number of occurrences for each label.", "In particular, we utilize the Minimum-Including Algorithm (Hou et al., 2020b,a) to satisfy the following criteria when sampling a bucket:", "1) each label appears at least K times, and", "2) at least one label will appear fewer than K times if any sentence is removed from the bucket.", "Appendix C gives sampling details.", "In contrast to sampling for CLS, we do not enforce samples from different buckets to be disjoint, due to the small amount of data in some low-resource languages.", "We only use K ∈ {1, 2, 4} and exclude K = 8, as 8-shot buckets already have lots of labeled tokens, and thus (arguably) might not be considered few-shot.", "We use the pretrained cased mBERT model (Devlin et al., 2019), and rely on the PyTorch-based (Paszke et al., 2019) HuggingFace Transformers repository (Wolf et al., 2019) in all experiments.", "For source-training, we finetune the pretrained encoder for 10 epochs with batch size 32.", "For target-adapting to every target language, the few-shot data is a sampled bucket in this language, and we finetune on the bucket for 50 epochs with early stopping (patience of 10 epochs).", "The batch size is set to the number of shots in the bucket.", "Each target-adapting experiment is repeated 40 times using the 40 buckets.", "We use the Adam optimizer (Kingma and Ba, 2015) with default parameters in both stages, with learning rates searched over {1e-5, 3e-5, 5e-5, 7e-5}.", "For CLS tasks, we use mBERT's [CLS] token as the final representation.", "[Figure 1: Dev set performance distributions (validation accuracy) of 40 runs with 40 random seeds (top) and 40 1-shot buckets (bottom).]", "For NER and POS, following Devlin et al. (2019), we use a linear classifier layer on top of the representation of each tokenized word, which is its last wordpiece (He and Choi, 2020).", "We set the maximum sequence length to 128 after wordpiece tokenization (Wu et al., 2016) in all experiments.", "Further implementation details are shown in our Reproducibility Checklist in Appendix A.", "The ZS-XLT performance from English (EN) to the target languages of the four CLS tasks is shown in the K = 0 column of Table 2.", "For NER and POS, the results are shown in Figure 2.", "For XTREME tasks (XNLI, PAWSX, NER, POS), our implementation delivers results comparable to Hu et al. (2020).", "For MLDoc, our results are comparable to prior work (Dong and de Melo, 2019; Wu and Dredze, 2019; Eisenschlos et al., 2019).", "It is worth noting that reproducing the exact results is challenging, as suggested by Keung et al. (2020a).",
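Summarizing the target-adapting recipe described above as a schematic sketch (not the authors' setup; `train_one_epoch` and `evaluate` are hypothetical helpers wrapping the Adam-based finetuning and dev-set scoring):

```python
# A schematic sketch of the target-adapting stage: finetune on one bucket for
# up to 50 epochs with early stopping (patience 10) on the target-language dev
# set; the batch size equals the bucket size, as described above.
import copy

def target_adapt(model, bucket, dev_set, train_one_epoch, evaluate,
                 max_epochs=50, patience=10):
    best_score, best_state, stale = float("-inf"), None, 0
    for epoch in range(max_epochs):
        train_one_epoch(model, bucket, batch_size=len(bucket))
        score = evaluate(model, dev_set)
        if score > best_score:
            best_score = score
            best_state = copy.deepcopy(model.state_dict())
            stale = 0
        else:
            stale += 1
            if stale >= patience:
                break
    model.load_state_dict(best_state)
    return model, best_score
```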
"For MARC, our zero-shot results are worse than Keung et al. (2020b)'s, who use the dev set of each target language for model selection, while we use the EN dev set, following the common 'true' ZS-XLT setup.", "Variance of Few-Shot Transfer.", "We hypothesize that FS-XLT suffers from large variance (Dodge et al., 2020) due to the large model complexity and the small amount of data in a bucket.", "To test this empirically, we first conduct two experiments on MLDoc and MARC.", "First, for a fixed random seed, we repeat 1-shot target-adapting 40 times using different 1-shot buckets in German (DE) and Spanish (ES).", "Second, for a fixed 1-shot bucket, we repeat the same experiment 40 times using random seeds in {0, ..., 39}.", "Figure 1 presents the dev set performance distribution of the 40 runs with 40 random seeds (top) and 40 1-shot buckets (bottom).", "With exactly the same training data, using different random seeds yields a 1-2 point accuracy difference in FS-XLT (Figure 1, top).", "A similar phenomenon has been observed in finetuning monolingual encoders (Dodge et al., 2020) and multilingual encoders with ZS-XLT (Keung et al., 2020a; Wu and Dredze, 2020b; Xia et al., 2020); we show this observation also holds for FS-XLT.", "The key takeaway is that varying the buckets is a more severe problem.", "It causes much larger variance (Figure 1, bottom): The maximum accuracy difference is 6 for DE MARC and 10 for ES MLDoc.", "This can be due to the fact that the difficulty of individual examples varies in a dataset (Swayamdipta et al., 2020), resulting in different amounts of information encoded in buckets.", "This large variance could be an issue when comparing different few-shot learning algorithms.", "The bucket choice is a strong confounding factor that may obscure the strength of a promising few-shot technique.", "Therefore, for fair comparison, it is necessary to work with a fixed set of few shots.", "We propose to fix the sampled buckets for unbiased comparison of different FS-XLT methods.", "We publish the sampled buckets from the six multilingual datasets as a fixed and standardized few-shot evaluation benchmark.", "In what follows, each FS-XLT experiment is repeated 40 times using 40 different buckets with the same fixed random seed; we report mean and standard deviation.", "As noted, the variance due to random seeds is smaller", "(cf. Figure", "1) and has been well studied before (Reimers and Gurevych, 2017; Dodge et al., 2020).", "In this work, we thus focus our attention and limited computing resources on understanding the impact of buckets, the newly detected source of variance.", "However, we encourage practitioners to report results with both factors considered in the future.", "Different Numbers of Shots.", "A comparison concerning the number of shots (K), based on the few-shot results in Table 2 and Figure 2, reveals that the buckets largely improve model performance on a majority of tasks (MLDoc, MARC, POS, NER) over zero-shot results.", "This is in line with prior work (Lauscher et al., 2020; Hedderich et al., 2020) and follows the success of work on using bootstrapped data (Chaudhary et al., 2019; Sherborne ...).
Task | Lang | K=0 | K=1 | K=2 | K=4 | K=8
MLDoc | EN | 96.88 | - | - | - | -
MLDoc | DE | 88.30 | 90.36 ±1.48 | 90.77 ±0.87 | 91.85 ±0.83 | 91.98 ±0.82
MLDoc | FR | 83.05 | 88.94 ±2.46 | 89.71 ±1.68 | 90.80 ±0.88 | 91.01 ±0.94
MLDoc | ES | 81.90 | 83.99 ±2.35 | 85.65 ±1.60 | 86.30 ±1.85 | 88.46 ±1.90
MLDoc | IT | 74.13 | 74.97 ±2.04 | 75.29 ±1.57 | 76.43 ±1.41 | 78.12 ±1.25
MLDoc | RU | 72.33 | 77.40 ±4.27 | 80.57 ±1.37 | 81.33 ±1.33 | 81.91 ±1.21
MLDoc | ZH | 84.38 | 87.18 ±1.45 | 87.31 ±1.53 | 88.33 ±1.11 | 88.72 ±1.05
MLDoc | JA | 74.58 | 76.23 ±1.59 | 76.71 ±2.12 | 78.60 ±2.43 | 81.17 ±1.72
MARC | EN | 64.52 | - | - | - | -
MARC | DE | 49.62 | 51.50 ±1.58 | 52.76 ±0.87 | 52.78 ±1.00 | 53.32 ±0.59
MARC | FR | 47.30 | 49.32 ±1.34 | 49.70 ±1.43 | 50.64 ±0.94 | 51.23 ±0.76
MARC | ES | 48.44 | 49.72 ±1.24 | 49.96 ±1.12 | 50.45 ±1.22 | 51.25 ±0.93
MARC | ZH | 40.40 | 43.19 ±1.76 | 44.45 ±1.36 | 45.40 ±1.26 | 46.40 ±0.93
MARC | JA | 38.84 | 41.95 ±2.09 | 43.63 ±1.30 | 43.98 ±0.89 | 44.44 ±0.69
XNLI | EN | 82.67 | - | - | - | -
XNLI | DE | 70.32 | 70.58 ±0.36 | 70.60 ±0.34 | 70.61 ±0.39 | 70.70 ±0.50
XNLI | FR | 73.57 | 73.41 ±0.48 | 73.74 ±0.46 | 73.57 ±0.49 | 73.77 ±0.44
XNLI | ES | 73.71 | 73.84 ±0.40 | 73.87 ±0.44 | 73.74 ±0.48 | 73.87 ±0.46
XNLI | RU | 68.70 | 68.81 ±0.52 | 68.76 ±0.54 | 68.87 ±0.55 | 68.81 ±0.77
XNLI | ZH | 69.32 | 69.73 ±0.94 | 69.75 ±0.94 | 70.56 ±0.76 | 70.62 ±0.86
XNLI | AR | 64.97 | 64.75 ±0.36 | 64.82 ±0.23 | 64.82 ±0.23 | 64.94 ±0.37
XNLI | BG | 67.58 | 68.15 ±0.69 | 68.19 ±0.75 | 68.55 ±0.67 | 68.32 ±0.70
XNLI | EL | 65.67 | 65.64 ±0.40 | 65.73 ±0.36 | 65.80 ±0.41 | 66.00 ±0.53
XNLI | HI | 56.57 | 56.94 ±0.82 | 57.07 ±0.82 | 57.21 ±1.14 | 57.82 ±1.18
XNLI | SW | 48.08 | 50.33 ±1.08 | 50.28 ±1.24 | 51.08 ±0.62 | 51.01 ±0.79
XNLI | TH | 46.17 | 49.43 ±2.60 | 50.08 ±2.42 | 51.32 ±2.07 | 52.16 ±2.43
XNLI | TR | 60.40 | 61.02 ±0.68 | 61.20 ±0.61 | 61.35 ±0.49 | 61.31 ±0.56
XNLI | UR | 57.05 | 57.56 ±0.85 | 57.83 ±0.91 | 58.20 ±0.93 | 58.67 ±1.03
XNLI | VI | 69.82 | 70.04 ±0.59 | 70.14 ±0.75 | 70.23 ±0.63 | 70.41 ±0.70
PAWSX | EN | 93.90 | - | - | - | -
PAWSX | DE | 83.80 | 84.14 ±0.40 | 84.08 ±0.42 | 84.04 ±0.47 | 84.23 ±0.66
PAWSX | FR | 86.90 | 87.07 ±0.27 | 87.06 ±0.37 | 87.03 ±0.31 | 86.94 ±0.41
PAWSX | ES | 88.25 | 87.90 ±0.54 | 87.80 ±0.56 | 87.84 ±0.53 | 87.85 ±0.75
PAWSX | ZH | 77.75 | 77.71 ±0.37 | 77.63 ±0.47 | 77.68 ±0.51 | 77.82 ±0.64
PAWSX | JA | 73.30 | 73.78 ±0.75 | 73.71 ±1.04 | 73.48 ±0.69 | 73.79 ±1.28
PAWSX | KO | 72.05 | 73.75 ±1.30 | 73.11 ±1.05 | 73.79 ±0.92 | 73.31 ±0.61
Table 2: Zero-shot (column K = 0) and few-shot (columns K > 0) results (Acc. in %, mean ±std) on the test set for CLS tasks.",
"In general, we observe that:", "1) 1-shot buckets bring the largest relative performance improvement over ZS-XLT;", "2) the gains follow the increase of K, but with diminishing returns;", "3) the performance variance across the 40 buckets decreases as K increases.", "These observations are more pronounced for POS and NER; e.g., 1-shot EN to Urdu (UR) POS transfer shows gains of 22 F1 points (52.40 with zero-shot, 74.95 with 1-shot).", "For individual runs, we observe that models in FS-XLT tend to overfit the buckets quickly at small K values.", "For example, in around 32% of NER 1-shot buckets, the model achieves the best dev score right after the first epoch; continuing the training only degrades performance.", "Similar observations hold for semantic tasks like MARC, where in 10 out of 40 DE 1-shot buckets, the dev set performance peaks at epoch 1 (cf. the learning curve in Appendix D, Figure 6).",
"This suggests the necessity of running the target-adapting experiments on multiple buckets if reliable conclusions are to be drawn.", "Different Downstream Tasks.", "The models for different tasks present various levels of sensitivity to FS-XLT.", "Among the CLS tasks that require semantic reasoning, FS-XLT benefits MLDoc the most.", "This is not surprising given the fact that keyword matching can largely solve MLDoc (Artetxe et al., 2020a,b): A few examples related to target-language keywords are expected to significantly improve performance.", "FS-XLT also yields prominent gains on the Amazon review classification dataset MARC.", "Similar to MLDoc, we hypothesize that just matching a few important opinion and sentiment words (Liu, 2012) in the target language already brings large gains.", "We provide further qualitative analyses in §5.4.", "XNLI and PAWSX behave differently from MLDoc and MARC.", "XNLI requires higher-level semantic reasoning on pairs of sentences.", "FS-XLT performance improves modestly (XNLI) or even decreases (PAWSX-ES) compared to ZS-XLT, even with large K.", "PAWSX requires a model to distinguish adversarially designed non-paraphrase sentence pairs with large lexical overlap, like 'Flights from New York to Florida' and 'Flights from Florida to New York' (Zhang et al., 2019).", "This poses a challenge for FS-XLT, given the small amount of target-language information in the buckets.", "Therefore, when buckets are small (e.g., K = 1) and for challenging semantic tasks like PAWSX, the buckets do not substantially help.", "Annotating more shots in the target language is an intuitive solution.", "Designing task-specific pretraining/finetuning objectives could also be promising (Klein and Nabi, 2020; Ram et al., 2021).", "Unlike CLS tasks, POS and NER benefit from FS-XLT substantially.", "We speculate that there are two reasons:", "1) Both tasks often require little to no high-level semantic understanding or reasoning;", "2) due to i.i.d. sampling, train/dev/test splits are likely to have overlapping vocabulary, and the labels in the buckets can easily propagate to dev and test.",
"We delve deeper into these conjectures in §5.4.", "Different Languages.", "For languages that are more distant from EN, e.g., with different scripts, small lexical overlap, or fewer common typological features (Pires et al., 2019; Wu and Dredze, 2020a), FS-XLT introduces crucial lexical and structural information to guide the update of the embedding and transformer layers in mBERT.", "We present several findings based on the NER and POS results for a typologically diverse language sample.", "Figure 2 shows that for languages with non-Latin scripts (different from EN), despite [Figure 2: Improvement in F1 (mean and standard deviation) of FS-XLT over ZS-XLT (ZS-XLT scores shown on the x-axis beneath each language) for NER (top) and POS (bottom) for three different bucket sizes (1-shot, 2-shot, 4-shot); panels split by whether the language shares EN's script.]", "their small to non-existent lexical overlap 3 and diverging typological features (see Appendix D, Tables 9 and 14), the performance boosts are generally larger than those in the same-script target languages: 6.2 vs. 3.0 average gain in NER and 11.4 vs. 5.4 in POS for K = 1.", "This clearly manifests the large information discrepancy between target-language buckets and source-language data.", "EN data is less relevant to these languages, so they obtain very limited gain from source-training, reflected by their low ZS-XLT scores.", "With a small amount of target-language knowledge in the buckets, the performance is improved dramatically, highlighting the effectiveness of FS-XLT.", "Table 3 shows that, besides script form, lexical overlap and the number of linguistic features [Footnote 3: We define lexical overlap as |V_L ∩ V_EN| / |V_EN|, where V denotes vocabulary; |V_L| is computed with the 40 buckets of a target language L.]", "common with EN 4 also contribute directly to the FS-XLT performance difference among languages: There is a moderate negative correlation between F1 score gains vs. the two factors when considered independently for both syntactic tasks: The fewer overlaps/features a target language shares with EN, the larger the gain FS-XLT achieves.",
"This again stresses the importance of buckets: they contain target-language-specific knowledge about a task that cannot be obtained by ZS-XLT, which solely relies on language similarity.", "Interestingly, Pearson's ρ indicates that common linguistic features are much less linearly correlated with FS-XLT gains in NER than in POS.", "Table 4 reports the performance drop when directly carrying out target-adapting, without any prior source-training of mBERT.", "We show the scores for MLDoc and PAWSX as a simple and a challenging CLS task, respectively.", "For NER and POS, we select two high- (Russian (RU), ES), mid- (Vietnamese (VI), Turkish (TR)), and low-resource languages (Tamil (TA), Marathi (MR)) each.", "5 The results clearly indicate that omitting the [Footnote 4: Following Pires et al. (2019), we use six WALS features: 81A (Order of Subject, Object and Verb), 85A (Order of Adposition and Noun), 86A (Order of Genitive and Noun), 87A (Order of Adjective and Noun), 88A (Order of Demonstrative and Noun), and 89A (Order of Numeral and Noun).]", "[Footnote 5: The categorization based on resource availability is according to WikiSize (Wu and Dredze, 2020a).]", "source-training stage yields large performance drops.", "Even larger variance is also observed in this scenario (cf. Appendix D, Table 11).", "Therefore, the model indeed learns, when trained on the source language, some transferable crosslingual features that are beneficial to target languages, both for semantic and syntactic tasks.", "For syntactic tasks, we take Persian (FA) POS as an example.", "Figure 3 visualizes the lexical overlap, measured by the Jaccard index, of 10 1-shot buckets (rows) and the improved word-label predictions introduced by target-adapting on each of the buckets (columns).", "In more detail, for column c, we collect the set (denoted as C_c) of all test set words whose label is incorrectly predicted by the zero-shot model, but correctly predicted by the model trained on the c-th bucket.", "For row i, we denote with B_i the set of words occurring in bucket i.", "The figure shows in cell (i, k) the Jaccard index of B_i and C_k.", "The bright color (i.e., higher lexical overlap) on the diagonal reflects that the improvements [Figure 4: Improvement of word-label predictions introduced by a bucket (x-axis) in FA (top), UR (mid), and HI (bottom), in relation to the words' presence in the bucket (True or False).]", "introduced by a bucket are mainly 6 those word-label predictions that are lexically more similar to the bucket than to other buckets.", "We also investigate the question: How many word-label predictions that are improved after FS-XLT occur in the bucket, i.e., in the training data?", "Figure 4 plots this for the 40 1-shot buckets in FA, UR, and Hindi (HI).", "We see that many test words do occur in the bucket (shown in orange), in line with recent findings (Lewis et al., 2021; Elangovan et al., 2021).", "These analyses shed light on why the buckets benefit NER/POS, which heavily rely on lexical information, more than higher-level semantic tasks.", "[Footnote 6: Note that the sampled buckets for POS are not completely disjoint (cf. sampling strategy in §4).]",
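The overlap analysis above reduces to simple set operations. A minimal sketch (our own helper names, not the paper's code):

```python
# A sketch of the lexical-overlap analysis described above: cell (i, k) holds
# the Jaccard index between B_i (words in bucket i) and C_k (test words whose
# label flips from wrong under the zero-shot model to right under bucket k's model).
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

def overlap_matrix(bucket_words, improved_words):
    """bucket_words, improved_words: lists of sets (B_i and C_k above)."""
    return [[jaccard(b, c) for c in improved_words] for b in bucket_words]
```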
"To see how the buckets help mBERT in understanding product reviews, Figure 5 visualizes the confusion matrices of test set predictions for DE and Chinese (ZH) zero- and 1-shot models; axis ticks are review scores in {1, 2, 3, 4, 5}.", "The squares on the diagonals in the two left heatmaps show that parameter initialization on EN is a good basis for well-performing ZS-XLT: This is particularly true for DE, which is linguistically closer to EN.", "The two extreme review scores, 1 (for DE) and 5 (for ZH), have the largest confusions.", "The two right heatmaps show that improvements brought by the 1-shot buckets are mainly achieved by correctly predicting more cases of the two extreme review scores: 2 → 1 (DE) and 4 → 5 (ZH).", "But the more challenging cases (reviews with scores 2, 3, 4), which require non-trivial reasoning, are not significantly improved, or even become worse.", "We inspect examples that are incorrectly predicted by the few-shot model (predicting 1), but are correctly predicted by the zero-shot model (predicting 2).", "Specifically, we compute the difference of where [CLS] attends to, before and after adapting the model on a 1-shot DE bucket.", "We extract and average the attentions computed by the 12 heads of the topmost transformer layer.", "Table 5 shows that 'nicht' (not) draws a high attention change from [CLS].", "'Nicht' (i.e., negation) by itself is not a reliable indicator of sentiment, so giving the lowest score to reviews solely because they contain 'nicht' is not a good strategy.", "The following review is classified as 1 by the 1-shot model, but 2 is the gold label (as the review is not entirely negative): Die Uhr ging nicht einmal eine Minute ... Optisch allerdings sehr schön. (The clock didn't even work for one minute ... Visually, however, very nice.)
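The attention-change analysis can be sketched as follows. This is a simplified illustration using the HuggingFace API; averaging the twelve heads of the top layer follows the description above, while the model handles and everything else are assumptions.

```python
# A sketch of the [CLS] attention-change analysis: average the heads of the
# top layer and diff the [CLS]-row attention before vs. after target-adapting.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")

def cls_attention(model: BertModel, text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_attentions=True)
    top_layer = out.attentions[-1]             # [batch, heads, seq, seq]
    return top_layer[0, :, 0, :].mean(dim=0)   # [CLS] row, averaged over heads

def attention_change(model_before, model_after, text):
    return cls_attention(model_after, text) - cls_attention(model_before, text)
```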
"Pretrained multilingual encoders are shown to learn and store language-agnostic features (Pires et al., 2019; Zhao et al., 2020); §5.3 shows that source-training mBERT on EN substantially benefits other languages, even for difficult semantic tasks like PAWSX.", "Conditioning on such language-agnostic features, we expect that the buckets should lead to good understanding and reasoning capabilities for a target language.", "However, plain few-shot finetuning still relies heavily on unintended shallow lexical cues and shortcuts (Niven and Kao, 2019; Geirhos et al., 2020) that generalize poorly.", "Other open research questions for future work arise: How do we overcome this excessive reliance on lexical features?", "How can we leverage language-agnostic features with few shots?", "Our standardized buckets, baseline results, and analyses are an initial step towards researching and answering these questions.", "SotA few-shot learning methods (Chen et al., 2019; Wang et al., 2020; Tian et al., 2020; Dhillon et al., 2020) from computer vision consist of two stages:", "1) training on base-class images, and", "2) few-shot finetuning using new-class images.", "The source-training and target-adapting stages of FS-XLT, albeit among languages, follow a very similar approach.", "Therefore, we test their effectiveness for crosslingual transfer.", "These methods are built upon cosine similarity, which imparts an inductive bias about distance and is more effective than a fully-connected classifier layer (FC) when K is small (Wang et al., 2020).", "Following Chen et al. (2019), Wang et al. (2020), and Tian et al. (2020), we freeze the embedding and transformer layers of mBERT, and explore four variants of the target-adapting stage using MARC.", "COS+Pooler.", "We randomly initialize a trainable weight matrix W ∈ R^{h×c}, where h is the hidden dimension size and c is the number of classes.", "Rewriting W as [w_1, ..., w_i, ..., w_c], we compute the logits of an input sentence representation x ∈ R^h (from mBERT) belonging to class i as τ · (x⊤w_i) / (‖x‖_2 ‖w_i‖_2), where τ is a scaling hyperparameter, set to 10 in all experiments.", "During training, W and mBERT's pooler layer, containing a linear layer and a tanh non-linearity, are updated.",
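The COS+Pooler head just described amounts to a scaled cosine-similarity classifier. A minimal PyTorch sketch (our own, not the paper's released code; only the scaled cosine logit formula is taken from the description above):

```python
# A sketch of the COS+Pooler classifier head: logits are scaled cosine
# similarities between the pooled [CLS] vector x and per-class weights w_i.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    def __init__(self, hidden_size: int, num_classes: int, tau: float = 10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, hidden_size))
        self.tau = tau  # scaling hyperparameter, set to 10 above

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [batch, hidden]; logits_i = tau * cos(x, w_i)
        return self.tau * F.linear(F.normalize(x, dim=-1),
                                   F.normalize(self.weight, dim=-1))
```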
"FC+Pooler.", "During training, we update the linear classifier layer and mBERT's pooler layer.", "FC only.", "During training, we only update the linear classifier layer.", "This variant largely reduces model complexity and exhibits lower variance when K is small.", "FC(reset)+Pooler.", "Similar to FC+Pooler, but the source-trained linear classifier layer is randomly re-initialized before training.", "Table 6 shows the performance of these methods along with full model finetuning (without freezing).", "Lang | K=0 | Full Model Finetuning K=1 | K=8 | FC only K=1 | K=8 | FC+Pooler K=1 | K=8 | COS+Pooler K=1 | K=8 | FC(reset)+Pooler K=1 | K=8
DE | 49.62 | 51.50 ±1.58 | 53.32 ±0.59 | 50.82 ±1.17 | 52.58 ±0.63 | 51.18 ±1.13 | 53.17 ±0.58 | 37.98 ±5.53 | 45.85 ±2.14 | 38.52 ±6.64 | 49.46 ±2.21
FR | 47.30 | 49.32 ±1.34 | 51.23 ±0.76 | 48.19 ±0.78 | 49.05 ±0.93 | 48.60 ±1.02 | 49.97 ±0.77 | 39.93 ±3.50 | 44.41 ±1.95 | 40.12 ±5.04 | 47.77 ±2.00
ES | 48.44 | 49.72 ±1.24 | 51.25 ±0.93 | 49.03 ±0.73 | 49.69 ±0.57 | 49.28 ±0.85 | 50.21 ±0.63 | 40.01 ±4.33 | 45.35 ±2.37 | 40.89 ±4.96 | 47.73 ±2.33
ZH | 40.40 | 43.19 ±1.76 | 46.40 ±0.93 | 41.90 ±1.15 | 43.34 ±0.88 | 42.30 ±1.37 | 44.42 ±0.65 | 33.10 ±5.48 | 38.31 ±1.87 | 31.83 ±7.00 | 42.07 ±2.19
JA | 38.84 | 41.95 ±2.09 | 44.44 ±0.69 | 40.76 ±1.76 | 43.14 ±0.76 | 41.40 ±1.74 | 43.81 ±0.56 | 34.36 ±4.19 | 38.95 ±1.80 | 32.80 ±5.17 | 41.18 ±1.68
Table 6: Accuracy (%) on MARC when varying classifier head configurations.", "FC+Pooler performs the best among the four for both K = 1 and K = 8 in all languages.", "However, it underperforms the full model finetuning, especially when K = 8.", "FC only is sub-optimal; yet the decrease in comparison to FC+Pooler is small, highlighting that EN-trained mBERT is a strong feature extractor.", "COS+Pooler and FC(reset)+Pooler perform considerably worse than the other two methods and zero-shot transfer, presumably because their new parameters need to be trained from scratch with few shots.", "We leave further exploration of other possibilities of exploiting crosslingual features, through collapse-preventing regularization (Aghajanyan et al., 2021) or contrastive learning (Gunel et al., 2021), to future work.", "Integrating prompting (Brown et al., 2020; Schick and Schütze, 2020; Gao et al., 2020; Liu et al., 2021), a strong-performing few-shot learning methodology for NLP, into the crosslingual transfer learning pipeline is also a promising direction.", "We have presented an extensive study of few-shot crosslingual transfer.", "The focus of the study has been on an empirically detected performance variance in few-shot scenarios: The models exhibit a high level of sensitivity to the choice of few shots.", "We analyzed and discussed the major causes of this variance across six diverse tasks for up to 40 languages.", "Our results show that large language models tend to overfit to few shots quickly and mostly rely on shallow lexical features present in the few shots, though they have been trained with abundant data in English.", "Moreover, we have empirically validated that state-of-the-art few-shot learning methods in computer vision do not outperform a conceptually simple alternative: full model finetuning.", "Our study calls for more rigor and accurate reporting of the results of few-shot crosslingual transfer experiments.", "They should include score distributions over standardized and fixed few shots.",
"To aid this goal, we have created and provided such fixed few shots as a standardized benchmark for six multilingual datasets.", "Few-shot learning is promising for crosslingual transfer, because it mirrors how people acquire new languages and because few-shot data annotation is feasible.", "In future work, we will investigate more sophisticated techniques and extend the work to more NLP tasks.", "This work was funded by the European Research Council: ERC NonSequeToR (#740516) and ERC LEXICAL (#648909).", "We thank the anonymous reviewers and Fei Mi for their helpful suggestions." ]
[ "abstain", "abstain", "result", "method", "result", "result", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "objective", "abstain", "abstain", "result", "objective", "result", "abstain", "result", "result", "objective", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "result", "result", "result", "abstain", "objective", "abstain", "objective", "other", "other" ]
[ "Recent years have seen numerous NLP datasets introduced to evaluate the performance of fine-tuned models on natural language understanding tasks.", "Recent results from large pretrained models, though, show that many of these datasets are largely saturated and unlikely to be able to detect further progress.", "What kind of datasets are still effective at discriminating among strong models, and what kind of datasets should we expect to be able to detect future improvements?", "To measure this uniformly across datasets, we draw on Item Response Theory and evaluate 29 datasets using predictions from 18 pretrained Transformer models on individual test examples.", "We find that Quoref, HellaSwag, and MC-TACO are best suited for distinguishing among state-of-the-art models, while SNLI, MNLI, and CommitmentBank seem to be saturated for current strong models.", "We also observe span selection task format, which is used for QA datasets like QAMR or SQuAD2.0, is effective in differentiating between strong and weak models.", "Many datasets have been created to evaluate various aspects of natural language understanding (NLU) in English.", "These datasets are useful to measure progress; however, it is evident from various leaderboards (Wang et al., 2018, 2019b; Rajpurkar et al., 2016; Zellers et al., 2018) that many of them are no longer challenging or discriminative enough to differentiate strong models such as those based on Transformers (Vaswani et al., 2017).", "1 Even if these benchmarks are sound tests of important Equal contribution.", "Work done while at New York University.", "1 For example, the recent DeBERTa model (He et al., 2020) achieves parity with human annotators on the SuperGLUE benchmark score: https://super.gluebenchmark.", "com/leaderboard .", "(and potentially unsolved) tasks, their usefulness is limited if they cannot measure further progress.", "In this paper, we ask: Which datasets are best in distinguishing current and possible future strong models?", "We aim to compare datasets using a single metric that accounts for their effectiveness in separating current stronger and weaker models.", "To that end, we use Item Response Theory (IRT; Baker and Kim, 1993), a statistical framework from psychometrics that is widely used for the evaluation of test items in educational assessment.", "IRT assumes that the probability that a model will correctly handle an example in a test set depends on the model's latent ability parameter and three example-specific parameters, typically measuring example difficulty (how strong does a model have to be to get it right), discrimination (how effective the example is for differentiating between similar models), and guessing (how likely a weak model is to get the example right for spurious reasons).", "This paper presents a large-scale IRT analysis of existing English NLU datasets.", "Unlike previous work which focuses on example-level analysis within individual datasets (Lalor et al., 2016, 2018), here we analyze example characteristics from a larger perspective by comparing individual examples across datasets.", "We evaluate test sets from 29 datasets in different formatsclassification, multiple-choice QA, and span-selection QA.", "As responses, we use model predictions from 18 Transformer-based models, including some limited-capacity models chosen to expose better the dataset's ability to discriminate weaker from stronger predictors.", "We then fit a single IRT model on these responses using a variational inference method.", "2 2 Our data and 
"Quoref, HellaSwag, and MC-TACO contain the highest number of examples that can differentiate between near-state-of-the-art models, making them very likely to be effective at tracking near-future progress on the skills that they actually test (Figure 1).", "SQuAD2.0, NewsQA, QuAIL, MC-TACO, and ARC-Challenge have the most difficult examples.", "Span-based QA is an effective task format for discriminating between strong and weak models.", "CosmosQA, MC-TACO, Winogrande, and ARC-Challenge consist mostly of hard examples, while for most datasets, the example difficulty levels are more widely distributed.", "Baker and Kim (1993) introduce Item Response Theory (IRT), a statistical framework to measure the probability of a responder (human or AI system) predicting a correct answer for a given item (test example).", "The probability of a responder i answering an item j correctly is estimated as a function of the responder's latent ability θ_i and the item characteristics, referred to as the item characteristic curve (ICC).", "We use the 3-parameter (3PL) IRT model, where item behavior is governed by discrimination, difficulty, and guessing parameters.", "[Figure 2: An example of item characteristic curves (ICCs) with different values for the discrimination (α), difficulty (β), and guessing (γ) parameters.] The discrimination parameter (α) defines how effective an item is for distinguishing predictors along the ability axis.", "The difficulty parameter (β) defines a minimum level of ability at which we expect to see high responder performance.", "The guessing parameter (γ) defines the probability of correctly answering an item by random guessing.", "Figure 2 shows example ICCs with different parameter values.", "Formally, the probability of individual i answering item j correctly is modeled as: $p_j(\\theta_i) = \\gamma_j + \\frac{1 - \\gamma_j}{1 + e^{-\\alpha_j(\\theta_i - \\beta_j)}}$ (1).", "We use variational inference to infer IRT parameters from model response patterns using Pyro (Ranganath et al., 2014; Bingham et al., 2019).", "Lalor et al. (2019) found this method effective when fitting IRT models to responses on SNLI.", "Let n be the number of items and let m be the number of responders.", "The response pattern is Y ∈ R^{n×m}, where the i-th row corresponds to responder i and the j-th column corresponds to item j.", "We define y_ij ∈ {0, 1} as the response of model i to item j, where y_ij = 1 indicates a correct response and y_ij = 0 indicates an incorrect response.", "We approximate the joint probability of the parameters π(θ, α, β, γ | Y) with a variational posterior: $q(\\theta, \\alpha, \\beta, \\gamma) = \\prod_{i=1}^{I} \\pi^{\\theta}_i(\\theta_i) \\prod_{j=1}^{J} \\pi^{\\alpha}_j(\\alpha_j)\\, \\pi^{\\beta}_j(\\beta_j)\\, \\pi^{\\gamma}_j(\\gamma_j)$ (2), where π(·) denotes the density for each parameter.", "For each parameter, we choose the following distributions: θ ∼ N(μ_θ, σ_θ²) (3); log α ∼ N(μ_α, σ_α²) (4); β ∼ N(μ_β, σ_β²) (5); sigmoid⁻¹(γ) ∼ N(μ_γ, σ_γ²) (6). We fit the posterior parameters by minimizing the evidence lower bound (ELBO).", "When calculating the ELBO, we weight the log-likelihoods of each item's parameters by the inverse of the item's dataset size to control for test set size.", "Following Lalor et al. (2019), we use a prior of N(0, 1) for θ, β, and sigmoid⁻¹(γ).", "While Lalor et al. (2019) use N(0, 10³) for item parameter priors, we encountered degenerate runs and instead use N(0, 1).", "For log α, we use N(0, σ²), where we set σ by searching [0.25, 0.5] in increments of 0.05 and use the value yielding the highest ELBO after excluding degenerate runs.", "We use a sigmoid transformation for γ to constrain the guessing probability to (0, 1).",
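The variational setup in Eqs. (1)-(6) can be sketched in Pyro as follows. This is a minimal illustration, not the authors' released implementation: it uses diagonal normal variational families per Eqs. (3)-(6), a fixed prior width for log α, and it omits the per-dataset ELBO weighting and the σ grid search described above; the learning rate, step count, and placeholder data are assumptions.

```python
# A minimal sketch of fitting the 3PL IRT model with stochastic variational
# inference in Pyro. Y is a binary response matrix (models x items); parameter
# names follow Eqs. (1)-(6) above.
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam

def model(Y):  # Y: [num_models, num_items] with entries in {0., 1.}
    n_models, n_items = Y.shape
    theta = pyro.sample("theta", dist.Normal(0., 1.).expand([n_models]).to_event(1))
    log_alpha = pyro.sample("log_alpha", dist.Normal(0., 0.3).expand([n_items]).to_event(1))
    beta = pyro.sample("beta", dist.Normal(0., 1.).expand([n_items]).to_event(1))
    gamma_logit = pyro.sample("gamma_logit", dist.Normal(0., 1.).expand([n_items]).to_event(1))
    alpha, gamma = log_alpha.exp(), torch.sigmoid(gamma_logit)
    # 3PL item characteristic curve, Eq. (1)
    p = gamma + (1 - gamma) * torch.sigmoid(alpha * (theta.unsqueeze(1) - beta))
    pyro.sample("obs", dist.Bernoulli(probs=p).to_event(2), obs=Y)

def guide(Y):  # fully factorized variational posterior, Eq. (2)
    n_models, n_items = Y.shape
    for name, size in [("theta", n_models), ("log_alpha", n_items),
                       ("beta", n_items), ("gamma_logit", n_items)]:
        loc = pyro.param(f"{name}_loc", torch.zeros(size))
        scale = pyro.param(f"{name}_scale", 0.1 * torch.ones(size),
                           constraint=constraints.positive)
        pyro.sample(name, dist.Normal(loc, scale).to_event(1))

svi = SVI(model, guide, Adam({"lr": 0.01}), loss=Trace_ELBO())
Y = torch.bernoulli(0.6 * torch.ones(90, 1000))  # placeholder responses
for step in range(2000):
    svi.step(Y)
```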
"Our goal is to perform a fine-grained evaluation of English NLU datasets that appear to discriminate among widely used Transformer-based models.", "To that end, we choose datasets based on the following criteria: They are plausibly unsolved, in that the best-reported model performance does not exceed estimated human performance (if available) by more than three metric points.", "They are relatively easy to use with current large pretrained models, and in particular, their inputs fit within a typical pretrained Transformer's 512-token limits.", "(This rules out tasks with full-document contexts or retrieval components.) They are evaluated at the example level, i.e., we focus our analysis on QA and other classification datasets, where each example corresponds to one item in the IRT model.", "(This rules out structured prediction and sequence tagging tasks.) They have simple and reliable automatic metrics at the example level.", "(This rules out generation-based tasks.)", "Table 1 lists the datasets we evaluate.", "For MNLI, we combine the matched and mismatched portions of the development and custom test sets for our analysis.", "For ANLI, we train models on SNLI, MNLI, and ANLI training examples.", "Similar to MNLI, we combine ANLI's three evaluation rounds of the development and the test sets for our analysis.", "Custom Test Splits. Some of our selected datasets do not have publicly available labeled test examples.", "For such cases, we create a new custom split by randomly sampling 50% of the validation examples as a new test set and keeping the rest for validation (Cust. column in Table 1).",
"For Natural Questions, we use the MRQA 2019 version (Fisch et al., 2019), as the original version includes some examples with very long contexts. 3", "For MC-TACO, the original dataset does not come with a training set.", "For our experiment, we use 80% of the validation set as our training set and the rest as our validation set, while leaving the original test set untouched.", "[Footnote 3: https://github.com/mrqa/MRQA-Shared-Task-2019] [Table 1 columns: |Train|, |Dev|, |Test|, Cust.]", "We aim to understand how examples from different datasets contribute to the evaluations of models with near-state-of-the-art abilities, so we include several pretrained Transformer-based models to approximate this.", "However, using only high-performing models could result in a poor IRT model fit (Martínez-Plumed et al., 2019). To avoid this, we add both weaker models and under-trained versions of our original models.", "We use ALBERT-XXL-v2 (Lan et al., 2020), RoBERTa Large and RoBERTa Base (Liu et al., 2019), BERT Large and BERT Base (Devlin et al., 2019), XLM-R (Conneau et al., 2020), and 12 MiniBERTas (Zhang et al., 2021b). 4", "[Footnote 4: The MiniBERTas are RoBERTa models pretrained on 1M, 10M, 100M, or 1B words of raw text, varying slightly in model size. There are three pretrained models for each pretraining data quantity, pretrained using different near-optimal hyperparameter values. We use all three variants in producing responses for IRT.] For each of the 18 Transformer-based models, we evaluate five different checkpoints: at 1%, 10%, 25%, and 50% of the maximum steps of
ability score, which corresponds to ALBERT-XXL-v2.", "A high LEH score indicates that the best-performing model is still far from the example's saturation pointsthe flat sections of ICC inferred by our model.", "There is enough space along the curve that the IRT model expects the example to be able to differentiate future state-of-the-art models.", "Typically, different near-state-of-the-art models both succeed and fail on this kind of example, while weaker models mostly fail.", "A high LEH score implies that there is still enough room for potentially stronger models to perform better on this dataset.", "To validate the use of LEH scores for detecting near-future improvements, we compare two IRT models.", "The first is fitted using responses from all models, while the second is fitted based on responses from BERT and other weaker models (excluding RoBERTa Large , RoBERTa Base , XLM-R, and ALBERT-XXL-v2).", "After that, we compute the correlation between the two sets of LEH scores, focusing on the 75 th percentile for each dataset.", "The Pearson correlation is 95.5% with a median absolute difference of 0.007 and a standard deviation of 0.011.", "Out of the 29 datasets, only SQuAD2.0, CommensenseQA, MuTual, Quoref, and HellaSwag have more than 0.02 absolute difference in LEH scores.", "This strong correlation suggests that our ICCs fits are not overly sensitive to the exact characteristics of current state of the art models.", "Analysis by LEH Scores Figure 1 shows the distribution of test examples for each dataset based on their LEH scores.", "For our analysis, we focus on the 75 th percentile examples in each dataset as a rough proxy for how likely a dataset is to have a significant number of examples that are difficult or discriminative for near-future models.", "We observe that Quoref, HellaSwag, and MC-TACO have examples with the highest LEH scores, suggesting sufficient headroom for future state-of-the-art models with a higher ability to achieve better performance on these datasets.", "SNLI, CommitmentBank, and MNLI have relatively low LEH scores, indicating that performance on these datasets is largely saturated.", "Additionally, we also measure how the 75 th percentile LEH scores correlate with human-RoBERTa gap.", "Using 22 datasets that have human performance numbers (Table 1), we find that the Pearson correlation between the two is weakly positive (0.21).", "the distribution of test examples according to their discrimination and difficulty parameters (Figure 4).", "We observe that datasets with span selection for-Figure 4: Distribution of test examples for each dataset based on the log discrimination ( log ) parameter (top) and the difficulty ( ) parameter (bottom).", "mat (QAMR, NewsQA, SQuAD, MRQA-NQ, and Quoref) have the highest discrimination scores than other datasets, highlighting span selection as an effective task format for discriminating among strong and weak models.", "However, this might be because this task format typically features a much larger space of possible model outputs than the other formats we consider.", "It does not necessarily mean that span selection is the most suitable to test models' ability to understand language.", "As the span-based format restricts answers to be text spans in the given passage, there are concerns that it rarely requires reasoning ability which often involves answers not mentioned in the passage, and thus not reflecting comprehension ability of humans (Lai et al., 2017; Sugawara et al., 2018).", "For the difficulty parameter, we 
do not observe a narrow task format that is superior to the others.", "However, we notice that the highest difficulty scores are obtained by QA datasets such as SQuAD2.0, NewsQA, QuAIL, ARC-Challenge, and MC-TACO.", "ANLI, which is created with adversarial model-in-the-loop crowdsourcing, also has of many hard examples.", "Impressionistically, training set size and creation date do not seem to correlate with either example's difficulty or discrimination parameters.", "Figure 5 shows the distribution of examples jointly according to their difficulty and log discrimination parameters.", "We notice a half-moon shape pattern in most datasets, which indicates that most of the discriminative examples are either very easy or very difficult.", "Referring to the ICC curve (Figure 2), this indicates that there is high agreement among strong models or weak models, which corresponds to one of the saturation points in the ICC curve (upper or lower).", "The only dataset that does not have this pattern is Winogrande, which is difficult for all models.", "ARC-Challenge, QuAIL, HellaSwag, CommonsenseQA, and MC-TACO show clusters with high density on the top right regions, indicating a large number of examples with high discrimination and difficulty scores.", "Other datasets have more scattered distributions.", "SNLI, MNLI, and MCScript show higher density on the bottom right regions, while NewsQA, SQuAD2.0, and MRQA-NQ show higher density on both the top and bottom right regions.", "Further analysis of the guessing parameters can be found in Appendix A. 4.2 Examples with Unanimous Responses When fitting ICC on examples that have only correct responses or only incorrect responses, the discrimination parameter is unconstrained.", "We find that these examples make up 4% of our data.", "13 of the 29 datasets contain at least one such example.", "Roughly 16% of NewsQA examples are incorrectly answered by all models, while the remaining 12 datasets have less than 10% of all correct or incorrect examples.", "To study the effect of examples with all correct or incorrect responses, we fit an Figure 5: Distributions of log discrimination ( log ) versus the difficulty ( ) parameters for each", "IRT model on responses excluding such examples and compare against parameters from the full set of responses.", "We find that the Pearson correlation for the discrimination at the 75 th percentile is 97.2%, with a median absolute difference of 0.016 and standard deviation of 0.015.", "MC-TACO, CommitmentBank, and WSC differ by more than 0.04.", "Further, we find that the Pearson correlation for the LEH score at the 75 th percentile is 98.9%, with a median absolute difference of 0.006 and standard deviation of 0.005.", "RTE, WiC, WinoGrande, QAMR, NewsQA, MRQA-NQ, MC-TACO, and BoolQ differ by 0.01.", "Given these high correlations, we do not exclude these examples when reporting our main results.", "Next, we analyze each task-type group in more detail, focusing on the example's scores around the 75 th percentile.", "Classification We observe that all datasets have moderate discrimination scores.", "Most ANLI examples have relatively high difficulty scores, while SNLI, MNLI, and CommitmentBank have the lowest difficulty scores.", "Sentence-Level Multiple Choice All of the datasets in this group have relatively low discrimination scores compared to span selection datasets.", "Figure 5 shows that MC-TACO, Winogrande, and CommonsenseQA all have a higher density of difficult examples, while for other datasets the distribution is 
more spread.", "Paragraph-Level Multiple Choice QuAIL and ARC-Challenge examples have high difficulty but moderate discrimination scores.", "As seen in Figure 5, these datasets have a higher density in the top right regions, showing a large proportion of difficult examples.", "ARCT shows moderate difficulty despite its known artifacts (Niven and Kao, 2019), indicating that it can still be challenging for models.", "Compared to other datasets, BoolQ has the highest number of easy examples.", "However, as it is a binary classification task, the random baseline performance is already high.", "To investigate this, we calculate the number of examples in each test set that have parameter below 0 .", "5 .", "In general, we find that 88% of the test examples have < 0 .", "5 , implying that most of the examples contributed to the inferences of , , and .", "BoolQ was the only exception in which approximately 56% of examples were assigned > 0 .", "5 .", "After filtering out these guessable examples in BoolQ, we find that its test examples have slightly higher discrimination scores with lit-tle change in difficulty scores.", "Span Selection We observe that span selection datasets are the most discriminative.", "However, in terms of difficulty, only SQuAD2.0 and NewsQA are among the top five.", "ters.", "We observe a positive correlation between ability and average model accuracy (Appendix B).", "Generally, within a model, the best validation checkpoint obtains the highest average model accuracy and/or ability score.", "Across models, ALBERT-XXL-v2 performs typically best.", "To better understand what kinds of examples are difficult or discriminating, we analyze the 20 examples with the lowest and highest scores for the discrimination and the difficulty parameters from five datasets: SQuAD2.0, MC-TACO, QuAIL, MNLI, and BoolQ.", "The first three are datasets with high discrimination and/or difficulty scores.", "MNLI and BoolQ have moderate discrimination and difficulty scores and low label entropy (three-class classification for MNLI and binary choice for BoolQ).", "We observe that the 20 most difficult BoolQ examples are labeled False (the minority class), while 19 of the 20 easiest examples are labeled True .", "For MNLI, we find that the 20 easiest MNLI examples are labeled neutral while the 20 hardest examples are a mixture of entailment and contradiction .", "In MC-TACO, each example contains a varying number of answer choices.", "For each choice, a model needs to predict whether the answer is True or False .", "We find that all answer choices in top 20 easiest examples are labeled False (the majority class), whereas for difficult examples the answer choices are either all True or a mix of True and False (Table 2).", "For SQuAD2.0 and QuAIL, we analyze the context length, the answerability of a question, and the lexical overlap between context and questions.", "However, we do not find any clear evidence that any of them might indicate the difficulty level of test examples.", "For BoolQ, we observe that the 20 most discriminating examples are all labeled False while 13 of the 20 least discriminating examples are labeled True .", "Table 2 shows the hardest and the easiest examples of MNLI and MC-TACO.", "Prior work on using IRT to evaluate NLP systems mostly relies on human responses.", "Hopkins and May (2013) use IRT to estimate the relative ability of a set of machine translation systems using responses from pairwise comparison of system outputs by human judges.", "Otani et al. 
(2016) extend this work by including a baseline translation to the pairwise comparison.", "Lalor et al. (2016, 2018) use IRT to identify hard examples in natural language inference data based on human responses.", "In a follow-up study, Lalor et al. (2019) compare human versus model responses and find that both are positively correlated and demonstrate the use cases of IRT parameters in training set filtering.", "Sedoc and Ungar (2020) use IRT to evaluate chatbot systems.", "The work by Martnez-Plumed et al. (2019) is the first to study the idea of using model responses (as opposed to human responses) for IRT in machine learning research.", "For NLU, Lalor and Yu (2020) use model responses to estimate difficulty parameters of several GLUE datasets for dynamic data selection in curriculum learning.", "In concurrent work, Rodriguez et al. (2021) study how IRT can be used for more nuanced leaderboard evaluations.", "Their experiments demonstrate that IRT can produce a more reliable ranking of models than the traditional metrics.", "They also show that IRT is not only useful for better understanding of individual examples in the dataset and task, but also effective in identifying annotation errors.", "For other dataset evaluations, in addition to providing a benchmark, the SuperGLUE paper also compares a set of candidate datasets using a fixed pool of machine learning models and human annotators (Nangia and Bowman, 2019).", "Wang et al. (2019a) investigate pretraining tasks and paradigms for effective transfer learning methods.", "Pruksachatkun et al. (2020a) study when and why intermediate-task training is useful for a given target task.", "Vu et al. (2020) introduce task embed-dings to predict the most beneficial source task for a given target task.", "Schlegel et al. 
(2020) propose an evaluation framework for machine reading comprehension (MRC) datasets and reveal some concerns regarding factual correctness and the presence of linguistic cues in existing MRC gold datasets.", "Given the large number of NLU datasets introduced in recent years, what kinds of datasets are effective to measure near-future progress?", "Our analysis on 29 test sets using IRT gives us reason to believe that, among the datasets we evaluate, Quoref, HellaSwag, and MC-TACO are best able to discriminate among current (and likely future) strong models.", "Meanwhile, SNLI, MNLI, and CommitmentBank seem to be saturated and ineffective for measuring future progress.", "Our analysis of examples' difficulty and discrimination parameters shows that datasets with many hard examples do not always contain examples that can discriminate between strong and weak models.", "We find that QA datasets are more difficult than other datasets.", "We also find span selection as the most effective task format for discriminating between strong and weak models.", "According to our LEH score, datasets that seem to be solved are unlikely to see improvements with future pretrained models.", "Therefore, the skills they intend to test are either largely solved, to the extent that they are solvable, or not well isolated (e.g., due to data artifacts).", "Focusing on the skills for which these solved test sets are originally designed to evaluate would most likely require a new dataset that better isolates the reasoning ability of interest.", "On the other hand, datasets that perform well according to our LEH metric show the best signs of being amenable to future hill-climbing.", "This does not entail that we should focus future research on these benchmarks, since we do not evaluate whether they test the skills they mean to test, or whether these skills are important for scientific or practical progress on natural language understanding.", "Finally, we argue that this evaluation should be done periodically, as datasets and models improve over time.", "For future work, one can study multidimensional variables for both model ability and item parameters, which could reveal a factorization of datasets by skills.", "Other potential directions include expanding our analysis to a broader range of tasks and analyzing the relationship between the estimated IRT parameters and the human-model gap.", "We thank John Lalor, Joo Sedoc, Nikita Nangia, Sebastian Schuster, Iacer Calixto, and the anonymous reviewers for feedback.", "This work has ben-efited from financial support to SB by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program), Samsung Research (un-der the project Improving Deep Learning using Latent Structure ), Apple, and Intuit, and from in-kind support by the NYU High-Performance Computing Center and by NVIDIA Corporation (with the donation of a Titan V GPU).", "This material is based upon work supported by the National Science Foundation under Grant No. 
1922658.", "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.", "We present an objective approach for comparing the difficulty of test sets examples across datasets and demonstrate it on a large set of established datasets.", "We expect this to contribute to the development of more challenging benchmarks for NLP datasets and potentially to develop more challenging models.", "One concern worth noting is that most of the evaluation datasets we study are crowdsourced or drawn from naturally occurring data.", "Thus, they likely demonstrate harmful stereotypes to some degree or even score models more highly for demonstrating them.", "In general, models that perform well on these datasets should not be deployed directly without additional measures to measure and eliminate any harms that stereotypes like these could cause in the target application settings." ]
[ "abstain", "abstain", "abstain", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "method", "other", "method", "method", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "result", "result", "result", "objective", "abstain", "abstain", "result", "method", "result", "abstain", "method", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain" ]
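The LEH computation described above can be made concrete with a short sketch. This is a minimal illustration under stated assumptions, not the reported implementation: it assumes a standard three-parameter logistic (3PL) item characteristic curve with discrimination a, difficulty b, and guessing parameter c, the exact ICC parameterization behind the reported scores may differ, and all parameter values below are hypothetical.

```python
import numpy as np

def icc_3pl(theta, a, b, c):
    """Item characteristic curve: P(correct | ability theta) under a
    3PL IRT model with discrimination a, difficulty b, and guessing c."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def leh_score(theta_best, a, b, c):
    """Locally Estimated Headroom: the derivative of the ICC with respect
    to ability, evaluated at the highest latent ability score among the
    evaluated models (ALBERT-XXL-v2 in the setting described above)."""
    s = 1.0 / (1.0 + np.exp(-a * (theta_best - b)))  # logistic term
    return (1.0 - c) * a * s * (1.0 - s)

theta_best = 2.0  # hypothetical ability of the strongest model
print(leh_score(theta_best, a=1.5, b=1.8, c=0.25))   # curve still rising: larger LEH
print(leh_score(theta_best, a=1.5, b=-2.0, c=0.25))  # example long saturated: LEH near 0
```

An example whose curve is already flat at the best model's ability, at either saturation point, contributes almost no headroom; examples whose curves are still rising there are the ones the IRT model expects to separate future, stronger systems.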
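The unanimous-response check from Section 4.2 is likewise simple to express over the binary response matrix of models by examples. The sketch below uses randomly generated stand-in data, since the actual 90-by-N response matrix is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in response matrix: rows are the 90 model predictions, columns are
# test examples; 1 = correct, 0 = incorrect. A few columns are made
# degenerate on purpose to mimic unanimous-response items.
responses = rng.integers(0, 2, size=(90, 1000))
responses[:, :25] = 1    # all models correct
responses[:, 25:40] = 0  # all models incorrect

col_sums = responses.sum(axis=0)
unanimous = (col_sums == responses.shape[0]) | (col_sums == 0)
print(f"{unanimous.mean():.1%} of examples have unanimous responses")  # 4.0%
filtered = responses[:, ~unanimous]  # responses used for the comparison IRT fit
```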
[ "Question Answering (QA) is in increasing demand as the amount of information available online and the desire for quick access to this content grow.", "A common approach to QA has been to fine-tune a pretrained language model on a task-specific labeled dataset.", "This paradigm, however, relies on scarce, and costly to obtain, large-scale human-labeled data.", "We propose an unsupervised approach to training QA models with generated pseudo-training data.", "We show that generating questions for QA training by applying a simple template on a related, retrieved sentence rather than the original context sentence improves downstream QA performance by allowing the model to learn more complex context-question relationships.", "Training a QA model on this data gives a relative improvement in F1 score on the SQuAD dataset over a previous unsupervised model of about 14%, and of 20% when the answer is a named entity, achieving state-of-the-art performance on SQuAD for unsupervised QA.", "Question Answering aims to answer a question based on a given knowledge source.", "Recent advances have driven the performance of QA systems to above- or near-human performance on QA datasets such as SQuAD (Rajpurkar et al., 2016) and Natural Questions (Kwiatkowski et al., 2019), thanks to pretrained language models such as BERT (Devlin et al., 2019), XLNet (Yang et al., 2019), and RoBERTa (Liu et al., 2019).", "Fine-tuning these language models, however, requires large-scale labeled data.", "Creating a dataset for every new domain is extremely costly and practically infeasible.", "The ability to apply QA models to out-of-domain data in an efficient manner is thus very desirable.", "Equal contribution.", "Work done during internship at the AWS AI Labs.", "Figure 1: Question Generation Pipeline: the original context sentence containing a given answer is used as a query to retrieve a related sentence containing matching entities, which is input into our question-style converter to create QA training data.", "This problem may be approached with domain adaptation or transfer learning techniques (Chung et al., 2018) as well as data augmentation (Yang et al., 2017; Dhingra et al., 2018; Wang et al., 2018; Alberti et al., 2019).", "However, here we expand upon the recently introduced task of unsupervised question answering (Lewis et al., 2019) to examine the extent to which synthetic training data alone can be used to train a QA model.", "In particular, we focus on the machine reading comprehension setting in which the context is a given paragraph, and the QA model can only access this paragraph to answer a question.", "Furthermore, we work on extractive QA, where the answer is assumed to be a contiguous substring of the context.", "A training instance for supervised reading comprehension consists of three components: a question, a context, and an answer.", "For a given dataset domain, a collection of documents can usually be easily obtained, providing context in the form of paragraphs or sets of sentences.", "Answers can be gathered from keywords and phrases from the context.", "We focus mainly on factoid QA, where the question concerns a concise fact.", "In particular, we emphasize questions whose answers are named entities, the most common type of factoid question.", "Entities can be extracted from text using named entity recognition (NER) techniques and used as the training instance's answer.", "Thus, the main challenge, and the focus of this paper, is creating a relevant question from a (context, answer) pair in an unsupervised manner.", "The recent work of Lewis et al. (2019) uses style transfer to generate questions for (context, answer) pairs, but shows little improvement over applying a much simpler question generator which drops, permutes, and masks words.", "We improve upon this work by proposing a simple, intuitive, retrieval- and template-based question generation approach, illustrated in Figure 1.", "The idea is to retrieve a sentence from the corpus similar to the current context, and then generate a question based on that sentence.", "Having created a question for all (context, answer) pairs, we then fine-tune a pretrained BERT model on this data and evaluate on the SQuAD v1.1 dataset (Rajpurkar et al., 2016).", "Our contributions are as follows: we introduce a retrieval- and template-based framework which achieves state-of-the-art results on SQuAD for unsupervised models, particularly when the answer is a named entity.", "We perform ablation studies to determine the effect of components in template question generation.", "We are releasing our synthetic training data and code (https://github.com/awslabs/unsupervised-qa).", "2 Unsupervised QA Approach We focus on creating high-quality, non-trivial questions which will allow the model to learn to extract the proper answer from a context-question pair.", "Sentence Retrieval: A standard cloze question can be obtained by taking the original sentence in which the answer appears from the context and masking the answer with a chosen token.", "However, a model trained on this data will only learn text matching and how to fill in the blank, with little generalizability.", "For this reason, we chose to use a retrieval-based approach to obtain a sentence similar to the one containing the answer, from which to create the question.", "For our experiments, we focused on answers which are named entities, which has proven to be a useful prior assumption for downstream QA performance (Lewis et al., 2019), as confirmed by our initial experiments.", "First, we indexed all of the sentences from a Wikipedia dump using the ElasticSearch search engine.", "We also extract named entities for each sentence in both the Wikipedia corpus and the sentences used as queries.", "We assume access to a named-entity recognition system, and in this work make use of the spaCy NER pipeline (https://spacy.io).", "Figure 2: Example of synthetically generated questions using generic cloze-style questions as well as a template-based approach.", "Then, for a given context-answer pair, we query the index, using the original context sentence as a query, to return a sentence which (1) contains the answer, (2) does not come from the context, and (3) has a lower than 95% F1 score with the query sentence, to discard highly similar or plagiarized sentences (see the filtering sketch below).", "Besides ensuring that the retrieved sentence and query sentence share the answer entity, we require that at least one additional matching entity appears in both the query sentence and the entire context, and we perform ablation studies on the effect of this matching below.", "These retrieved sentences are then fed into our question-generation module.", "Template-based Question Generation: We consider several question styles: (1) generic cloze-style questions where the answer is replaced by the token [MASK], and (2) the templated question Wh+B+A+?, as well as variations on the ordering of this template, as shown in Figure 2.", "Given the retrieved sentence in the form [Fragment A] [Answer] [Fragment B], the templated question Wh+B+A+? replaces the answer with a Wh-component (e.g., what, who, where), which depends on the entity type of the answer, and places the Wh-component at the beginning of the question, followed by sentence Fragment B and Fragment A (see the template sketch below).", "For the choice of wh-component, we sample a bi-gram based on prior probabilities of that bi-gram being associated with the named-entity type of the answer.", "This prior probability is calculated based on named-entity and question bi-gram starters from the SQuAD dataset.", "This information does not make use of full context-question-answer triples and can be viewed as prior information, not disturbing the integrity of our unsupervised approach.", "Additionally, the choice of wh-component does not significantly affect results.", "For template-based approaches, we also experimented with clause-based templates but did not find significant differences in performance.", "Settings: For all downstream question answering models, we fine-tune a pretrained BERT model using the Transformers repository (Wolf et al., 2019) and report ablation study numbers using the base-uncased version of BERT, consistent with Lewis et al. (2019).", "All models are trained and validated on generated pairs of questions and answers along with their contexts, and tested on the SQuAD development set.", "The training set differs for each ablation study and will be described below, while the validation dataset is a random set of 1,000 template-based generated data points, which is consistent across all ablation studies.", "We train all QA models for 2 epochs, checkpointing the models every 500 steps and choosing the checkpoint with the highest F1 score on the validation set as the best model.", "All ablation studies are averaged over two training runs with different seeds.", "Unless otherwise stated, experiments are performed using 50,000 synthetic QA training examples, as initial models performed best with this amount.", "We will make this generated training data public.", "Effect of retrieved sentences: We test the effect of retrieved vs. original sentences as input to question generation when using generic cloze questions.", "As shown in Table 1, using retrieved sentences improves over using the original sentence, reinforcing our motivation that a retrieved sentence, which may not trivially match the current context, forces the QA model to learn more complex relationships than just simple entity matching.", "The retrieval process may return sentences which do not match the original context.", "On a random sample, 15/18 retrieved sentences were judged as entirely relevant to the original sentence.", "This retrieval is already quite good, as we use a high-quality ElasticSearch retrieval and use the original context sentence as the query, not just the answer word.", "While we do not explicitly ensure that the retrieved sentence has the same meaning, we find that the search results with entity matching give largely semantically matching sentences.", "Table 1: Effect of original vs. retrieved sentences for generic cloze-style question generation (Training procedure / EM / F1): Cloze-style original, 17.36 / 25.90; Cloze-style retrieved, 30.53 / 39.61.", "Additionally, we believe the sentences which have loosely related meaning may act as a regularization factor which prevents the downstream QA model from learning only string-matching patterns.", "Along these lines, Lewis et al. (2019) found that a simple noise function of dropping, masking and permuting words was a strong question generation baseline.", "We believe that loosely related context sentences can act as a more intuitive noise function, and investigating the role of the semantic match of the retrieved sentences is an important direction for future work.", "For the sections which follow, we only show results with retrieved sentences, as the trend of improved performance held across all experiments.", "Effect of template components: We evaluate the effect of individual template components on downstream QA performance.", "Results are shown in Table 2.", "Wh-template methods improve largely over the simple cloze templates.", "Wh + B + A + ? performs best among the template-based methods, as having the Wh word at the beginning most resembles the target SQuAD domain, and switching the order of Fragment B and Fragment A may force the model to learn more complex relationships from the question.", "We additionally test the effect of the wh-component and the question mark added at the end of the sentence.", "Using the same data as Wh + B + A + ? but removing the wh-component results in a large decrease in performance.", "We believe that this is because the wh-component signals the type of possible answer entities, which helps narrow down the space of possible answers.", "Removing the question mark at the end of the template also results in decreased performance, but not as large as removing the wh-component.", "This may be a result of BERT pretraining, which expects certain punctuation based on sentence structure.", "We note that these questions may not be grammatical, which may have an impact on performance.", "Improving the question quality makes a difference in performance, as seen from the jump from cloze-style questions to template questions.", "The ablation studies suggest that a combination of question relevance, through matching entities, and question formulation, as described above, determines downstream performance.", "Table 2 (excerpt): Effect of template components on downstream QA performance (Template data / EM / F1): Cloze, 30.53 / 39.61; A + Wh + B + ?, ...", "Balancing those two components is an interesting problem, and we leave improving grammaticality and fluency through means such as language model generation for future experiments.", "In the last two rows of Table 2, we show the effect of using the wh bi-gram prior on downstream QA training.", "Using the most-common wh word by grouping named entities into 5 categories according to Lewis et al. (2019) performs very close to the best-performing wh n-gram prior method, while using a single wh-word (what) results in a significant decrease in performance.", "These results suggest that information about named entity type signaled by the wh-word does provide important information to the model, but further information beyond wh-simple does not improve results significantly.", "Effect of filtering by entity matching: Besides ensuring that the retrieved sentence and query sentence share the answer entity, we require that at least one additional matching entity appears in both the query sentence and the entire context.", "Results are shown in Table 3.", "Auxiliary matching leads to improvements over no matching when using template-based data, with best results using matching with both query and context.", "Matching may filter some sentences whose topics are too far from the original context.", "We leave further investigation of the effect of retrieved sentence relevance to future work.", "Effect of synthetic training dataset size: Notably, Lewis et al. (2019) make use of approximately 4 million synthetic data points in order to train their model.", "However, we are able to train a model with better performance with far fewer examples, and show that such a large subset is unnecessary for their released synthetic training data as well.", "Table 3: Effect of query and context matching for retrieved input to the question generation module on downstream QA performance (Matching procedure / EM / F1): No matching, 41.02 / 50.81; Query matching, 44.76 / 54.87; Context matching, 44.22 / 55.35; Query + Context matching, 46.09 / 56.82.", "Figure 3 shows the performance from training over random subsets of differing sizes and testing on the SQuAD development data.", "We sample a random question for each context from the data of Lewis et al. (2019).", "Even with as few as 10k data points, training on our synthetically generated template-based data with auxiliary matching outperforms the results from the ablation studies in Lewis et al. (2019).", "Using our template-based data consistently outperforms using that of Lewis et al. (2019).", "Training on either dataset shows similar trends; performance decreases after increasing the number of synthetic examples past 100,000, likely due to a distributional mismatch with the SQuAD data.", "We chose to use 50,000 examples for our final experiments and other ablation studies, as this number gave good performance in initial experiments.", "We compare training on our best template-based data with the state of the art in Table 4.", "SQuAD F1 results reflect performance on the hidden SQuAD test set.", "We report single-model numbers; Lewis et al. (2019) report an ensemble method achieving 56.40 F1 and a best single model achieving 54.7 F1.", "We make use of the whole-word-masking version of BERT-large, although using the original BERT-large gives similar performance of 62.69 on the SQuAD dev set.", "Table 4: A comparison of top results using the BERT-large model (Model Choice / SQuAD Test F1 / SQuAD NER F1): BERT-large (ours), 64.04 / 77.55; BERT-large (Lewis et al., 2019), 56.40 / 64.50.", "We report numbers on the sample of SQuAD questions whose answers are named entities, which we refer to as SQuAD-NER.", "The subset corresponding to the SQuAD development dataset has 4,338 samples, and may differ slightly from Lewis et al. (2019) due to differences in NER preprocessing.", "We also trained a fully-supervised model on the SQuAD training dataset with varying amounts of data and found our unsupervised performance equals the supervised performance trained on about 3,000 labeled examples.", "In this paper we introduce a retrieval-based approach to unsupervised extractive question answering.", "A simple template-based approach achieves state-of-the-art results for unsupervised methods on the SQuAD dataset of 64.04 F1, and 77.55 F1 when the answer is a named entity.", "We analyze the effect of several components in our template-based approaches through ablation studies.", "We aim to experiment with other datasets and other domains, incorporate our synthetic data in a semi-supervised setting, and test the feasibility of our framework in a multi-lingual setting.", "We thank Xiaofei Ma for fruitful discussions on the project." ]
[ "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "other", "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "objective", "abstain", "method", "abstain", "objective", "abstain", "method", "objective", "method", "abstain", "method", "abstain", "abstain", "result", "method", "objective", "method", "other", "abstain", "method", "method", "method", "result", "abstain", "method", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "objective", "other" ]
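The retrieval constraints in the Sentence Retrieval paragraph above can be written as a filtering predicate. This is an illustrative reconstruction rather than the released code: token_f1 is a SQuAD-style token-overlap F1, entities_of stands in for a hypothetical NER helper (e.g., a thin wrapper around the spaCy pipeline), and the substring test is a simplification of the "does not come from the context" constraint.

```python
def token_f1(a: str, b: str) -> float:
    """Token-overlap F1 between two sentences (SQuAD-style)."""
    ta, tb = a.split(), b.split()
    pool, common = list(tb), 0
    for t in ta:
        if t in pool:
            pool.remove(t)
            common += 1
    if common == 0:
        return 0.0
    p, r = common / len(ta), common / len(tb)
    return 2 * p * r / (p + r)

def keep_candidate(retrieved, query_sentence, context, answer, entities_of):
    """Constraints on a retrieved sentence: it must contain the answer,
    must not come from the context itself, must not be a near-duplicate
    of the query (token F1 below 0.95), and must share at least one
    additional entity with both the query sentence and the full context."""
    if answer not in retrieved or retrieved in context:
        return False
    if token_f1(retrieved, query_sentence) >= 0.95:
        return False
    shared = (entities_of(retrieved) & entities_of(query_sentence)
              & entities_of(context))
    return len(shared - {answer}) >= 1
```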
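The Wh+B+A+? template itself takes only a few lines. In the sketch below, the wh-word comes from a fixed entity-type mapping, a simplified stand-in for the wh bi-gram prior sampled from SQuAD statistics; WH_BY_ENTITY and make_wh_question are illustrative names, and the answer is assumed to occur verbatim in the retrieved sentence.

```python
# Simplified wh-word choice: the approach above samples a wh bi-gram from a
# prior over named-entity types; here we use a fixed most-common mapping.
WH_BY_ENTITY = {"PERSON": "Who", "GPE": "Where", "LOC": "Where",
                "DATE": "When", "ORG": "What", "CARDINAL": "How many"}

def make_wh_question(sentence: str, answer: str, entity_type: str) -> str:
    """Build a Wh+B+A+? question from a retrieved sentence of the form
    [Fragment A][Answer][Fragment B]: replace the answer with a
    wh-component, put it first, then Fragment B, then Fragment A."""
    start = sentence.index(answer)
    fragment_a = sentence[:start].strip(" ,.")
    fragment_b = sentence[start + len(answer):].strip(" ,.")
    wh = WH_BY_ENTITY.get(entity_type, "What")
    return f"{wh} {fragment_b} {fragment_a}?"

print(make_wh_question("Turing was born in London in 1912.", "London", "GPE"))
# -> "Where in 1912 Turing was born in?"
```

As the example output shows, such questions are often ungrammatical, which is consistent with the note above that template questions need not be grammatical to provide useful training signal.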
[ "The sentence is a fundamental unit of text processing.", "Yet sentences in the wild are commonly encountered not in isolation, but unsegmented within larger paragraphs and documents.", "Therefore, the first step in many NLP pipelines is sentence segmentation.", "Despite its importance, this step is the subject of relatively little research.", "There are no standard test sets or even methods for evaluation, leaving researchers and engineers without a clear footing for evaluating and selecting models for the task.", "Existing tools have relatively small language coverage, and efforts to extend them to other languages are often ad hoc.", "We introduce a modern context-based modeling approach that provides a solution to the problem of segmenting punctuated text in many languages, and show how it can be trained on noisily-annotated data.", "We also establish a new 23-language multilingual evaluation set.", "Our approach exceeds high baselines set by existing methods on prior English corpora (WSJ and Brown corpora), and also performs well on average on our new evaluation set.", "We release our tool, ERSATZ, as open source.", "In many ways, the sentence is the fundamental unit of text in natural language processing (NLP).", "From the user perspective, tasks such as sentiment analysis, POS tagging, or machine translation consume sentences and emit classifications, annotations, or transductions of those inputs.", "Even tasks that operate at the paragraph or document level, such as coreference resolution or summarization, often make use of sentences internally.", "Yet at the same time, sentences in the wild rarely exist with marked sentence boundaries.", "For many languages, punctuation serves as a cue for these boundaries, but this punctuation is ambiguous, as we might see with acronyms or abbreviations in English.", "Table 1 (examples of ambiguity in punctuated contexts): en: '... in the U.S. House of Representatives ...' (no boundary) vs. '...in the U.S. ✓ Most Mexican Spanish ...' (boundary); cs: '... podnikanie s.r.o. a hlavním investorem ...' vs. '... a Systémy s.r.o. ✓ V roce 2017 ...'; ro: ...", "When segmented sentences are required, they must be split using a sentence segmentation technique that can resolve these ambiguities.", "Despite its importance and early position in the NLP pipeline, sentence segmentation is the subject of relatively little research.", "Widely-used tools such as that in Moses (Koehn et al., 2007) are implemented with ad-hoc, manually-designed, language-specific rules, leaving them vulnerable to the long tail of languages and language phenomena.", "The little comparative work that does exist generally focuses on techniques that work in English or other Indo-European languages (Palmer and Hearst, 1997; Gillick, 2009).", "Secondly, there is not a well-understood methodology for training segmenters that do not make narrow assumptions about the features or characteristics of the languages they support.", "At the heart of this is the lack of labeled training data.", "Manually-split datasets that accompany annotation projects tend to be small, and larger datasets are typically (imperfectly) segmented by the very tools whose performance is under question.", "Tools such as NLTK (Bird and Loper, 2004), which packages Punkt (Kiss and Strunk, 2006), provide an unsupervised method to train a model, but it is unclear what the effect is when switching to non-Latin-script languages, or how a more supervised approach would handle such noisy data.", "Finally, and perhaps most importantly, there are no standard test sets or even metrics for evaluating segmenter performance, leaving researchers and engineers with no objective way to determine which one is best.", "The work described in this paper is aimed at these problems.", "We propose a simple window-based model and semi-supervised training paradigm for the segmentation of punctuated text (Section 3).", "We frame the task as binary classification applied to a set of candidate punctuation locations defined by a regular expression.", "Leveraging the similarity of the task across languages (Table 1), we show that our model is able to successfully bootstrap from multilingual data that has been imperfectly segmented.", "We define a common metric that works across different tools (Section 4), and assemble a multilingual test suite by semi-automatically splitting existing (undersegmented) test sets (Section 5), providing a basis for proper comparison.", "We release these data splits along with our tool, ERSATZ, as open source.", "2 Background A sentence is a sequence of grammatically linked words that conveys a complete thought.", "The term can be difficult to define in a precise manner that will not admit any exceptions, and in applications like machine translation, there are many times where the basic input unit is not a sentence, but a sentence fragment, such as a headline or an item from a list.", "In this work, we skirt these complexities, choosing instead to focus on the most common scenario, in which we are dealing with standard written language.", "For this, we adopt a functional definition: a sentence is a group of words that ends with a sentence-ending punctuation mark, such as (for many languages) a period, question mark, or exclamation point.", "Since punctuation is often used for non-sentence-ending purposes as well, the primary challenge for sentence segmentation is resolving this ambiguity for each segmentation candidate.", "Research in sentence segmentation (alternately called sentence boundary detection) has been limited in scope.", "Prior work either introduces methods that work under a set of assumptions unique to Latin-script languages (the existence and importance of casing, word length, or whitespace), or tackles new languages ad hoc, making adaptation to new languages and domains difficult.", "Statistical methods use text-based features such as casing, punctuation, or length of surrounding words to make decisions around punctuation.", "The earliest work we found (Riley, 1989) considered all sentence boundaries and used decision trees based on these features.", "Gillick (2009) trained two statistical models in the form of an SVM and a Naive Bayes classifier.", "Palmer and Hearst (1997) introduced Satz and shifted the approach by focusing only on potential sentence boundaries near sentence-ending punctuation, using part-of-speech distribution vectors as input to a feed-forward neural network; they additionally applied their technique to German and French.", "In order to work without labeled data, Kiss and Strunk (2006) used heuristics to craft scores based on likelihood values of occurrences of tokens, punctuation, casing, and token length, and then manually tuned a score threshold to indicate a sentence boundary.", "This work expanded the most multilingually, considering 10 Indo-European languages as well as Estonian and Turkish.", "Other work has focused on specific non-English languages.", "Xue and Yang (2011) study Chinese and dissect the theoretical reasons behind segmenting Chinese sentences to match their English equivalents.", "To segment Thai, which lacks punctuation, Zhou et al. (2016) use POS-taggers.", "Some work has tackled the problem of domains.", "Sanchez (2019) approaches the problem of legal text, which has a set structure without punctuation; other approaches (Wang et al., 2019; Rehbein et al., 2020) have investigated speech, which lacks both punctuation and written textual structure.", "A popular splitter is packaged in the Moses toolkit (Koehn et al., 2007), which works by splitting on all sentence-final punctuation unless the preceding context is a non-breaking prefix, a hand-built, language-specific list of acronyms and abbreviations.", "This approach cannot resolve the ambiguity where punctuation legitimately exists at the end of a sentence, and is indifferent to novel abbreviations at inference time.", "It produces a conservative segmenter that is high precision (unlikely to oversegment) but low recall (prone to undersegmenting).", "This raises the question of what effect reliance on this tool has had on the construction of recent massive bitexts, such as CCMatrix (Schwenk et al., 2019b; Section 4.3).", "Gillick (2009) credits a 0.75% increase in accuracy to a reduction of summarization error by a factor of four.", "Errors in segmentation may therefore affect the top matches for a sentence when doing bitext construction.", "Another popular splitter is SpaCy, which has not been described or evaluated anywhere, as far as we could tell.", "With sentence splitting being a crucial piece of modern corpus creation for machine translation and other tasks, the lack of approaches and rigorous comparisons between tools limits the field.", "Additionally, with the research field moving towards (often massively) multilingual settings, the need to build multilingual tools and compare them in a proper scientific framework is both important and evident.", "Our general approach is to treat sentence segmentation as a binary classification problem, predicting sentence-internal (✗) or sentence-ending (✓) positions.", "The input to the model (Section 3.1), shown in Figure 1, is the concatenated left and right token contexts, as depicted in Table 1.", "Predictions for both training and inference are done only at predefined candidate sites, which are determined by a regular expression (Section 3.2).", "We then train in a semi-supervised setting where many of the labels may be missing (Section 3.3).", "Our basic model is depicted in Figure 1.", "The encoder is a two-layer Transformer (Vaswani et al., 2017).", "Our hyperparameter search incorporates vocabulary size (V), embedding size (e), and left and right context sizes (l and r).", "We also experiment with simpler architectures (Section 8.4), including single blocks of fully-connected linear layers with a TanH activation.", "We initially experimented with various activation functions and layers (Sigmoid, ReLU, pooling layers, etc.) but found that TanH performs best.", "These simpler models typically traded increased throughput for slight degradations in F1.", "Our training objective is binary cross-entropy loss.", "Our model works with segmentation candidate sites for both training and inference.", "This can be done in a fairly general, language-agnostic way.", "Let P be the set of all punctuation, and P_e ⊆ P the set of sentence-ending punctuation.", "For a given input, we examine every character boundary and match based on two regular expressions for the left and right context, respectively: (.* P_e P*): the left context ends with sentence-final punctuation, optionally followed by any amount of punctuation; and ([^0-9].*): the right context does not start with a number (see the candidate-site sketch below).", "Raw text examples can be found in Table 1, and tokenized examples with fixed context sizes are shown in Table 2.", "Input to the model is in the form of documents.", "A linear pass over the data identifies all candidate sites and assembles them into a batch, with their associated left and right contexts.", "At training time, instances are extracted with their labels: ✗ for line-internal sites, and ✓ for sites that occur between input lines.", "At inference time, the trained classifier is applied, and newlines are inserted where ✓ is predicted.", "This general definition carries benefits and risks.", "On the positive side, it allows us to work with many languages without having to develop language-specific rules.", "Table 2: Tokenized training instances with fixed context sizes (Label / Left context / Right context).", "It also benefits efficiency, boosting both training speed and inference performance.", "On the downside, this loose definition can permit oversegmentation, since it permits, for example, word-internal segmentation in English and other languages.", "The criteria for identifying candidate sites can be easily altered to be more constrained or more general depending upon the use case, and the list of punctuation can be extended to support more languages, if necessary.", "Our default list covers many languages.", "Our punctuation set (by Unicode name) is: Full Stop, Question Mark, Exclamation Mark, Ellipsis, Ideographic Full Stop, Devanagari Danda, Arabic Question Mark, Arabic Full Stop, and Khmer Sign Khan.", "3.3 Training data As noted in our motivation, sentences in the wild are often not segmented but are part of paragraphs and documents.", "It is therefore unsurprising to find many segmentation errors in existing corpora.", "A particular problem one can observe is that of under-segmentation, perhaps resulting from application of conservative segmentation tools.", "This means the raw training data may contain many false negatives (✓ sites mistakenly labeled as ✗).", "Training a sentence segmentation model therefore presents a chicken-and-egg problem.", "We aim to train directly on existing data created for MT purposes, despite its having been either segmented by imperfect segmenters, or never segmented.", "While some data is undersegmented, the vast majority of the end-of-line contexts should be correct, since they are either (a) natural existing boundaries at the end of a paragraph or document, or (b) the result of applying a conservative segmenter.", "We therefore hope to train classifiers even despite this noise.", "Because we are considering a binary classification problem (and using the associated binary cross-entropy loss), we additionally consider adding a weight to the ✓ class in order to give more credence to these contexts.", "Generally, we find no weight (λ = 1.0) is sufficient in punctuated English, but increasing the weight (λ = 20) improved performance in some languages and in the multilingual setting, where the data is noisier.", "For punctuation at the end of a line, the right context is taken from the tokens at the beginning of the next sentence.", "In Section 7.3, we look into whether it matters if this right context is the true document context, or whether a random sentence will serve.", "For evaluation, we begin by removing sentences that do not end in punctuation, since none of the tools are able to segment these.", "We then concatenate the test set into a single line, joining sentences with a space.", "Evaluation among different tools contains subtle complexities.", "First, some tools normalize or tokenize the input text, complicating alignment between the input and the output.", "Second, different tools may attempt to segment at different subsets of input string locations, which might unfairly bias the results in favor of conservative tools.", "Finally, if we permit segmentation at any point of the input, there is a large class imbalance between ✗ and ✓.", "The class imbalance advocates for F1 as a natural metric.", "The use of F1 also addresses the second issue, since only the gold positive class (✓) factors into the score.", "The first two issues also require that we align a segmenter's output with the gold-standard segmented text.", "Since the texts are largely similar, we can do this efficiently using a modified Levenshtein distance that only considers a fixed maximum distance between any two characters.", "While the distance itself can also be considered in comparing tools, we do not report these distances, and instead use the technique to align text within the window.", "Once the text is aligned, we compute F1 against the set of ✓ symbols in the gold text (see the evaluation sketch below).", "An example is depicted in Figure 2.", "5 Evaluation: Data We have noted the difficulty of making use of imperfect training data, and how we hope to work around it (Section 3.3).", "Unfortunately, this workaround cannot be used for evaluation, where we need gold-standard data.", "We construct test sets from the WMT News Translation test sets (Barrault et al., 2020), which provide decent-size test sets in many languages.", "We manually corrected all sentence segmentations.", "iu was left uncorrected due to the fact that available bitext often aligned 'sentences' with singular or compound sentences in English, and due to a lack of automatic translation corresponding to sentences.", "While some sets were already well-segmented, some more recent years were extremely under-segmented.", "In Table 5, we show the test sets' line counts before and after manual correction.", "Additionally, we report the percentage of candidate sites with a true ✓ label, which provides a measure of the ambiguity of the punctuation.", "Many positions occur in acronyms, such as 'U.S.A.', embedded quotes, ellipses, or in company names such as 'Yahoo!'.", "We consider three language settings: (i) monolingual English, (ii) a multilingual setting that includes the set of recent WMT languages plus Arabic, and (iii) a much larger multilingual setting that includes the previous languages plus all languages with at least 10k lines in the WikiMatrix (Schwenk et al., 2019a) dataset.", "Starting with the English setting, we investigate the performance of a basic model and vary parameters such as context size, embedding size, and vocabulary size.", "After finding an optimal setting, we expand to the first multilingual setting and repeat.", "We train a single multilingual model that is agnostic of language and does not need language specification as input.", "Similar to the monolingual setting, we vary the aforementioned parameters, and compare the best model to baselines (Section 6.3).", "In order to test expandability, we then train with the same parameters on the largest set of languages (using the additional WikiMatrix data), and compare to the previous model's performance.", "While we do not widely experiment with additional monolingual settings, we train monolingual models in each language to compare against the multilingual models' performance.", "We report the comparison of these three settings to baselines in Table 5.", "6.1 Datasets We train our English model on a subset of the WSJ and the English News Commentary datasets provided by WMT (http://data.statmt.org/news-commentary/v15/).", "We use Sections 1-2 and 7-23 of the WSJ for training, Section 24 for validation, and Sections 03-06 for test, in order to mirror the splits in Bird and Loper (2004).", "To expand to a multilingual setting, we consider the set of all WMT Task languages and Arabic (23 in total), allowing us to leverage the various monolingual datasets (Joanis et al., 2020) released as part of the WMT workshops, often using News Commentary datasets, as well as WikiMatrix (Schwenk et al., 2019a), CCMatrix (Schwenk et al., 2019b), and Global Voices (Nguyen and Daumé III, 2019).", "For validation data, we use WMT test sets when available, and IWSLT (Cettolo et al., 2017) for Arabic.", "We experimented with (i) balancing the data so each language has equal amounts of data, (ii) normalizing the amount of data per language based on the relative ambiguity (measured by the percent of candidate sites labeled as true ✓), and (iii) using all available data.", "We find that the third method performs the best and thus report under this setting.", "In the larger multilingual setting, we consider all WikiMatrix languages with more than 10k unique lines (64 additional languages) and do not expand the validation set.", "For a complete list of datasets, please see Table 7 in Appendix A.",
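The candidate-site definition from Section 3.2 can be sketched directly with regular expressions. This is a rough, deliberately inefficient illustration rather than the tool's implementation: the punctuation sets below are abbreviated stand-ins for the default list quoted above, and candidate_sites is an illustrative name.

```python
import re

# Sentence-ending punctuation (abbreviated stand-in for the default set:
# full stop, question/exclamation marks, ellipsis, ideographic full stop,
# Devanagari danda, Arabic question mark/full stop, Khmer khan).
P_END = ".?!\u2026\u3002\u0964\u061F\u06D4\u17D4"
P_ALL = P_END + ",;:)\"'"  # punctuation that may trail the final mark

LEFT = re.compile(r".*[" + re.escape(P_END) + r"][" + re.escape(P_ALL) + r"]*$")
RIGHT = re.compile(r"[^0-9].*")

def candidate_sites(text):
    """Yield character offsets where the left context ends in sentence-ending
    punctuation (optionally followed by more punctuation) and the right
    context does not start with a digit."""
    for i in range(1, len(text)):
        if LEFT.match(text[:i]) and RIGHT.match(text[i:]):
            yield i

text = "She worked in the U.S. House. Then she moved."
# Candidates include the periods inside "U.S."; rejecting those while
# accepting the one after "House." is exactly the classifier's job.
print(list(candidate_sites(text)))
```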
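The alignment-based F1 from the evaluation section can also be sketched. The described approach aligns a segmenter's output to the gold text with a windowed, modified Levenshtein distance so that tools that normalize or tokenize their input can still be scored; the simplified stand-in below handles only whitespace differences, indexing boundaries by counts of non-space characters, which is enough to convey the metric.

```python
def boundary_offsets(segmented: str):
    """Map each newline in a segmented text to the count of non-space
    characters preceding it, so outputs that only differ in whitespace
    can be compared position-by-position."""
    offsets, n = set(), 0
    for ch in segmented:
        if ch == "\n":
            offsets.add(n)
        elif not ch.isspace():
            n += 1
    return offsets

def segmentation_f1(hyp: str, gold: str) -> float:
    """F1 of predicted sentence boundaries against gold boundaries."""
    h, g = boundary_offsets(hyp), boundary_offsets(gold)
    tp = len(h & g)
    if tp == 0:
        return 0.0
    p, r = tp / len(h), tp / len(g)
    return 2 * p * r / (p + r)

gold = "She was in the U.S. House.\nThen she left."
hyp = "She was in the U.S.\nHouse.\nThen she left."
print(round(segmentation_f1(hyp, gold), 3))  # 0.667: one spurious split
```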
"6.2 Training For each vocabulary size, we train a SentencePiece (Kudo and Richardson, 2018) model over the training data.", "We use a binary cross-entropy loss over the labels, the Adam optimizer with a learning rate of 0.0001, and a λ of 1.0 (English) and 20.0 (multilingual) on the ✓ class (with the exception of the experiments in Section 7.4); see the training sketch below.", "We use a batch size of 25k instances, and compute F1 over the validation data every 500 batches, saving the model with the highest inference-time F1 score.", "This is the collective F1 score across all languages in the multilingual settings.", "If the model has not improved in 15 validations, training terminates.", "The models were trained on a Tesla V100 GPU.", "The monolingual models took approximately 2 hours to train, while the multilingual models took approximately 10-15 hours.", "We use the following existing tools as baselines:", "Splitta (Gillick, 2009) ships with both SVM and Naive Bayes models.", "It targets English texts.", "We found similar performance and only report the Naive Bayes scores.", "NLTK Punkt Kiss and Strunk (2006) introduce an unsupervised training method for this task which uses frequency of occurrences of input features such as casing, punctuation, and length in order to segment.", "Pretrained models for 18 languages (labeled as PUNKT in Table 5) are packaged with NLTK.", "NLTK additionally provides the framework to train a new model.", "We use this to train an additional model on all data (to simulate a multilingual model) and report the results in Table 5 as PUNKTML.", "PUNKT (and thus PUNKTML) does not segment around non-Latin punctuation.", "Moses Sentence Splitter uses a list of predefined acronyms and abbreviations for each language.", "If the left token is in this list, it does not split.", "This circumvents, rather than resolves, the ambiguity of contexts such as 'in the U.S.'", "We first explore common questions and concerns while focusing on English data and results. We have three main parameters to study: context size, embedding size, and vocabulary size. We additionally consider how the training data affects results, both in terms of relative noise in class labels and in training on shuffled sentences instead of documents. In general, we find our technique creates a monolingual English model (Table 3) that outperforms the baselines.", "Starting with a minimal model with an embedding size of 32 and a vocabulary size of 125, we investigate whether such a small model can solve this problem. Our method is rooted in a contextual encoding of the subword tokens inside its context windows, and may benefit from increasing the size of these windows. At the operating point with a very small embedding and vocabulary size, the window size is the determining factor on performance. The results on English in Figure 3 show that a minimal amount of left and right context is necessary; however, left context is more beneficial than right context.", "We consider whether increasing the size of the model by doubling the embedding size and quadrupling the vocabulary size can produce better results. While varying the context windows (as seen in Figure 3) can result in increasingly higher scores, varying embedding size and vocabulary size did not produce the same effect.",
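The training objective in Section 6.2 is weighted binary cross-entropy over candidate sites, which is compact to sketch in PyTorch. The sketch is illustrative only: for brevity it uses the simpler linear-plus-TanH variant mentioned in Section 3.1 as a stand-in for the two-layer Transformer encoder, and the feature shapes and batch are hypothetical.

```python
import torch
import torch.nn as nn

# Weight (lambda) on the sentence-ending class: 1.0 for clean English,
# 20.0 in the noisier multilingual setting, as described above.
lam = 20.0

# Stand-in model: 6 left + 4 right context tokens with 128-dim embeddings,
# flattened into one feature vector per candidate site.
model = nn.Sequential(nn.Linear(10 * 128, 128), nn.Tanh(), nn.Linear(128, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([lam]))

def train_step(features, labels):
    """features: (batch, 10*128) context encodings; labels: 1.0 = boundary."""
    opt.zero_grad()
    logits = model(features).squeeze(-1)
    loss = loss_fn(logits, labels)
    loss.backward()
    opt.step()
    return loss.item()

# Hypothetical batch of candidate sites.
x = torch.randn(32, 10 * 128)
y = (torch.rand(32) < 0.1).float()
print(train_step(x, y))
```

With pos_weight set to the λ of 20 used multilingually, boundary labels contribute twenty times more to the loss, compensating for the false-negative ✗ labels that undersegmented training data introduces.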
Keeping a fixed context window, we find that any given change in embedding size or vocabulary size increases F1 score by no more than 0.6%. While necessary to find the optimal model, it is clear that the context size is more important to experimentation. We note that a vocabulary size of 2000 tends to perform worse than smaller sizes while vocabulary sizes of 125 and 500 perform equally well when paired with any embedding size. Each of our monolingual models reported in Table 5 is the result of a grid search over various vocab sizes, and lambda weight (3.3). We keep context sizes of left (6) and right (4) and embedding size (128) constant.", "Because released monolingual data is often cleaned with sentences being removed and shuffled, it is unreasonable to assume that a set of consecutive sentences will always be available for training.", "In order to justify using this data, we repeat a subset of the previous English experimenttesting context and embedding sizes by training the model on the same data that has been shuffled. We test on the same validation data that has not been shuffled and retain its document order. In Table 4, we show that shuffling the training data has little impact on performance and document context is unnecessary in this punctuated setting.", "Uncleaned, unfiltered Wikipedia dumps do not have sentence boundaries in them. The smallest unit is the paragraph. Data scraped from internet sites is likely to have a similar form and much of our monolingual data is not guaranteed to be segmented. In order to justify that this approach works without already having segmented data, we show that we can achieve similar results as our previous English results in this setting. We train on one million randomly-selected paragraphs from an English Wikipedia dump. While many labels are now incorrect due to paragraphs being unsegmented, we assume the (cid:88) class is relatively noise-free.", "Because we already established that shuffling the data does not affect performance in this setting, the random selection is sufficient. While maintaining previously chosen hyper-parameterssuch as context sizes, learning rate, and dropoutwe search among potential values to use as a weight for the (cid:88) label. We find that increasing the value to 200 . 0 achieves the highest F1 of 97 . 5 . An unweighted model performs poorly. While still distant from the cleanly-trained models, it performs significantly better than the poorer baselines. Comparison to our other English models can be seen in Table 4.", "After outperforming current baselines in a monolingual English setting, we generalized our approach to work multilingually. The multilingual model can segment text irrespective of input language.", "In parallel to the monolingual conditions, we train two-layer transformer models with 6 tokens of left context, and 4 tokens of right context with 128 embedding size. While we did experiment with scaling these for the multilingual model, we found little effect. We additionally scale the vocabulary size to 12,000 to accommodate the larger character sets in Chinese and Japanese. Because more of the additional languages have undersegmented data, we searched over potential lambda weights for the (cid:88) class and report the best configuration ( = 20 . 0 ) in Table 5.", "Results of ERSATZ and baselines can be found in Table 5. In all cases, ERSATZ is at least competitive with baselines, if not outperforming them. 
Although most differences are small, it outperforms SpaCy in all languages and often outperforms both Punkt and Moses.", "The Moses splitter is an interesting case. It identifies split points via a mix of general and language-specific regular expressions, which are then filtered against a curated list of non-breaking prefixes.", "This results in a conservative segmenter that will not (for example) allow a sentence to end with the token U.S.", "As such, its high performance is notable.", "However, the comparison is arguably unfair, since the splitter was likely built and refined against the news datasets that constitute our WMT test sets.", "This approach is therefore effective in this domain, but may not generalize.", "Our single multilingual model, trained on noisy data, performs nearly identically.", "The period is by far the most ambiguous form of punctuation and is frequently used as an abbreviation marker.", "Other scripts use their own punctuation; Hindi, for example, specifies a particular marker (the Devanagari danda) as a sentence-ending punctuation that is rarely used sentence-internally.", "In these cases, ambiguity is introduced when alternative punctuation (such as "." or "...") is used.", "Additionally, even languages with the same scripts may not have the same level of ambiguity.", "French has the smallest number of punctuated contexts occurring sentence-internally within our test set, while English has the most.", "We note that the multilinguality of our model hurts the near-perfect performance that we see in the monolingual English models.", "We additionally note that some monolingual models perform worse than the multilingual model (see pl in Table 5).", "We hypothesize that this may be due to a lack of data: the additional languages contain similar contexts, so the model may learn more about casing, punctuation, and length from the additional data.", "While we note that it is difficult to evaluate many of the world's languages due to a lack of gold-standard test data, we test for scalability by including additional languages (as described in 6) during training and noting any changes in performance on the evaluable languages.", "We include 64 additional languages (see Table 7 in the Appendix for a comprehensive list) to bring us to a total of 87 languages.", "Table 5 also includes scores from a larger multilingual model (ERSATZ-WM) that was built with these 64 additional languages.", "Overall, we find very little change between these two settings.", "With en, we actually see some improvement in performance over the smaller multilingual model.", "Generally, there is no significant degradation of scores, implying this technique can generalize to additional languages.", "With our context construction method, we benefit from batching to decrease runtime, since the decision at each candidate point is dependent only on its immediate window.", "We benchmark our models as well as the baselines (Table 6).", "While our models are slower than some baselines, we find that increasing the size of the model does not dramatically increase the runtime.", "Additionally, the rate (in tokens per second) is roughly constant.", "As one of the earliest steps in NLP pipelines, sentence segmentation is an important task.", "However, it has to date not received proper experimental attention, with practitioners relying instead on ad hoc methods.", "It is a good time to correct this oversight, as NLP moves to the use of larger and larger corpora covering more and more languages.", "Even as the field moves towards processing text at
the paragraph or document level directly, it is likely that sentence processing will be with us for some time.", "We show here that a simple context-based model can produce state-of-the-art results with a modest hyperparameter search, trained on noisy annotations from imperfectly-segmented data.", "Together with a straightforward multilingual approach to identifying candidate split points and training on noisy segmented data, our single model performs well across a range of languages.", "More fundamentally, we have defined an experimental framework for benchmarking and future comparative work.", "Missing from our paper is an evaluation of the effect of these tools on downstream tasks.", "An obvious candidate for future work is to conduct this evaluation.", "It is possible that some tasks will not be affected by small differences among the best performing models, but this work at least sheds light on those differences.", "Another obvious direction is to look at approaches that would work for unpunctuated text (e.g., Wang et al. (2019)).", "This would expand the functionality of segmenters into other important areas, such as speech translation, and to languages, like Thai, that do not mark ends of sentences.", "The authors wish to thank Elizabeth Salesky, Carlos Aguirre, Jacob Bremerman and the anonymous reviewers for helpful technical discussions and feedback." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "result", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "result", "abstain", "result", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "Concept normalization, the task of linking textual mentions of concepts to concepts in an ontology, is challenging because ontologies are large.", "In most cases, annotated datasets cover only a small sample of the concepts, yet concept normalizers are expected to predict all concepts in the ontology.", "In this paper, we propose an architecture consisting of a candidate generator and a list-wise ranker based on BERT.", "The ranker considers pairings of concept mentions and candidate concepts, allowing it to make predictions for any concept, not just those seen during training.", "We further enhance this list-wise approach with a semantic type regularizer that allows the model to incorporate semantic type information from the ontology during training.", "Our proposed concept normalization framework achieves state-of-the-art performance on multiple datasets.", "Mining and analyzing the constantly-growing unstructured text in the bio-medical domain offers great opportunities to advance scientific discovery (Gonzalez et al., 2015; Fleuren and Alkema, 2015) and improve the clinical care (Rumshisky et al., 2016; Liu et al., 2019).", "However, lexical and grammatical variations are pervasive in such text, posing key challenges for data interoperability and the development of natural language processing (NLP) techniques.", "For instance, heart attack , MI , myocardial infarction , and cardiovascular stroke all refer to the same concept.", "It is critical to disambiguate these terms by linking them with their corresponding concepts in an ontology or knowledge base.", "Such linking allows downstream tasks (relation extraction, information retrieval, text classification, etc.) to access the ontology's rich knowledge about biomedical entities, their synonyms, semantic types and mutual relationships.", "Concept normalization is a task that maps concept mentions , the in-text natural-language mentions of ontological concepts, to concept entries in a standardized ontology or knowledge base.", "Techniques for concept normalization have been advancing, thanks in part to recent shared tasks including clinical disorder normalization in 2013 ShARe/CLEF (Suominen et al., 2013) and 2014 SemEval Task 7 Analysis of Clinical Text (Pradhan et al., 2014), and adverse drug event normalization in Social Media Mining for Health (SMM4H) (Sarker et al., 2018; Weissenbacher et al., 2019).", "Most existing systems use a string-matching or dictionary look-up approach (Leal et al., 2015; D'Souza and Ng, 2015; Lee et al., 2016), which are limited to matching morphologically similar terms, or supervised multi-class classifiers (Be-lousov et al., 2017; Tutubalina et al., 2018; Niu et al., 2019; Luo et al., 2019a), which may not generalize well when there are many concepts in the ontology and the concept types that must be predicted do not all appear in the training data.", "We propose an architecture (shown in Figure 1) that is able to consider both morphological and semantic information.", "We first apply a candidate generator to generate a list of candidate concepts, and then use a BERT-based list-wise classifier to rank the candidate concepts.", "This two-step architecture allows unlikely concept candidates to be filtered out prior to the final classification, a necessary step when dealing with ontologies with millions of concepts.", "In contrast to previous list-wise classifiers (Murty et al., 2018) which only take the concept mention as input, our BERT-based list-wise classifier takes both the concept mention and the 
candidate concept name as input, and is thus able to handle concepts that never appear in the training data.", "We further enhance this list-wise approach with a semantic type regularizer that allows our ranker to leverage semantic type information from the ontology during training (Figure 1 shows an example mention, "head spinning a little", with candidate concepts C0220870, C0012833, and C0018681).", "Our work makes the following contributions: Our proposed concept normalization framework achieves state-of-the-art performance on multiple datasets.", "We propose a concept normalization framework consisting of a candidate generator and a list-wise classifier.", "Our framework is easier to train and the list-wise classifier is able to predict concepts never seen during training.", "We introduce a semantic type regularizer which encourages the model to consider the semantic type information of the candidate concepts.", "This semantic type regularizer improves performance over the BERT-based list-wise classifier on multiple datasets.", "Traditional approaches for concept normalization involve string matching and dictionary look-up.", "These approaches differ in how they construct dictionaries, such as collecting concept mentions from the labeled data as extra synonyms (Leal et al., 2015; Lee et al., 2016), and in their string matching techniques, such as string overlap and edit distance (Kate, 2016).", "Two of the most commonly used knowledge-intensive concept normalization tools, MetaMap (Aronson, 2001) and cTAKES (Savova et al., 2010), both employ rules to first generate lexical variants for each noun phrase and then conduct dictionary look-up for each variant.", "Several systems (D'Souza and Ng, 2015; Jonnagaddala et al., 2016) have demonstrated that rule-based concept normalization systems achieve performance competitive with other approaches in a sieve-based approach that carefully selects combinations and orders of dictionaries, exact and partial matching, and heuristic rules.", "However, such rule-based approaches struggle when there are great variations between the concept mention and the concept, which is common, for example, when comparing social media text to medical ontologies.", "Due to the availability of shared tasks and annotated data, the field has shifted toward machine learning techniques.", "We divide the machine learning approaches into two categories, classification (Savova et al., 2008; Stevenson et al., 2009; Limsopatham and Collier, 2016; Yepes, 2017; Festag and Spreckelsen, 2017; Lee et al., 2017; Tutubalina et al., 2018; Niu et al., 2019) and learning to rank (Leaman et al., 2013; Liu and Xu, 2017; Li et al., 2017; Nguyen et al., 2018; Murty et al., 2018).", "Most classification-based approaches using deep neural networks have shown strong performance.", "They differ in using different architectures, such as Gated Recurrent Units (GRU) with attention mechanisms (Tutubalina et al., 2018), multi-task learning with auxiliary tasks to generate attention weights (Niu et al., 2019), or pre-trained transformer networks (Li et al., 2019; Miftahutdinov and Tutubalina, 2019); different sources for training word embeddings, such as Google News (Limsopatham and Collier, 2016) or concept definitions from the Unified Medical Language System (UMLS) Metathesaurus (Festag and Spreckelsen, 2017); and different input representations, such as using character embeddings (Niu et al., 2019).", "All classification approaches share the disadvantage that the output space must be the same size as the number of concepts to be predicted, and thus the output space tends to be small, such as the 2,200
concepts in (Limsopatham and Collier, 2016) and around 22,500 concepts in (Weissenbacher et al., 2019).", "Classification approaches also struggle with concepts that have only a few example mentions in the training data.", "Researchers have applied point-wise learning to rank (Liu and Xu, 2017; Li et al., 2017), pair-wise learning to rank (Leaman et al., 2013; Nguyen et al., 2018), and list-wise learning to rank (Murty et al., 2018; Ji et al., 2019) to concept normalization.", "Generally, the learning-to-rank approach has the advantage of reducing the output space by first obtaining a smaller list of possible candidate concepts via a candidate generator and then ranking them.", "DNorm (Leaman et al., 2013), based on a pair-wise learning-to-rank model where both mentions and concept names were represented as TF-IDF vectors, was the first to use learning-to-rank for concept normalization and achieved the best performance in the ShARe/CLEF eHealth 2013 shared task.", "List-wise learning-to-rank approaches are both computationally more efficient than pair-wise learning-to-rank (Cao et al., 2007) and empirically outperform both point-wise and pair-wise approaches (Xia et al., 2008).", "There are two implementations of list-wise classifiers using neural networks for concept normalization: Murty et al. (2018) treat the selection of the best candidate concept as a flat classification problem, losing the ability to handle concepts not seen during training; Ji et al. (2019) take a generate-and-rank approach similar to ours, but they do not leverage resources such as synonyms or semantic type information from UMLS in their BERT-based ranker.", "We define a concept mention $m$ as an abbreviation such as MI, a noun phrase such as heart attack, or even a short text such as an obstruction of the blood supply to the heart.", "The goal is then to assign $m$ a concept $c$.", "Formally, given a list of pre-identified concept mentions $M = \{m_1, m_2, ..., m_n\}$ in the text and an ontology or knowledge base with a set of concepts $C = \{c_1, c_2, ..., c_t\}$, the goal of concept normalization is to find a mapping function $c_j = f(m_i)$ that maps each textual mention to its correct concept.", "We approach concept normalization in two steps: we first use a candidate generator $G(m, C) \rightarrow C_m$ to generate a list of candidate concepts $C_m$ for each mention $m$, where $C_m \subset C$ and $|C_m| \ll |C|$.", "We then use a candidate ranker $R(m, C_m) \rightarrow C'_m$, where $C'_m$ is a re-ranked list of candidate concepts sorted by their relevance, preference, or importance.", "But unlike information retrieval tasks where the order of candidate concepts in the sorted list $C'_m$ is crucial, in concept normalization we care only that the one true concept is at the top of the list.", "The main idea of the two-step approach is that we first use a simple and fast system with high recall to generate candidates, and then a more precise system with more discriminative input to rank the candidates.", "We implement two kinds of candidate generators: a BERT-based multi-class classifier when the number of concepts in the ontology is small, and a Lucene-based (https://lucene.apache.org/) dictionary look-up when there are hundreds of thousands of concepts in the ontology.", "BERT (Devlin et al., 2019) is a contextualized word representation model that has shown great performance in many NLP tasks.", "Here, we use BERT in a multi-class text-classification configuration as our candidate concept generator.",
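The two-step formulation can be summarized in code as follows; this is a schematic of ours, with names and types that are illustrative rather than taken from the released implementation.

```python
from typing import Callable, List

Mention = str
Concept = str

# G(m, C) -> C_m: a fast, high-recall generator with |C_m| << |C|
CandidateGenerator = Callable[[Mention, List[Concept]], List[Concept]]
# R(m, C_m) -> C'_m: a slower, more precise re-ranker of the short list
CandidateRanker = Callable[[Mention, List[Concept]], List[Concept]]

def normalize(mention: Mention, ontology: List[Concept],
              generate: CandidateGenerator, rank: CandidateRanker,
              k: int = 10) -> Concept:
    """Two-step concept normalization: generate candidates, re-rank them,
    and return the top concept (only the top of the list matters)."""
    candidates = generate(mention, ontology)[:k]
    return rank(mention, candidates)[0]
```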
"We use the final hidden vector $V_m \in \mathbb{R}^H$ corresponding to the first input token ([CLS]) generated from BERT($m$) and a classification layer with weights $W \in \mathbb{R}^{|C| \times H}$, and train the model using a standard classification loss: $$\mathcal{L}_G = -y \log(\mathrm{softmax}(V_m W^T)) \quad (1)$$ where $y$ is a one-hot vector, and $|y| = |C|$.", "The score for all concepts is calculated as: $$p(C) = \mathrm{softmax}(V_m W^T) \quad (2)$$ We select the top $k$ most probable concepts in $p(C)$ and feed that list $C_m$ to the ranker.", "Multi-pass sieve rule-based systems (D'Souza and Ng, 2015; Jonnagaddala et al., 2016; Luo et al., 2019b) achieve competitive performance when used with the right combinations and orders of different dictionaries, exact and partial matching, and heuristic rules.", "Such systems rely on basic lexical matching algorithms and are simple and fast to implement, but they are only able to generate candidate concepts which are morphologically similar to a given mention.", "Inspired by the work of Luo et al. (2019b), we implement a Lucene-based sieve normalization system which consists of the following components (see Appendix A.1 for details):", "a. A Lucene index over the training data finds all mentions that exactly match $m$.", "b. A Lucene index over the ontology finds concepts whose preferred name exactly matches $m$.", "c. A Lucene index over the ontology finds concepts where at least one synonym of the concept exactly matches $m$.", "d. A Lucene index over the ontology finds concepts where at least one synonym of the concept has high character overlap with $m$.", "After the candidate generator produces a list of concepts, we use a BERT-based list-wise classifier to select the most likely candidate.", "BERT allows us to match morphologically dissimilar (but semantically similar) mentions and concepts, and the list-wise classifier takes both the mention and candidate concepts as input, allowing us to handle concepts that appear infrequently (or never) in the training data.", "Here, we use BERT in a configuration similar to question answering, where given a concept mention $m$, the task is to choose the most likely candidate concept $c_m$ from all candidate concepts $C_m$.",
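A PyTorch sketch of the BERT-based generator in Eqs. (1)-(2), assuming a Hugging Face-style encoder; the class and variable names are ours.

```python
import torch
import torch.nn as nn

class BertCandidateGenerator(nn.Module):
    """Multi-class classifier over all |C| concepts that returns the
    top-k candidates C_m for the ranker."""
    def __init__(self, encoder, hidden_size: int, num_concepts: int):
        super().__init__()
        self.encoder = encoder                         # e.g., a BioBERT model
        self.linear = nn.Linear(hidden_size, num_concepts, bias=False)

    def forward(self, input_ids, attention_mask, k: int = 10):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        v_m = out.last_hidden_state[:, 0]              # [CLS] vector V_m
        logits = self.linear(v_m)                      # V_m W^T, Eq. (1)
        p = torch.softmax(logits, dim=-1)              # p(C), Eq. (2)
        return p.topk(k, dim=-1)                       # top-k scores and indices
```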
"As shown in Figure 1, our classifier input includes the text of the mention $m$ and all synonyms of the candidate concept $c_m$, and takes the form [CLS] $m$ [SEP] $syn_1(c_m)$ [SEP] ... [SEP] $syn_s(c_m)$ [SEP], where $syn_i(c_m)$ is the $i$-th synonym of concept $c_m$.", "We calculate the final hidden vector $V_{(m,c_m)} \in \mathbb{R}^H$ corresponding to the first input token ([CLS]) generated from BERT for each such input, and then concatenate the hidden vectors of all candidate concepts to form a matrix $V_{(m,C_m)} \in \mathbb{R}^{|C_m| \times H}$.", "We use this matrix and classification layer weights $W \in \mathbb{R}^H$, and compute a standard classification loss: $$\mathcal{L}_R = -y \log(\mathrm{softmax}(V_{(m,C_m)} W^T)) \quad (3)$$ where $y$ is a one-hot vector, and $|y| = |C_m|$.", "To encourage the list-wise classifier towards a more informative ranking than just getting the correct concept at the top of the list, we propose a semantic type regularizer that is optimized when candidate concepts with the correct semantic type are ranked above candidate concepts with incorrect types.", "The semantic type of the candidate concept is assumed correct only if it exactly matches the semantic type of the gold truth concept.", "If the concept has multiple semantic types, all must match.", "Our semantic type regularizer consists of two components: $$R_p(y_t, y_p) = \sum_{p \in P(y)} (m_1 + y_p - y_t) \quad (4)$$ $$R_n(y_p, y_n) = \sum_{p \in P(y)} \max_{n \in N(y)} (m_2 + y_n - y_p) \quad (5)$$ where $y = V_{(m,C_m)} W^T$, $N(y)$ is the set of indexes of candidate concepts with incorrect semantic types (negative candidates), $P(y)$ (positive candidates) is the complement of $N(y)$, and $y_t$ is the score of the gold truth candidate concept, so $t \in P(y)$.", "The margins $m_1$ and $m_2$ are hyper-parameters for controlling the minimal distances between $y_t$ and $y_p$ and between $y_p$ and $y_n$, respectively.", "Intuitively, $R_p$ tries to push the score of the gold truth concept above all positive candidates by at least $m_1$, and $R_n$ tries to push the best-scored negative candidate below all positive candidates by $m_2$.", "The final loss function we optimize for the BERT-based list-wise classifier is: $$\mathcal{L} = \mathcal{L}_R + \alpha R_p(y_t, y_p) + \beta R_n(y_p, y_n) \quad (6)$$ where $\alpha$ and $\beta$ are hyper-parameters to control the tradeoff between the standard classification loss and the semantic type regularizer.",
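A direct PyTorch transcription of Eqs. (4)-(6) for a single mention; the margin defaults and the tensor layout are illustrative assumptions on our part.

```python
import torch

def semantic_type_regularizer(scores, gold_idx, pos_mask, m1=0.1, m2=0.2):
    """scores: (|C_m|,) candidate scores y = V_(m,C_m) W^T for one mention.
    gold_idx: index t of the gold concept (so pos_mask[gold_idx] is True).
    pos_mask: True where a candidate's semantic types match the gold's."""
    y_t = scores[gold_idx]
    y_pos = scores[pos_mask]                    # positive candidates P(y)
    y_neg = scores[~pos_mask]                   # negative candidates N(y)
    r_p = (m1 + y_pos - y_t).sum()              # Eq. (4)
    if y_neg.numel() > 0:
        r_n = (m2 + y_neg.max() - y_pos).sum()  # Eq. (5)
    else:
        r_n = scores.new_zeros(())
    return r_p, r_n

# Eq. (6): total loss for the list-wise ranker
# loss = cross_entropy + alpha * r_p + beta * r_n
```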
mentions from the original test set as evaluation.", "MCN The MCN dataset consists of 13,609 concept mentions drawn from 100 discharge summaries from the fourth i2b2/VA shared task (Uzuner et al., 2011).", "The mentions are mapped to 3792 unique concepts out of 434,056 possible concepts with 125 semantic types in SNOMED-CT and RxNorm.", "We take 40 clinical notes from the released data as training, consisting of 5,334 mentions, and the standard evaluation data with 6,925 mentions as our test set.", "Around 2.7% of mentions in MCN could not be mapped to any 4 http://dx.doi.org/10.17632/rxwfb3tysd.", "A major difference between the datasets is the space of concepts that systems must consider.", "For AskAPatient and TwADR-L, all concepts in the test data are also in the training data, and in both cases only a couple thousand concepts have to be considered.", "Both SMM4H-17 and MCN define a much larger concept space: SMM4H-17 considers 22,500 concepts (though only 513 appear in the data) and MCN considers 434,056 (though only 3,792 appear in the data).", "AskAPatient and TwADR-L have no unseen concepts in their test data, SMM4H-17 has a few (43), while MCN has a huge number (2,256).", "Even a classifier that perfectly learned all concepts in the training data could achieve only 70.15% accuracy on MCN.", "MCN also has more unseen mentions: 53.9%, where the other datasets have less than 40%.", "The MCN dataset is thus harder to memorize, as systems must consider many mentions and concepts never seen in training.", "Unlike the clinical MCN dataset, in the three social media datasets AskAPatient, TwADR-L, and SMM4H-17 it is common for the ADR expressions to share no words with their target medical concepts.", "For instance, the ADR expression makes me like a zombie is assigned the concept C1443060 with preferred term feeling abnor-mal.", "The social media datasets do not include context, only the mentions themselves, while the MCN dataset provides the entire note surrounding each mention.", "Since only 4.5% of mentions in the MCN dataset are ambiguous, for the current experiments we ignore this additional context information.", "from nearly 200 different vocabularies such as SNOMED-CT, MedDRA, RxNorm, etc.", "There are over 3.5 million concepts in UMLS, and for each concept, UMLS also provides the definition, preferred term, synonyms, semantic type, relationships with other concepts, etc.", "In our experiments, we make use of synonyms and semantic type information from UMLS.", "We restrict our concepts to the three vocabularies, MedDRA, SNOMED-CT, and RxNorm in the UMLS version 2017AB.", "For each concept in the ontologies of the four datasets, we first find its concept unique identifier (CUI) in UMLS.", "We then extract synonyms and semantic type information according to the CUI.", "Synonyms (English only) are collected from level 0 terminologies containing vocabulary sources for which no additional license agreements are necessary.", "For all four datasets, the standard evaluation of concept normalization systems is accuracy.", "For the AskAPatient and TwADR-L datasets, which use 10-fold cross validation, the accuracy metrics are averaged over 10 folds.", "We use the BERT-based multi-class classifier as the candidate generator on the three social media datasets AskAPatient, TwADR-L, and SMM4H-17, and the Lucene-based candidate generator for the MCN dataset.", "In the social media datasets, the number of concepts in the data is small, few test concepts are unseen in the training data, and there 
is a greater need to match expressions that are morphologically dissimilar from medical concepts.", "In the clinical MCN dataset, the opposite is true in each case.", "For all experiments, we use BioBERT-base (Lee et al., 2019), which further pre-trains BERT on PubMed abstracts (PubMed) and PubMed Central full-text articles (PMC).", "We use Hugging Face's PyTorch implementation of BERT.", "We select the best hyper-parameters based on performance on the dev set.", "See Appendix A.2 for hyperparameter settings.", "We compare our proposed architecture with the following state-of-the-art systems.", "WordCNN Limsopatham and Collier (2016) use convolutional neural networks over pre-trained word embeddings to generate a vector representation for each mention, and then feed these into a softmax layer for multi-class classification.", "WordGRU+Attend+TF-IDF Tutubalina et al. (2018) use a bidirectional GRU with attention over pre-trained word embeddings to generate a vector representation for each mention, concatenate such vector representations with the cosine similarities of the TF-IDF vectors between the mention and all other concept names, and then feed the concatenated vector to a softmax layer for multi-class classification.", "BERT+TF-IDF Miftahutdinov and Tutubalina (2019) take a similar approach to Tutubalina et al. (2018), but use BERT to generate a vector representation for each mention.", "They concatenate the vector representations with the cosine similarities of the TF-IDF vectors between the mention and all other concept names, and then feed the concatenated vector to a softmax layer for multi-class classification.", "CharCNN+Attend+MT Niu et al. (2019) use a multi-task attentional character-level convolutional neural network.", "They first convert the mention into a character embedding matrix.", "The auxiliary task network takes the embedding matrix as input for a CNN to learn to generate character-level domain-related importance weights.", "Such learned importance weights are concatenated with the character embedding matrix and fed as input to another CNN model with a softmax layer for multi-class classification.", "CharLSTM+WordLSTM Han et al. (2017) first use a forward LSTM over each character of the mention and its corresponding character class, such as lowercase or uppercase, to generate a character-level vector representation, then use another bidirectional LSTM over each word of the mention to generate a word-level representation.", "They concatenate the character-level and word-level representations and feed them as input to a softmax layer for multi-class classification.", "LR+MeanEmbedding Belousov et al.
(2017) calculate the mean of three different weighted word embeddings pre-trained on GoogleNews, Twitter, and DrugTwitter as vector representations for the mention, where word weights are calculated as inverse document frequency.
Approach | TwADR-L Dev | TwADR-L Test | AskAPatient Dev | AskAPatient Test | SMM4H-17 Dev | SMM4H-17 Test
WordCNN (Limsopatham and Collier, 2016) | - | 44.78 | - | 81.41 | - | -
WordGRU+Attend+TF-IDF (Tutubalina et al., 2018) | - | - | - | 85.71 | - | -
BERT+TF-IDF (Miftahutdinov and Tutubalina, 2019) | - | - | - | - | - | 89.64
CharCNN+Attend+MT (Niu et al., 2019) | - | 46.46 | - | 84.65 | - | -
CharLSTM+WordLSTM (Han et al., 2017) | - | - | - | - | - | 87.20
LR+MeanEmbedding (Belousov et al., 2017) | - | - | - | - | - | 87.70
BERT | 47.08 | 44.05 | 88.63 | 87.52 | 84.74 | 87.36
BERT + BERT-rank | 48.07 | 46.32 | 88.14 | 87.10 | 84.44 | 87.66
BERT + BERT-rank + ST-reg | 47.98 | 47.02 | 88.26 | 87.46 | 84.66 | 88.24
BERT + gold + BERT-rank | 52.70 | 49.69 | 89.06 | 87.92 | 88.57 | 90.16
BERT + gold + BERT-rank + ST-reg | 52.84 | 50.81 | 89.68 | 88.51 | 88.87 | 91.08
Table 2: Comparison of our proposed concept normalization architecture against the current state-of-the-art performance on the TwADR-L, AskAPatient, and SMM4H-17 datasets.", "Such vector representations are fed as input to a multinomial logistic regression (LR) model for multi-class classification.", "Sieve-based Luo et al. (2019b) build a sieve-based normalization model which contains exact-match and MetaMap (Aronson, 2001) modules.", "Given a mention as input, the exact-match module first looks for mentions in the training data that exactly match the input, and then looks for concepts from the ontology whose synonyms exactly match the input.", "If no concepts are found, the mention is fed into MetaMap.", "They run this sieve-based normalization model twice.", "In the first round, the model lower-cases the mentions and includes acronym/abbreviation tokens during dictionary lookup.", "In the second round, the model lower-cases the mention spans and also removes special tokens such as 's, quotation marks, etc.", "Since our focus is individual systems, not ensembles, we compare only to other non-ensembles.", "(An ensemble of three systems, including CharLSTM+WordLSTM and LR+MeanEmbedding, achieved 88.7% accuracy on the SMM4H-17 dataset (Sarker et al., 2018).)", "We separate out the different contributions from the following components of our architecture.", "Lucene The Lucene-based dictionary look-up.", "When used alone, we take the top-ranked candidate concept as the prediction.", "+ST-reg The semantic type regularizer, always used in combination with the BERT-ranker.", "We also consider the case (+gold) where we artificially inject the correct concept into the candidate generator's list if it was not already there.", "Table 2 shows that our complete model, BERT + BERT-rank + ST-reg, achieves a new state of the art on two of the social media test sets, and Table 3 shows that Lucene + BERT-rank + ST-reg achieves a new state of the art on the clinical MCN test set.", "The TwADR-L dataset is the most difficult, with our complete model achieving 47.02% accuracy.", "In the other datasets, performance of our complete model is much higher: 87.46% for AskAPatient and 88.24% for SMM4H-17.", "(Miftahutdinov and Tutubalina (2019) use the same architecture as our BERT-based multi-class classifier (row 7 of Table 2), but report 89.28% accuracy on SMM4H-17; we were unable to replicate this result as their code and parameter settings were unavailable.)", "On the TwADR-L, SMM4H-17, and MCN test sets, adding the BERT-based ranker improves performance over the candidate generator alone, and adding the semantic type regularization further improves performance.", "For example, Lucene alone achieves 79.25% accuracy on the MCN data, adding the BERT ranker increases this to 82.75%, and adding the semantic type regularizer increases this to 83.56%.", "On AskAPatient, performance of the
full model is similar to just the BERT multi-class classifier, perhaps because in this case BERT alone already successfully improves the state of the art from 85.71% to 87.52%.", "The +gold setting allows us to answer how well our ranker would perform if our candidate generator made no mistakes.", "First, we can see that if the correct concept is always in the candidate list, our list-based ranker (+BERT-rank) outperforms the multi-class classifier (BERT) on all test sets.", "We also see in this setting that the benefits of the semantic type regularizer are amplified, with the TwADR-L and MCN test sets showing more than a 1.00% gain in accuracy from using the regularizer.", "These findings suggest that improving the quality of the candidate generator should be a fruitful future direction.", "Overall, we see the biggest performance gains from our proposed generate-and-rank architecture on the MCN dataset.", "This is the most realistic setting, where the number of candidate concepts is large and many test concepts were never seen during training.", "In such cases, we cannot use a multi-class classifier as a candidate generator since it would never generate unseen concepts.", "Thus, our ranker shines in its ability to sort through the long list of possible concepts.", "Table 4 shows an example that is impossible for the multi-class classifier approach to concept normalization.", "The concept mention "an abdominal wall hernia" in the clinical MCN dataset needs to be mapped to the concept with the preferred name "Hernia of abdominal wall", but that concept never appeared in the training data.", "The Lucene-based candidate generator finds this concept, but only through partial character matching (sieve d.), and several other concepts have high overlap as well.", "Thus Lucene ranks the correct concept 4th in its list.", "The BERT ranker is able to compare "an abdominal wall hernia" to "Hernia of abdominal wall" and recognize that as a better match than the other options, re-assigning it to rank 1.", "Table 5 shows an example that illustrates why the semantic type regularizer helps.", "The mention "felt like I was coming down with flu" in the social media AskAPatient dataset needs to be mapped to the concept with the preferred name "influenza-like symptoms", which has the semantic type of a sign or symptom.", "The BERT ranker ranks two disease-or-syndrome concepts higher, placing the correct concept at rank 3.", "After the semantic type regularizer is added, the system recognizes that the mention should be mapped to a sign or symptom, and correctly ranks it above the disease-or-syndrome concepts.", "Note that this happens even though the ranker does not get to see the semantic type of the input mention at prediction time.", "The available concept normalization datasets are somewhat limited.", "Lee et al.
(2017) notes that AskAPatient and TwADR-L have issues including duplicate instances, which can lead to bias in the system; many phrases have multiple valid mappings to concepts but the context necessary to disambiguate is not part of the dataset; and the 10-fold cross-validation makes training complex models unnecessarily expensive.", "These datasets are also unrealistic in that all concepts in the test data are seen during training.", "Future research should focus on more realistic datasets that follow the approach of MCN in annotating mentions of concepts from a large ontology and including the full context.", "Our ability to explore the size of the candidate list was limited by our available computational resources.", "As the size of the candidate list increases, the true concept is more likely to be included, but the number of training instances also increases, making the computational cost larger, especially for the datasets using 10-fold cross-validation.", "We chose candidate list sizes as large as we could afford, but there are likely further gains possible with larger candidate lists.", "Our semantic type regularizer is limited to exact matching: it checks only whether the semantic type of a candidate exactly matches the semantic type of the true concept.", "The UMLS ontology includes many other relations, such as is-a and part-of relations, and extending our regularizer to encode such rich semantic knowledge may yield further improvements in the BERT-based ranker.", "We propose a concept normalization framework consisting of a candidate generator and a list-wise classifier based on BERT.", "Because the candidate ranker makes predictions over pairs of concept mentions and candidate concepts, it is able to predict concepts never seen during training.", "Our proposed semantic type regularizer allows the ranker to incorporate semantic type information into its predictions without requiring semantic types at prediction time.", "This generate-and-rank framework achieves state-of-the-art performance on multiple concept normalization datasets.", "We thank the anonymous reviewers for their insightful comments on an earlier draft of this paper.", "This work was supported in part by National Institutes of Health grant R01LM012918 from the National Library of Medicine (NLM) and grant R01GM114355 from the National Institute of General Medical Sciences (NIGMS).", "The computations were done in systems supported by the National Science Foundation under Grant No. 1228509.", "This research was supported in part by an appointment to the Oak Ridge National Laboratory Advanced Short-Term Research Opportunity (ASTRO) Program, sponsored by the U.S. Department of Energy and administered by the Oak Ridge Institute for Science and Education.", "The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health, National Science Foundation, or Department of Energy." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "objective", "objective", "abstain", "abstain", "abstain", "objective", "objective", "result", "method", "objective", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "other", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "abstain", "objective", "abstain", "other", "other", "other", "other", "other" ]
[ "Recent studies have shown that multilingual pretrained language models can be effectively improved with cross-lingual alignment information from Wikipedia entities.", "However, existing methods only exploit entity information in pretraining and do not explicitly use entities in downstream tasks.", "In this study, we explore the effectiveness of leveraging entity representations for downstream cross-lingual tasks.", "We train a multilingual language model with 24 languages with entity representations and show the model consistently outperforms word-based pretrained models in various cross-lingual transfer tasks.", "We also analyze the model and the key insight is that incorporating entity representations into the input allows us to extract more language-agnostic features.", "We also evaluate the model with a multilingual cloze prompt task with the mLAMA dataset.", "We show that entity-based prompt elicits correct factual knowledge more likely than using only word representations.", "Our source code and pretrained models are available at https: //github.com/studio-ousia/luke .", "Pretrained language models have become crucial for achieving state-of-the-art performance in mod-ern natural language processing.", "In particular, multilingual language models (Conneau and Lample, 2019; Conneau et al., 2020a; Doddapaneni et al., 2021) have attracted considerable attention particularly due to their utility in cross-lingual transfer.", "In zero-shot cross-lingual transfer, a pretrained encoder is fine-tuned in a single resource-rich language (typically English), and then evaluated on other languages never seen during fine-tuning.", "A key to solving cross-lingual transfer tasks is to obtain representations that generalize well across languages.", "Several studies aim to improve multilingual models with cross-lingual supervision such as Work done as an intern at Studio Ousia.", "or parallel sentences (Conneau and Lample, 2019).", "Another source of such information is the cross-lingual mappings of Wikipedia entities (articles).", "Wikipedia entities are aligned across languages via inter-language links and the text contains numerous entity annotations (hyperlinks).", "With these data, models can learn cross-lingual correspondence such as the words Tokyo (English) and (Japanese) refers to the same entity.", "Wikipedia entity annotations have been shown to provide rich cross-lingual alignment information to improve multilingual language models (Calixto et al., 2021; Jiang et al., 2022).", "However, previous studies only incorporate entity information through an auxiliary loss function during pretraining, and the models do not explicitly have entity representations used for downstream tasks.", "In this study, we investigate the effectiveness of entity representations in multilingual language models.", "Entity representations are known to enhance language models in mono-lingual settings (Zhang et al., 2019; Peters et al., 2019; Wang et al., 2021; Xiong et al., 2020; Yamada et al., 2020) presumably by introducing real-world knowledge.", "We show that using entity representations facilitates cross-lingual transfer by providing language-independent features.", "To this end, we present a multilingual extension of LUKE (Yamada et al., 2020).", "The model is trained with the multilingual masked language modeling (MLM) task as well as the masked entity prediction (MEP) task with Wikipedia entity embeddings.", "We investigate two ways of using the entity representations in cross-lingual transfer tasks: (1) 
perform entity linking for the input text, and append the detected entity tokens to the input sequence.", "The entity tokens are expected to provide language-independent features to the model.", "We evaluate this approach with cross-lingual question answering (QA) datasets: XQuAD (Artetxe et al., 2020) and MLQA (Lewis et al., 2020); (2) use the entity [MASK] token from the MEP task as a language-independent feature extractor.", "In the MEP task, word tokens in a mention span are associated with an entity [MASK] token, the contextualized representation of which is used to train the model to predict its original identity.", "Here, we apply similar input formulations to tasks involving mention-span classification, relation extraction (RE) and named entity recognition (NER): the attribute of a mention or a pair of mentions is predicted using their contextualized entity [MASK] feature.", "We evaluate this approach with the RELX (Kksal and zgr, 2020) and CoNLL NER (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) datasets.", "The experimental results show that these entity-based approaches consistently outperform word-based baselines.", "Our analysis reveals that entity representations provide more language-agnostic features to solve the downstream tasks.", "We also explore solving a multilingual zero-shot cloze prompt task (Liu et al., 2021) with the entity [MASK] token.", "Recent studies have shown that we can address various downstream tasks by querying a language model for blanks in prompts (Petroni et al., 2019; Cui et al., 2021).", "Typically, the answer tokens are predicted from the model's word-piece vocabulary but here we incorporate the prediction from the entity vocabulary queried by the entity [MASK] token.", "We evaluate our approach with the mLAMA dataset (Kassner et al., 2021) in various languages and show that using the entity [MASK] token reduces language bias and elicits correct factual knowledge more likely than using only the word [MASK] token.", "To evaluate the effectiveness of entity representations for cross-lingual downstream tasks, we introduce a new multilingual language model based on a bidirectional transformer encoder: Multilingual LUKE (mLUKE), a multilingual extension of LUKE (Yamada et al., 2020).", "The model is trained with the masked language modeling (MLM) task (Vaswani et al., 2017) as well as the masked entity prediction (MEP) task.", "In MEP, some of the input entity tokens are randomly masked with the special entity [MASK] token, and the model is trained to predict the original entities.", "Note that the entity [MASK] token is different from the word [MASK] token for MLM.", "The model takes as input a tokenized text ( w 1 , w 2 , ..., w m ) and the entities appearing in the text ( e 1 , e 2 , ..., e n ), and compute the contextualized representation for each token ( h w 1 , h w 2 , ..., h w m and h e 1 , h e 2 , ..., h e n ).", "The word and entity tokens equally undergo self-attention computation ( i.e. , no entity-aware self-attention in Yamada et al. 
"The word and entity embeddings are computed as the summation of the following three embeddings: token embeddings, type embeddings, and position embeddings (Devlin et al., 2019).", "The entity tokens are associated with the word tokens through position embeddings: the position of an entity token is defined as the positions of its corresponding word tokens, and the entity position embeddings are summed over those positions.", "Model Configuration. The model configurations of mLUKE follow the base and large configurations of XLM-RoBERTa (Conneau et al., 2020a), a variant of BERT (Devlin et al., 2019) trained with CommonCrawl data from 100 languages.", "Before pretraining, the parameters in common (e.g., the weights of the transformer encoder and the word embeddings) are initialized using the corresponding checkpoint from the Transformers library.", "The size of the entity embeddings is set to 256, and they are projected to the size of the word embeddings before being fed into the encoder.", "We use Wikipedia dumps in 24 languages (Appendix A) as the training data.", "These languages are selected to cover a reasonable number of the languages that appear in downstream cross-lingual datasets.", "We generate input sequences by splitting the content of each page into sequences of sentences comprising 512 words with their entity annotations (i.e., hyperlinks).", "During training, data are sampled from each language (with $n_i$ items) according to the following multinomial distribution: $$p_i = \frac{n_i^\alpha}{\sum_{k=1}^{N} n_k^\alpha}, \quad (1)$$ where $\alpha$ is a smoothing parameter, set to 0.7 following multilingual BERT.", "Entity Vocabulary. Entities used in mLUKE are defined as Wikipedia articles.", "The articles from different languages are aligned through inter-language links, and the aligned articles are treated as a single entity.", "We include in the vocabulary the most frequent 1.2M entities (in terms of the number of hyperlinks) that appear in at least three languages, to facilitate cross-lingual learning.", "Optimization. We optimize the models with a batch size of 2048 for 1M steps in total using AdamW (Loshchilov and Hutter, 2019) with warmup and linear decay of the learning rate.", "To stabilize training, we perform pretraining in two stages: (1) in the first 500K steps, we update only those parameters that are randomly initialized (e.g., entity embeddings); (2) we update all parameters in the remaining 500K steps.", "The learning rate scheduler is reset at each training stage.", "For further details on hyperparameters, see Appendix A.
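Eq. (1) above is straightforward to implement; a small sketch with α = 0.7 as in the text:

```python
import numpy as np

def language_sampling_probs(counts, alpha=0.7):
    """Exponentially smoothed multinomial over languages (Eq. 1):
    p_i = n_i^alpha / sum_k n_k^alpha. Smaller alpha up-weights
    low-resource languages relative to their raw frequency."""
    n = np.asarray(counts, dtype=np.float64) ** alpha
    return n / n.sum()

# e.g., language_sampling_probs([1_000_000, 10_000])
# -> the small language gets far more than its ~1% raw share
```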
2.3 Baseline Models We compare the primary model that we investigate, multilingual LUKE used with entity representations (mLUKE-E), against several baseline pretrained models and an ablation model based on word representations: mBERT (Devlin et al., 2019) is one of the earliest multilingual language models.", "XLM-R (Conneau et al., 2020a) is the model that mLUKE is initialized from; comparing against it shows how the extra pretraining and entity information impact the performance.", "Since earlier studies (Liu et al., 2019; Lan et al., 2020) indicated that longer pretraining would simply improve performance, we train another model based on XLM-R base with extra MLM pretraining following the same configuration as mLUKE.", "mLUKE-W is an ablation model of mLUKE-E.", "This model discards the entity embeddings learned during pretraining and only takes word tokens as input, as with the other baseline models.", "The results from this model indicate the effect of MEP only as an auxiliary task in pretraining, and the comparison with this model will highlight the effect of using entity representations for downstream tasks in mLUKE-E.", "The above models are fine-tuned with the same hyperparameter search space and computational budget, as described in Appendix B. We also present the results of XLM-K (Jiang et al., 2022) for ease of reference.", "XLM-K is based on XLM-R base and trained with entity information from Wikipedia but does not use entity representations in downstream tasks.", "Notice that their results are not strictly comparable to ours, because the pretraining and fine-tuning settings are different.", "We evaluate the approach of adding entity embeddings to the input of mLUKE-E with cross-lingual extractive QA tasks.", "The task is, given a question and a context passage, to extract the answer span from the context.", "The entity embeddings provide language-agnostic features and thus should facilitate cross-lingual transfer learning.", "Datasets. We fine-tune the pretrained models with the SQuAD 1.1 dataset (Rajpurkar et al., 2016), and evaluate them with two multilingual datasets: XQuAD (Artetxe et al., 2020) and MLQA (Lewis et al., 2020).", "XQuAD is created by translating a subset of the SQuAD development set, while the source of MLQA is natural text in Wikipedia.", "Besides multiple monolingual evaluation data splits, MLQA also offers data to evaluate generalized cross-lingual transfer (G-XLT), where the question and context texts are in different languages.", "Models. All QA models used in this experiment follow Devlin et al. (2019).",
"The model takes the question and context word tokens as input and predicts a score for each span of the context word tokens.", "The span with the highest score is predicted as the answer to the question.", "mLUKE-E takes entity tokens as additional features in the input (Figure 1) to enrich the word representations.", "The entities are automatically detected using heuristic string matching based on the original Wikipedia article from which the dataset instance is created.", "See Appendix C for more details.", "Results. Table 1 summarizes the models' F1 scores for each language.", "First, we discuss the base models.", "On the effectiveness of entity representations, mLUKE-E base performs better than its word-based counterpart mLUKE-W base (0.6 points improvement in the XQuAD average score, 0.1 points in MLQA) and XLM-K (0.2 points improvement in MLQA), which indicates that the input entity tokens provide useful features that facilitate cross-lingual transfer.", "The usefulness of entities is demonstrated especially in MLQA's G-XLT setting (full results available in Appendix F); mLUKE-E base exhibits a substantial 1.6 point improvement in the G-XLT average score over mLUKE-W base.", "This suggests that entity representations are beneficial in a challenging situation where the model needs to capture language-agnostic semantics from text segments in different languages.", "We also observe that XLM-R base benefits from extra training (0.4 points improvement in the average score on XQuAD and 2.1 points in MLQA).", "The mLUKE-W base model further improves the average score over XLM-R base with extra training (1.2 points improvement in XQuAD and 2.1 points in MLQA), showing the effectiveness of the MEP task for cross-lingual QA.", "Comparing the large models, we still observe substantial improvements from XLM-R large to the mLUKE models.", "We can also see that mLUKE-E large overall provides better results than mLUKE-W large (0.4 and 0.3 points improvements in the MLQA average and G-XLT scores; comparable scores in XQuAD), confirming the effectiveness of entity representations.", "How do the entity representations help the model in cross-lingual transfer?", "In the mLUKE-E model, the input entity tokens annotate the mention spans on which the model performs prediction.", "We hypothesize that this allows the encoder to inject language-agnostic entity knowledge into span representations, which helps better align representations across languages.", "To support this hypothesis, we compare the degree of alignment between span representations before and after adding entity embeddings to the input, i.e., mLUKE-W and mLUKE-E.",
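For reference, the span-scoring QA head described in the Models paragraph above (following Devlin et al. (2019)) can be sketched as follows; the valid-span masking details and the single-example layout are our assumptions.

```python
import torch
import torch.nn as nn

class SpanQAHead(nn.Module):
    """Start/end scoring over the context tokens of a single example; a
    span's score is its start logit plus its end logit, and the
    highest-scoring valid span is returned as the answer."""
    def __init__(self, hidden_size, max_answer_len=30):
        super().__init__()
        self.qa_outputs = nn.Linear(hidden_size, 2)
        self.max_answer_len = max_answer_len

    def forward(self, token_states):              # (seq_len, hidden)
        start, end = self.qa_outputs(token_states).unbind(dim=-1)
        scores = start[:, None] + end[None, :]    # score(i, j) for span i..j
        valid = torch.triu(torch.ones_like(scores, dtype=torch.bool))
        valid &= ~torch.triu(torch.ones_like(scores, dtype=torch.bool),
                             diagonal=self.max_answer_len)
        scores = scores.masked_fill(~valid, float("-inf"))
        best = scores.flatten().argmax().item()
        return divmod(best, scores.size(1))       # (start_idx, end_idx)
```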
"Task. We quantify the degree of alignment as performance on the contextualized word retrieval (CWR) task (Cao et al., 2020).", "The task is, given a word within a sentence in the query language, to find the word with the same meaning in context from a candidate pool in the target language.", "Dataset. We use the MLQA dev set (Lewis et al., 2020).", "As MLQA is constructed from parallel sentences mined from Wikipedia, some sentences and answer spans are aligned, and thus the dataset can be easily adapted for the CWR task.", "As the query and target words, we use the answer spans annotated in the dataset, which are also parallel across the languages.", "We use the English dataset as the query language and the other languages as the targets.", "We discard query instances that do not have parallel data in the target language.", "The candidate pool is all answer spans in the target language data.", "Models. We evaluate the mLUKE-W base and mLUKE-E base models without fine-tuning.", "The retrieval is performed by ranking the cosine similarity of contextualized span representations, which are computed by mean-pooling the output word vectors in the span.", "Results. Table 2 shows the retrieval performance in terms of the mean reciprocal rank score.", "We observe that the scores of mLUKE-E base are higher than those of mLUKE-W base across all the languages.", "This demonstrates that adding entities improves the degree of alignment of span representations, which may explain the improvement of mLUKE-E in the cross-lingual QA task.", "In this section, we evaluate the approach of using the entity [MASK] token to extract features from mLUKE-E for two entity-related tasks: relation extraction and named entity recognition.", "We formulate both tasks as the classification of mention spans.", "The baseline models extract the span features as the contextualized representations of word tokens, while mLUKE-E extracts the features as the contextualized representations of the special language-independent entity tokens associated with the mentions (Figure 1).", "We demonstrate that this approach consistently improves performance in cross-lingual transfer.", "Relation Extraction (RE) is the task of determining the correct relation between the two (head and tail) entities in a sentence.", "Adding entity type features has been shown to be effective for cross-lingual transfer in RE (Subburathinam et al., 2019; Ahmad et al., 2021), but here we investigate an approach that does not require predefined entity types and instead utilizes special entity embeddings learned in pretraining.", "Datasets. We fine-tune the models with the English KBP-37 dataset (Zhang and Wang, 2015) and evaluate the models with the RELX dataset (Köksal and Özgür, 2020), which is created by translating a subset of 502 sentences from KBP-37's test set into four different languages.", "Following Köksal and Özgür (2020), we report the macro average of the F1 scores of the 18 relations.", "Models. In the input text, the head and tail entities are surrounded with special markers (<ent>, <ent2>).", "The baseline models extract the feature vector for each entity as the contextualized vector of the first marker that precedes its mention.", "The two entity features are concatenated and fed into a linear classifier to predict their relation.", "For mLUKE-E, we introduce two special entities, [HEAD] and [TAIL], to represent the head and tail entities (Yamada et al., 2020).", "Their embeddings are initialized with the entity [MASK] embedding.", "They are added to the input sequence, associated with the corresponding entity mentions in the input, and their contextualized representations are extracted as the feature vectors.", "As with the word-based models, the features are concatenated and input to a linear classifier.",
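The head/tail feature extraction and classification just described can be sketched as follows; shapes and names are ours, and the positions index either the <ent>/<ent2> markers (baseline) or the [HEAD]/[TAIL] entity tokens (mLUKE-E).

```python
import torch
import torch.nn as nn

class RelationClassifier(nn.Module):
    """Concatenate head and tail span features and classify the relation."""
    def __init__(self, hidden_size, num_relations):
        super().__init__()
        self.classifier = nn.Linear(2 * hidden_size, num_relations)

    def forward(self, hidden_states, head_pos, tail_pos):
        # hidden_states: (batch, seq_len, hidden). For the word baseline,
        # head_pos/tail_pos index the <ent>/<ent2> markers; for mLUKE-E they
        # index the [HEAD]/[TAIL] entity tokens appended to the input.
        idx = torch.arange(hidden_states.size(0))
        head = hidden_states[idx, head_pos]   # (batch, hidden)
        tail = hidden_states[idx, tail_pos]
        return self.classifier(torch.cat([head, tail], dim=-1))
```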
embedding.", "They are added to the input sequence being associated with the entity mentions in the input, and their contextualized representations are extracted as the feature vectors.", "As with the word-based models, the features are concatenated and input to a linear classifier.", "Named Entity Recognition (NER) is the task to detect entities in a sentence and classify their type.", "We use the CoNLL-2003 English dataset (Tjong Kim Sang and De Meulder, 2003) as the training data, and evaluate the models with the CoNLL-2003 German dataset and the CoNLL-2002 Spanish and Dutch dataset (Tjong Kim Sang, 2002).", "Models.", "We adopt the model of Sohrab and Miwa (2018) as the baseline model, which enumerates all possible spans in a sentence and classifies them into the target entity types or non-entity type.", "In this experiment, we enumerate spans with at most 16 tokens.", "For the baseline models, the span features are computed as the concatenation of the word representations of the first and last tokens.", "The span features are fed into a linear classifier to predict their entity type.", "The input of mLUKE-E contains the entity [MASK] tokens associated with all possible spans.", "The span features are computed as the contextualized representations of the entity [MASK] tokens.", "The features are input to a linear classifier as with the word-based models.", "The results are shown in Table 3.", "The mLUKE-E models outperform their word-based counterparts mLUKE-W in the average score in all the comparable settings (the base and large settings; the RE and NER tasks), which shows entity-based features are useful in cross-lingual tasks.", "We also observe that XLM-R base benefits from extra training (1.8 average points improvement in RE and 0.3 points in NER), but mLUKE-E still outperforms the results.", "The performance gain of mLUKE-E over mLUKE-W can be partly explained as the entity [MASK]", "token extracts better features for predicting entity attributes because it resembles how mLUKE is pretrained with the MEP task.", "We hypothesize that there exists another factor for the improvement in cross-lingual performance: language neutrality of representations.", "The entity [MASK] token is shared across languages and their contextualized representations may be less affected by the difference of input languages, resulting in features that generalize well for cross-lingual transfer.", "To find out if the entity-based features are actually more language-independent than word-based features, we evaluate the modularity (Fujinuma et al., 2019) of the features extracted for the RELX dataset.", "Modularity is computed for the k -nearest neighbor graph of embeddings and measures the degree to which embeddings tend to form clusters within the same language.", "We refer readers to Fujinuma et al. 
(2019) for how to compute the metric.", "Note that the maximum value of modularity is 1, and 0 means the embeddings are distributed completely at random regardless of language.", "We compare the modularity of the word features from mLUKE-W base and the entity features from mLUKE-E base before fine-tuning.", "Note that the features here are concatenated vectors of head and tail features.", "Table 4 shows that the modularity of mLUKE-E base is much lower than that of mLUKE-W base, demonstrating that entity-based features are more language-neutral.", "However, even with entity-based features, the modularities are still greater than zero.", "In particular, the modularity computed with Turkish, which is the most distant language from English here, is significantly higher than the others, indicating that the contextualized entity-based features are still somewhat language-dependent.", "In this section, we show that using the entity representations is effective in a cloze prompt task (Liu et al., 2021) with the mLAMA dataset (Kassner et al., 2021).", "The task is, given a cloze template such as [X] was born in [Y] with [X] filled with an entity (e.g., Mozart), to predict a correct entity in [Y] (e.g., Austria).", "We adopt the typed querying setting (Kassner et al., 2021), where a template has a set of candidate answer entities and the prediction becomes the one with the highest score assigned by the language model.", "Model.", "As in Kassner et al. (2021), the word-based baseline models compute the candidate score as the log-probability from the MLM classifier.", "When a candidate entity in [Y] is tokenized into multiple tokens, the same number of word [MASK] tokens is placed in the input sequence, and the score is computed by taking the average of the log-probabilities for its individual tokens.", "On the other hand, mLUKE-E computes the log-probability of the candidate entity in [Y] with the entity [MASK] token.", "Each candidate entity is associated with an entity in mLUKE's entity vocabulary via string matching.", "The input sequence has the entity [MASK] token associated with the word [MASK] tokens in [Y], and the candidate score is computed as the log-probability from the MEP classifier.", "We also try appending the entity token of [X] to the input sequence if the entity is found in the vocabulary.", "For a fair comparison between word-based and entity-based prediction, we restrict the candidate entities to the ones found in the entity vocabulary and exclude questions whose answers are not included in the candidates (results with the full candidates and questions in the dataset are in Appendix G).", "Results.", "We experiment in total with 16 languages which are available both in the mLAMA dataset and in mLUKE's entity vocabulary.", "Here we only present the top-1 accuracy results for 9 languages in Table 5, as we can make similar observations with the other languages.", "We observe that XLM-R base performs notably worse than mBERT, as mentioned in Kassner et al. (2021).", "However, with extra training on the Wikipedia corpus, XLM-R base shows a significant 9.3-point improvement in the average score and outperforms mBERT (27.8 vs. 
27.2).", "We conjecture that this shows the importance of the training corpus for this task.", "The original XLM-R is only trained with the CommonCrawl corpus (Conneau et al., 2020a), text scraped from a wide variety of web pages, while mBERT and XLM-R + training are trained on Wikipedia.", "The performance gaps indicate that Wikipedia is particularly useful for the model to learn factual knowledge.", "The mLUKE-W base model lags behind XLM-R base + extra training by 1.7 average points but we can see 5.4 points improvement from XLM-R base + extra training to mLUKE-E base ( [Y] ), indicating entity representations are more suitable to elicit correct factual knowledge from mLUKE than word representations.", "Adding the entity corresponding to [X] to the input (mLUKE-E base ( [X] & [Y] )) further pushes the performance by 11.7 points to 44.9 %, which further demonstrates the effectiveness of entity representations.", "Analysis of Language Bias.", "Kassner et al. (2021) notes that the prediction of mBERT is biased by the input language.", "For example, when queried in Italian ( e.g. , [X] e stato creato in [MASK] . ), the model tends to predict entities that often appear in Italian text ( e.g., Italy ) for any question to answer en ja fr mBERT The Bahamas, 41% (355/870) Japan, 82% (361/439) Pays-Bas, 71% (632/895) XLM-R base London, 78% (664/850) Japan, 99% (437/440) Allemagne, 96% (877/916) + extra training Australia, 27% (247/899) Japan, 99% (437/442) Allemagne, 93% (854/917) mLUKE-W base Germany, 22% (198/895) Japan, 97% (428/442) Allemagne, 99% (906/918) mLUKE-E base ( [Y] ) London, 37% (310/846) Japan, 56% (241/430) Sude, 40% (362/908) mLUKE-E base ( [X] & [Y] ) London, 27% (213/797) Japan, 44% (176/401) Sude, 30% (266/895) Table 6: The top incorrect predictions in three languages for the template [X] was founded in [Y] . for each model.", "location.", "We expect that using entity representations would reduce language bias because entities are shared among languages and less affected by the frequency in the language of questions.", "We qualitatively assess the degree of language bias in the models looking at their incorrect predictions.", "We show the top incorrect prediction for the template [X] was founded in [Y] . for each model in Table 6, together with the top1 incorrect ratio , that is, the ratio of the number of the most common incorrect prediction to the total false predictions, which indicates how much the false predictions are dominated by few frequent entities.", "The examples show that the different models exhibit bias towards different entities as in English and French, although in Japanese the model consistently tends to predict Japan .", "Looking at the degree of language bias, mLUKE-E base ( [X] & [Y] ) exhibits lower top1 incorrect ratios overall (27% in fr, 44% in ja, and 30% in fr), which indicates using entity representations reduces language bias.", "However, lower language bias does not necessarily mean better performance: in French (fr), mLUKE-E base ( [X] & [Y] ) gives a lower top1 incorrect ratio than mBERT (30% vs. 
71%) but their numbers of total false predictions are the same (895).", "Language bias is only one of several factors in the performance bottleneck.", "Multilingual pretrained language models have recently seen a surge of interest due to their effectiveness in cross-lingual transfer learning (Conneau and Lample, 2019; Liu et al., 2020).", "A straightforward way to train such models is multilingual masked language modeling (mMLM) (Devlin et al., 2019; Conneau et al., 2020a), i.e., training a single model with a collection of monolingual corpora in multiple languages.", "Although models trained with mMLM exhibit a strong cross-lingual ability without any cross-lingual supervision (K et al., 2020; Conneau et al., 2020b), several studies aim to develop better multilingual models with explicit cross-lingual supervision such as bilingual word dictionaries (Conneau et al., 2020b) or parallel sentences (Conneau and Lample, 2019).", "In this study, we build a multilingual pretrained language model on the basis of XLM-RoBERTa (Conneau et al., 2020a), trained with mMLM as well as masked entity prediction (MEP) (Yamada et al., 2020) using entity representations.", "Language models trained on a large corpus contain knowledge about real-world entities, which is useful for entity-related downstream tasks such as relation classification, named entity recognition, and question answering.", "Previous studies have shown that we can improve language models for such tasks by incorporating entity information into the model (Zhang et al., 2019; Peters et al., 2019; Wang et al., 2021; Xiong et al., 2020; Févry et al., 2020; Yamada et al., 2020).", "When incorporated into multilingual language models, entity information can bring another benefit: entities may serve as anchors for the model to align representations across languages.", "Multilingual knowledge bases such as Wikipedia often offer mappings between different surface forms across languages for the same entity.", "Calixto et al. (2021) fine-tuned the top two layers of multilingual BERT by predicting language-agnostic entity IDs from hyperlinks in Wikipedia articles.", "In work concurrent with ours, Jiang et al. (2022) trained a model based on XLM-RoBERTa with an entity prediction task along with an object entailment prediction task.", "While the previous studies focus on improving cross-lingual language representations by pretraining with entity information, our work investigates a multilingual model that is not only pretrained with entities but also explicitly equipped with entity representations, and how to extract better features from such a model.", "Alexis Conneau and Guillaume Lample.", "2019.", "Cross-lingual Language Model Pretraining.", "Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov.", "2020b.", "Emerging Cross-lingual Structure in Pretrained Language Models.", "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.", "Leyang Cui, Yu Wu, Jian Liu, Sen Yang, and Yue Zhang.", "2021.", "Template-Based Named Entity Recognition Using BART.", "In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021.", "Sumanth Doddapaneni, Gowtham Ramesh, Anoop Kunchukuttan, Pratyush Kumar, and Mitesh M. Khapra.", "2021.", "A Primer on Pretrained Multilingual Language Models.", "ArXiv, abs/2107.00676.", "Yoshinari Fujinuma, Jordan Boyd-Graber, and Michael J. 
Paul.", "2019.", "A Resource-Free Evaluation Metric for Cross-Lingual Word Embeddings Based on Graph Modularity.", "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics .", "Xiaoze Jiang, Yaobo Liang, Weizhu Chen, and Nan Duan.", "2022.", "XLM-K: Improving Cross-Lingual Language Model Pre-Training with Multilingual Knowledge.", "In Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence .", "Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth.", "2020.", "Cross-Lingual Ability of Multilingual BERT: An Empirical Study.", "In International Conference on Learning Representations .", "Nora Kassner, Philipp Dufter, and Hinrich Schtze.", "2021.", "Multilingual LAMA: Investigating Knowledge in Multilingual Pretrained Language Models.", "In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume .", "Abdullatif Kksal and Arzucan zgr.", "2020.", "The RELX Dataset and Matching the Multilingual Blanks for Cross-Lingual Relation Classification.", "In Findings of the Association for Computational Linguistics: EMNLP 2020 .", "We investigated the effectiveness of entity representations in multilingual language models.", "Our pretrained model, mLUKE, not only exhibits strong empirical results with the word inputs (mLUKE-W) but also shows even better performance with the entity representations (mLUKE-E) in cross-lingual transfer tasks.", "We also show that a cloze-prompt-style fact completion task can effectively be solved with the query and answer space in the entity vocabulary.", "Our results suggest a promising direction to pursue further on how to leverage entity representations in multilingual tasks.", "Also, in the current model, entities are represented as individual vectors, which may incur a large memory footprint in practice.", "One can investigate an efficient way of having entity representations." ]
[ "abstain", "abstain", "objective", "result", "method", "method", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "method", "abstain", "objective", "abstain", "method", "abstain", "method", "method", "abstain", "objective", "abstain", "result", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "method", "other", "abstain", "other", "other", "other", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "result", "abstain", "abstain" ]
[ "This work investigates the use of interactively updated label suggestions to improve upon the efficiency of gathering annotations on the task of opinion mining in German Covid-19 social media data.", "We develop guidelines to conduct a controlled annotation study with social science students and find that suggestions from a model trained on a small, expert-annotated dataset already lead to a substantial improvement in terms of inter-annotator agreement (+.14", "Fleiss' ) and annotation quality compared to students that do not receive any label suggestions.", "We further find that label suggestions from interactively trained models do not lead to an improvement over suggestions from a static model.", "Nonetheless, our analysis of suggestion bias shows that annotators remain capable of reflecting upon the suggested label in general.", "Finally, we confirm the quality of the annotated data in transfer learning experiments between different annotator groups.", "To facilitate further research in opinion mining on social media data, we release our collected data consisting of 200 expert and 2,785 student annotations.", "1 1 Introduction The impact analysis of major events like the Covid-19 pandemic is fundamental to research in social sciences.", "To enable more socially sensitive public decision making, researchers need to reliably monitor how various social groups (e.g., political actors, news media, citizens) communicate about political decisions (Jungherr, 2015).", "The increasing use of social media especially allows social science researchers to conduct opinion analysis on a larger scale than with traditional methods, e.g. 1 Code and data can be found on GitHub: https://github.com/UKPLab/acl2021-label-suggestions-german-covid19 interviews or questionnaires.", "However, the publication of research results is often delayed or temporally transient due to limitations of traditional social science research, i.e. 
prolonged data gathering processes or opinion surveys being subject to reactivity.", "Given the increasing performance of language models trained on large amounts of data in a self-supervised manner (Devlin et al., 2019; Brown et al., 2020), one fundamental question that arises is how NLP systems can contribute to alleviating existing difficulties in studies for the digital humanities and social sciences (Risch et al., 2019).", "One important approach to making data annotation more efficient is the use of automated label suggestions.", "In contrast to active learning, which aims to identify a subset of annotated data that leads to optimal model training, label suggestions ease the annotation process by providing annotators with pre-annotations (i.e., predictions) from a model (Ringger et al., 2008; Schulz et al., 2019).", "To enable the annotation of large amounts of data which are used for quantitative analysis by disciplines such as the social sciences, label suggestions are a more viable solution than active learning.", "One major difficulty with label suggestions is the danger of biasing annotators towards (possibly erroneous) suggestions.", "So far, researchers have investigated automated label suggestions for tasks that require domain-specific knowledge (Fort and Sagot, 2010; Yimam et al., 2013; Schulz et al., 2019) and have shown that domain experts successfully identify erroneous suggestions and are more robust to potential biases.", "However, the limited availability of such expert annotators restricts the use of label suggestions to small, focused annotation studies.", "For tasks that do not require domain-specific knowledge and can be conducted with non-expert annotators such as crowd workers or citizen science volunteers on a large scale, label suggestions have not been considered yet.", "This leads to two open questions.", "First, whether non-expert annotators who do not receive any training besides annotation guidelines benefit from label suggestions at all.", "Second, whether existing biases are amplified, especially when including interactively updated suggestions, which have been shown to be advantageous over static ones (Klie et al., 2020).", "We tackle these challenges by conducting a comparative annotation study with social science students, using a recent state-of-the-art model to generate label suggestions (Devlin et al., 2019).", "Our results show that a small set of expert-labeled data is sufficient to improve annotation quality for non-expert annotators.", "In contrast to Schulz et al. (2019), we show that although interactive and non-interactive label suggestions substantially improve the agreement, we do not observe significant differences between both approaches.", "We further confirm this observation with experiments using models trained on (and transferred to) individual annotator groups.", "Our contributions are: C1: An evaluation of label suggestions in terms of annotation quality for non-expert annotators.", "C2: An investigation of label suggestion bias for both static and interactively updated suggestions.", "C3: A novel corpus of German Twitter posts that can be used by social science researchers to study the effects of governmental measures against Covid-19 on public opinion.", "Finally, we also publish 200 expert and 2,785 individual student annotations of our dataset to facilitate further research in this direction.", "Label suggestions.", "In an early work, Rehbein et al. 
(2009) study the effects of label suggestions on the task of word sense disambiguation and observe a positive effect on annotation quality.", "With the introduction of annotation tools such as brat (Stenetorp et al., 2012), WebAnno (Yimam et al., 2013), or INCEpTION (Klie et al., 2018), the use of label suggestions became more feasible, leading to an increased investigation of label suggestions in the context of NLP.", "For instance, Yimam et al. (2014) investigate label suggestions for Amharic POS tagging and German named entity recognition and show with expert annotators that label suggestions significantly reduce the annotation time.", "Other works further investigate interactively updated label suggestions and come to a similar conclusion (Klie et al., 2020).", "Label suggestions have also been shown to be effective in non-NLP annotation tasks that require domain-specific knowledge, such as medical (Lingren et al., 2014) or educational (Schulz et al., 2019) use cases.", "Bias.", "Annotations from untrained human annotators may introduce biases that are conveyed to machine learning models (Gururangan et al., 2018).", "One possible source of bias is the different decision-making process triggered by label suggestions: first deciding if the suggested label is correct and, only if not, considering different labels (Turner and Schley, 2016).", "Hence, the key question that arises is to what extent annotators are influenced by such suggestions.", "Although Fort and Sagot (2010) identify an influence on annotation behaviour when providing pre-annotated data for POS tagging, they do not measure any clear bias in the annotated labels.", "Rosset et al. (2013) come to a similar conclusion when investigating the bias introduced by label suggestions in a cross-domain setup, i.e., when using label suggestions from a model that is trained on data from a different domain than the annotated data.", "They conduct their experiments with eight annotators of varying levels of expertise and report considerable annotation performance gains while not finding considerable biases introduced by label suggestions.", "Most similar to our work is the setup from Schulz et al. 
(2019).", "The authors investigate interactive label suggestions for expert annotators across two domains and study the effects of using existing and newly annotated data for training different suggestion models.", "They compare personalised user models against a universal model which has access to all annotated data and show that the latter provides suggestions with a higher acceptance rate.", "This seems less surprising due to the substantially larger training set.", "Further, they do not identify any bias introduced by pre-annotating data.", "Whereas existing work reports no measurable bias for expert annotators (Fort and Sagot, 2010; Lingren et al., 2014; Schulz et al., 2019), it remains unclear for annotators who have no prior experience in similar annotation tasks; especially for scenarios where besides annotation guidelines no further training is provided.", "However, the use of novice annotators is common for sce-0 1 10 100 1.000 10.000 2019 12 01 2020 01 01 2020 02 01 2020 03 01 2020 04 01 N u m be r o f t w ee t s Figure 1: Number of tweets per day collected from December 2019 to April 2020.", "narios where no linguistic or domain expertise is required.", "Hence, we present a first case-study for the use of interactive label suggestions with nonexpert annotators.", "Furthermore, we find that recent state-of-the-art models such as BERT (Devlin et al., 2019) can provide high-quality label suggestions with already little training data and hence, are important for interactive label suggestions in non-expert annotation tasks.", "Our task is inspired by social science research on analyzing public opinion using social media (Jungherr, 2015; McCormick et al., 2017).", "The goal is to identify opinions in German-speaking countries about governmental measures established to contain the spread of the Corona virus.", "We use Twitter due to its international and widespread usage that ensures a sufficient database and the several challenges for the automatic identification of opinions and stance it poses from an NLP perspective (Imran et al., 2016; Mohammad et al., 2016; Gorrell et al., 2019; Conforti et al., 2020).", "For example, the use of language varies from colloquial expressions to well-formed arguments and news-spreading statements due to its heterogeneous user base.", "Additionally, hashtags are used directly as part of text but also to embed the tweet itself in the broader discussion on the platform.", "Finally, the classification of a tweet is particularly challenging given the character limitation of the platform, i.e., at the date of writing Twitter allows for 280 characters per tweet.", "Data collection.", "Initially, we collected tweets from December 2019 to the end of April 2020.", "Using a manually chosen set of search queries (corona', pandemie', covid', socialdistance'), we made use of the Twitter Streaming API and gathered only those tweets which were classified as German by the Twitter language identifier.", "This resulted in a set of approximately 16.5 million tweets.", "We retained only tweets that contain key terms referring to measures related to the Covid-19 pandemic and removed all duplicates, retweets and all tweets with text length less than 30 characters.", "After filtering, 237,616 tweets remained and their daily temporal distribution is visualized in Figure", "1. 
"We sample uniformly at random from the remaining tweets for all subsequent annotation tasks.", "Annotation scheme.", "We developed annotation guidelines together with three German-speaking researchers from the social sciences and iteratively refined them in three successive rounds.", "Our goal from a social science perspective is to analyze the public perception of measures taken by the government.", "Therefore, the resulting dataset should help in (1) identifying tweets relevant to governmental measures and, if relevant, (2) detecting what stance is expressed.", "We follow recent works on stance detection and Twitter data (Hanselowski et al., 2018; Baly et al., 2018; Conforti et al., 2020) and use four distinct categories for our annotation.", "They are defined as follows: Unrelated: no measures related to the containment of the pandemic are mentioned; Comment: measures are mentioned, but not assessed or neutral; Support: measures are assessed positively; Refute: measures are assessed negatively. The four-label annotation scheme allows us to distinguish texts that are related to the pandemic but do not talk about measures (i.e., unrelated).", "Our goal is to study the effects of interactively updated and static label suggestions in non-expert annotation scenarios.", "Non-experts such as crowd workers or student volunteers have no prior experience in annotating comparable tasks and only receive annotation guidelines for preparation.", "Our secondary goal is to collect a novel dataset that can be used by social science researchers to study the public perception of governmental measures against Covid-19 (we provide additional information about data collection in Appendix A and discuss ethical concerns regarding the use of Twitter data after the conclusion).", "To train a model that provides label suggestions to our non-expert annotators, we first collect a small set of 200 expert-annotated instances.", "We then split our non-expert annotators into three different groups that receive (G1) no label suggestions, (G2) suggestions from a model trained on expert annotations, and (G3) suggestions from a model that is retrained interactively using both expert-annotated and interactively annotated data.", "The expert annotations were provided by the researchers (three social science researchers and one NLP researcher) who created the annotation guidelines and are proficient in solving the task.", "In total, 200 tweets were sampled uniformly at random and annotated by all four experts.", "The inter-annotator agreement (IAA) across all 200 tweets lies at 0.54 Fleiss' κ (moderate agreement) and is comparable to previously reported annotation scores in the field of opinion and argument mining (Bar-Haim et al., 2020; Schaefer and Stede, 2020; Boltužić and Šnajder, 2014).", "Overall, in more than 50% of the tweets all four experts selected the same label (respectively, in 75% of the tweets at least three experts selected the same label).", "The disagreement on the remaining 25% of the tweets furthermore shows the increased difficulty of our task due to ambiguities in the data source, e.g., ironic statements or differentiating governmental measures from non-governmental ones like home office.", "To compile gold standard labels for instances that the experts disagreed upon, we apply MACE (Hovy et al., 2013) using a threshold of 1.0.", "The resulting labels were then re-evaluated by the experts and agreed upon.", "The annotations were conducted with a group of 21 German-speaking university students.", "To ensure a basic level of comparability for our student 
annotators, we recruited all volunteers from the same social science course at the same university.", "The annotators received no further training apart from the annotation guidelines.", "We randomly assigned them to three different groups (G1, G2, and G3), each consisting of seven students.", "To investigate the effects of interactive label suggestions, we defined different annotation setups for each group.", "The annotations were split into two rounds.", "In each round of annotation, students were provided with 100 tweets consisting of 70 new tweets and 30 quality control tweets from the expert-labeled data, which are used to compare individual groups.", "Across both rounds, we thus obtain a total of 140 unique annotated tweets per student and use 60 tweets for evaluation.", "The annotation setup of each group, including the individual data splits, is visualized in Figure 2.", "No label suggestions (G1).", "The first group serves as a control group and receives no label suggestions.", "Static label suggestions (G2).", "The second group only receives label suggestions based on a model which was trained using the 200 expert-labeled instances described in Section 4.1.", "Note that the control instances were distributed uniformly at random within a round to mitigate any interdependency effects between different tweets.", "Interactive label suggestions (G3).", "The last group of students receives expert label suggestions in the first round and interactively updated label suggestions in the second round.", "In contrast to existing work (Schulz et al., 2019), this setup allows us to directly quantify effects of bias amplification that may occur with interactive label suggestions.", "System setup.", "We conduct our annotation experiments using INCEpTION (Klie et al., 2018), which allows us to integrate label suggestions using recommendation models.", "To obtain label suggestions, we use a German version of BERT (Ger-BERT) that is available through the HuggingFace library (Wolf et al., 2020) at https://deepset.ai/german-bert.", "We perform a random hyperparameter search (cf. Appendix B.3) and train the model on the expert-annotated data for 10 epochs with a learning rate of 8e-5 and a batch size of 8.", "We select the model that performed best in terms of F1-score on a held-out stratified test set (20% of the data) across ten runs with different random seeds.", "All experiments were conducted on a desktop machine with a 6-core 3.8 GHz CPU and a GeForce RTX 2060 GPU (8GB).", "Model comparison.", "To assess the label suggestion quality of our model, we report the predictive performance on the expert-labeled dataset (setup as described above) in Table 1.", "We compare our model with baselines which have been used in related work (Schulz et al., 2019; Klie et al., 2020) for label suggestions; we adapted the respective architectures to our setup.", "As expected, Ger-BERT achieves superior performance, and the results are promising for using label suggestions.", "Interactive training routine.", "To remedy the cold-start problem, G3 receives label suggestions from the model trained only on the expert-annotated data in round 1.", "Afterwards, we retrain the model with an increasing number of instances, using both the expert annotations and the G3 data of individual students from round 1.", 
"To avoid unnecessary waiting times for our annotators due to the additional training routine, we always collect batches of 10 instances before re-training our model.", "We then repeatedly train individual models for each student in G3 with an increasing amount of data of up to 70 instances.", "The 30 expert-annotated quality control tweets were excluded in this step to avoid conflicting labels and duplicated data.", "Table 2 shows the overall statistics of our resulting corpus consisting of 200 expert and 2,785 student-annotated German tweets.", "Note that we removed the 60 expert-annotated instances that we included for annotation quality control for each student, resulting in 140 annotated tweets per student.", "Outliers.", "A fine-grained analysis of annotation time is not possible due to online annotations at home.", "However, one student in G3 spent, on average, less than a second on each annotation and accepted almost all suggested labels.", "This student's annotations were removed from the final dataset and deemed faulty, considering the short amount of time spent on this task in comparison to the minimum of seven seconds per tweet and annotation for all other students.", "To assess the overall quality of our collected student annotations, we investigate annotator consistency in terms of inter-annotator agreement (IAA) as well as annotator accuracy on our quality assurance instances.", "Table 3 shows Fleiss' κ (Fleiss, 1971) and the accuracy computed for the quality control instances that were consistent across all groups.", "In general, we observe a similar or higher agreement for our students compared to the expert annotations (κ = 0.54), showing that the guidelines were able to convey the task well.", "We also find that groups that receive label suggestions (G2 and G3) achieve a substantially larger IAA than G1.", "Most interestingly, we observe a substantial increase in IAA for both G2 and G3 in the second annotation round, whereas the IAA in G1 remains stable.", "Note that using all previously annotated data of G3 would impair the comparability between individual students, as the data was collected asynchronously to allow students to pick their best-suited timeslot.", "Further, a synchronization step between users would impair the applicability of the approach.", "Analyzing our models' predictions shows that the suggested labels for the 60 quality control samples mostly conform with the label given by the experts (97% for G2 and 94% for G3).", "Therefore, annotators are inclined to accept the label suggested by the model.", "We can further confirm this observation when investigating the number of instances that the students labeled correctly (accuracy).", "The highest accuracy is observed for the group that received the highest-quality suggestions (G2).", "Furthermore, both groups that received label suggestions (G2, G3) show increased accuracy over the control group (G1).", "In general, for both rounds the accuracy remains similarly high across all groups (.02 difference) with only a slight decrease 
(.04) for G1.", "Hence, we conjecture that the resulting annotations provide satisfying quality given the challenging task and annotator proficiency.", "One major challenge in using label suggestions is known in psychology as the anchoring effect (Tversky and Kahneman, 1974; Turner and Schley, 2016).", "It describes the concept that annotators who are provided a label suggestion follow a different decision process compared to a group that does not receive any suggestions and tend to accept the suggestions.", "As we observe larger IAA and accuracy for groups receiving label suggestions, we look at the label suggestion acceptance rate and at which suggested labels were corrected.", "Figure 3: The number of rejected label suggestions, i.e., label suggestion corrections for G2 (suggested label → corrected label): Refute → Unrelated 10, Comment 25, Support 16; Support → Unrelated 50, Comment 51, Refute 27; Comment → Unrelated 99, Support 46, Refute 42; Unrelated → Comment 14, Support 6, Refute 2.", "Acceptance rate.", "One way to quantify possible biases is to evaluate if annotators tend to accept more suggestions with an increasing number of instances (Schulz et al., 2019).", "This may be the case when annotators increasingly trust a model that provides consistently good suggestions.", "Consequently, with increasing trust towards the model's predictions, non-expert annotators may tend to accept more model errors.", "To investigate if annotators remain capable of reflecting on each instance and its label suggestion, we compute the average acceptance rate for G2 and G3 in both rounds.", "We find that for both groups the acceptance rate remains stable (G2: 73% and 72%, G3: 68% and 69%) and conclude that annotators receiving high-quality label suggestions remain critical while producing more consistent results.", "Label corrections.", "To further evaluate if students are vulnerable to erroneous label suggestions from a model, we specifically investigate labels that have been corrected.", "Figure 3 shows our results for G2.", "As can be seen, the most notable number of label corrections was made by students for unrelated tweets that were classified as comments by the model.", "Additionally, we find a large number of corrections that have been made with respect to the stance of the presented tweet.", "We will discuss both types of corrections in the following.", "Unrelated tweets.", "The label suggestion model makes the most errors for unrelated tweets (i.e., tweets that are corrected as Unrelated) by misclassifying them as Comment (99).", "In contrast, instances that are identified as Unrelated tweets are only seldom corrected.", "This indicates an increased focus on recall at the expense of precision for related tweets, most likely due to Comment being the largest class in the training data (see Table 2, expert data).", "We find possible causes for such wrong predictions when we look at examples where Comment was suggested for Unrelated instances:", "Example 1: The corona virus also requires special protective measures for Naomi Campbell.", "The top model wears a protective suit during a trip.", "Example 2: Extraordinary times call for extraordinary measures: the Elbschlosskeller now has a functioning door lock.", "#Hamburg #Corona #COVID-19", "Note that analyzing G3 shows similar observations (cf. 
Appendix C).", "9 Note that we present translations of the original German texts for better readability and to protect user privacy Clearly, these examples are fairly easy to annotate for humans but are difficult to predict for a model due to specific cue words being mentioned, e.g., measures .", "Similar results have also been reported in previous work (Hanselowski et al., 2018; Conforti et al., 2020).", "Stance.", "In Figure 3, we can also see that the model makes mistakes regarding the stance of a tweet.", "Especially, 101 Support suggestions have been corrected as either being unrelated or neutral and 88 Comment suggestions have been corrected to either Support or Refute .", "For the second case, we often discover tweets that implicitly indicate the stance for example, by complaining about people ignoring the measures: Example 3: Small tweet aside from XR: Colleague drags himself into the office this morn-ing with flu symptoms ( OD) The other colleagues still have to convince him to please go home immediately.", "Only then does he show some understanding.", "Unbelievable.", "#COVID #SocialDistancing Such examples demonstrate the difficulty of the task and seem to be difficult to recognize for the model.", "However, given the large amount of label corrections, the non-expert annotators seem to be less susceptible to accept such model errors.", "The high number of label corrections for specific types of tweets shows that our annotators of G2 remained critical towards the suggested label.", "With interactively updated suggestions however, this may not be the case.", "Especially annotators that accept erroneous suggestions may lead to reinforcing a model in its prediction; hence, leading to amplifying biases.", "Diverging suggestions.", "To study such effects, we first identify if the interactively updated models express a difference in terms of predictions compared to the static model.", "In Figure 4 we can observe that with already 40 instances (Iteration 140), the number of differently predicted instances is ten or higher across all personalized models.", "This divergence is highly correlated with the number of changes a student provides (see Figure 5).", "We thus can conclude that the interactively trained models are able to adapt to the individual annotations for each annotator.", "Comparison to G2.", "Figure 6 shows the average number of accepted suggestions for G2 and G3 as well as the upper and lower quartiles, respectively.", "The vertical line separates the first and the second round of annotations.", "We find that especially in the first round of annotations, both groups have a very similar acceptance rate of suggested labels.", "Only with interactively updated suggestions we find an increasing divergence in G3 with respect to the upper and lower quartiles.", "Individual acceptance rate.", "To assess the impact of interactive label suggestions, we further investigate how many suggestions were accepted by each annotator.", "Figure 5 shows the number of accepted label suggestions for each student in G3 in the second round of annotations.", "Although we observe that the average number of accepted label suggestions remains constant across G2 and G3, we can see substantial differences between individual students.", "For instance, we can observe that for s21 , the increased model adaptivity leads e x p e r t g 1 g 2 g 3 g3 g2 g1 expert 55.93 44.14 49.31 47.96 52.38 41.48 48.22 44.88 40.38 39.49 40.13 40.64 52.15 38.38 45.86 44.10 Transfer Learning Experiments using expert and student 
to an overall decrease in the number of accepted labels.", "Moreover, s24, who received predictions that diverged less from those of the static model, accepted the most suggestions in the second round.", "This shows that interactive label suggestions do not necessarily lead to a larger acceptance rate that could amplify biases; instead, the effect varies for each annotator and needs to be investigated in future work.", "Finally, we investigate how well models trained on different annotator groups transfer to each other.", "We hence conduct transfer learning experiments for which we remove the quality control instances in our student groups and train a separate Ger-BERT model using the same hyperparameters as for the expert model.", "We use 80% of the data for training and the remaining 20% to identify the best model, which we then transfer to another group.", "Figure 7 shows the macro-F1 scores averaged across ten independent runs; diagonal entries are the scores on the 20% held-out data.", "Most notably, models trained on the groups with label suggestions (G2, G3) do in fact perform comparably or better on the expert-labeled data and outperform models trained on the group not receiving any suggestions (G1).", "The higher cross-group performance for models trained on groups that received label suggestions shows that the label suggestions successfully conveyed knowledge from the expert-annotated data to our students.", "In this work, we analysed the usefulness of providing label suggestions for untrained annotators to identify opinions in a challenging text domain (i.e., Twitter).", "We generated suggestions using expert-labeled training data as well as by interactively training models on data annotated by untrained students.", "Our results show that label suggestions from a state-of-the-art sentence classification model trained on a small set of expert annotations help improve annotation quality for untrained annotators.", "In terms of potential biases that may occur with untrained annotators, we observe that the students retained their capability to reflect on the suggested label.", "We furthermore do not observe a general amplification in terms of bias with interactively updated suggestions; however, we find that such effects are very specific to individual annotators.", "We hence conclude that interactively updated label suggestions need to be considered carefully when applied to non-expert annotation scenarios.", "For future work, we plan to leverage our setup to annotate tweets from a larger time span.", "In Germany, the measures taken by the government have been met with divided public reaction, starting with reactions of solidarity and shifting towards a more critical public opinion (Viehmann et al., 2020a,b).", "In particular, we are interested in whether our label suggestion model is robust enough to account for such a shift in label distribution.", "This work has been supported by the German Research Foundation (DFG) as part of the Research Training Group KRITIS No.", "GRK 2222/2.", "Further, this work is supported by the European Regional Development Fund (ERDF) and the Hessian State Chancellery Hessian Minister of Digital Strategy and Development under the promotional reference 20005482 (TexPrax).", "We thank the student volunteers from the University of Mainz for their annotations as well as Johannes Daxenberger, Mohsen Mesgar, Ute Winchenbach, Kevin Stowe and the 
anonymous reviewers for their valuable feedback.", "The methods we use to collect Tweets are in compliance with Twitter's terms of service.", "We only release the set of identifiers (Tweet IDs) for the texts used in this research project.", "Thereby, we adhere to the Twitter Developer policy and give users full control over their privacy and data, as they can delete or privatize tweets so that they cannot be collected.", "We asked student annotators for voluntary participation in the annotation study.", "All students have been informed about the goal of the conducted research and the purpose of the collected annotations.", "During annotation, no information about the tweet's author or any other additional metadata was made available to the annotators.", "We did not collect any personal data from the students before, after, or during the annotation task.", "Data usage.", "This work presents an investigation of efficient data annotation methods in a case study on social media data.", "The results of this work allow social science researchers to apply their analysis on a larger scale.", "In the case of analyzing public opinion on governmental measures, the resulting analysis allows politicians to make more socially sensitive public decisions.", "This information is useful in aggregated form, without the need for information about individual users.", "However, we want to point out that users of social media (particularly Twitter) do not constitute a representative sample of the general population, especially in Germany (Newman et al., 2020).", "Therefore, our goal is not to foster public decision-making solely based upon analysis of Twitter but to provide an additional supporting tool.", "Dual use.", "Further, we acknowledge the potential for misuse of our dataset: the annotated data allows anyone, including both individuals and organizations, to train models to identify individuals expressing their consent or dissent with governmental actions.", "To this end, we follow the argumentation by Benton et al. (2017) that in general we cannot prevent publicly available data from being misused, but we want to make both researchers and the general public aware of the possible malicious use." ]
[ "objective", "abstain", "abstain", "result", "result", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "method", "objective", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "result", "result", "result", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method" ]
[ "Ad hominem attacks are those that target some feature of a person's character instead of the position the person is maintaining.", "These attacks are harmful because they propagate implicit biases and diminish a person's credibility.", "Since dialogue systems respond directly to user input, it is important to study ad hominems in dialogue responses.", "To this end, we propose categories of ad hominems, compose an annotated dataset, and build a classifier to analyze human and dialogue system responses to English Twitter posts.", "We specifically compare responses to Twitter topics about marginalized communities ( #Black-LivesMatter , #MeToo ) versus other topics ( #Vegan , #WFH ), because the abusive language of ad hominems could further amplify the skew of power away from marginalized populations.", "Furthermore, we propose a constrained decoding technique that uses salient n -gram similarity as a soft constraint for topk sampling to reduce the amount of ad hominems generated.", "Our results indicate that 1) responses from both humans and DialoGPT contain more ad hominems for discussions around marginalized communities, 2) different quantities of ad hominems in the training data can influence the likelihood of generating ad hominems, and 3) we can use constrained decoding techniques to reduce ad hominems in generated dialogue responses.", "Ad hominems attack an opponent's character or identity instead of the points the opponent is making, and can exist in any conversational setting between two or more entities.", "From an argumentation perspective, ad hominems are fallacies, and fallacies rely on faulty reasoning to advance a point (Hansen, 2020).", "These ad hominem fallacies are related to abusive language, toxicity, and microaggressions, and can be expressed with both subtle and explicitly offensive language.", "Table 1 presents Post: Many are trying to co-opt and mischaracterize the #blacklivesmatter movement.", "examples of ad hominem responses to Twitter posts.", "Undesirable in any response, ad hominems are unproductive in furthering a meaningful discussion and can reinforce falsehoods.", "However, these attacks appeal to emotions and implicit biases to argue a point, and are thus often effectively harmful regardless of whether the attacks are true, recognized, or retracted (Yap, 2013).", "Our work is motivated by this fallacy's potential to amplify the spread of harmful societal biases.", "For communities that are already disproportionately harmed by societal power inequalities, ad hominems further amplify the power imbalance.", "Tone policing is a type of ad hominem that seeks to regulate the emotions that a person (usually of a marginalized population) can use to deliver their points (e.g., not too angrily), thereby altogether invalidating the style of delivery, the person's competence, and the points being conveyed.", "Besides directly experiencing ad hominem attacks, marginalized groups could also be disproportionately discouraged from using technologies that propagate these attacks, since abusive language from a technology can deter people from using the technology (Sood et al., 2012b).", "The goal of this study is to analyze ad hominems in dialogue systemand human-generated responses for topics that vary in impact to marginalized populations.", "Through analysis, we formulate techniques to reduce ad hominem responses and thus the associated harms, which is especially important for dialogue systems since these systems directly interact with users.", "We analyze responses 
from DialoGPT (Zhang et al., 2020a) and humans to English Twitter posts.", "Specifically, we compare responses to Twitter topics about marginalized communities ( #Black-LivesMatter , #MeToo ) versus other topics ( #Vegan , #WFH ).", "Through human annotation and trained classifiers, we find that ad hominems exist in both human and DialoGPT responses.", "Across response sources, there are more ad hominems in #Black-LivesMatter and #MeToo -related responses, fewer in #Vegan -related responses, and even fewer in #WFH -related responses.", "The presence of more ad hominems in responses to social issues that concern marginalized groups has troubling implications about the amplified harms toward these groups.", "Given our analysis, we further propose a constrained decoding algorithm to reduce the amount of ad hominems generated by dialogue systems.", "By using salient n -gram similarity to apply soft constraints to topk sampling, our proposed technique is simple, extensible to reducing other harms, and does not require much additional computation.", "At each decoding time step, the technique compares the similarity between the current generated output and salient ad hominem versus non-ad hominem n -grams, possibly selecting alternative token candidates to generate.", "This technique is effective at reducing the amount of ad hominems generated across topics while maintaining coherence and relevance.", "Our main contribution is a novel analysis of ad hominem responses generated by humans and DialoGPT across topics varying in impact to marginalized communities.", "For this analysis, we propose empirically-derived ad hominem categories that are further verified through annotation.", "Furthermore, we build a new dataset of Twitter posts paired with humanand DialoGPT-generated responses, where the responses have ad hominem-related labels.", "Finally, we devise a constrained decoding technique that uses salient n -gram similarity to steer topk sampling away from ad hominem responses.", "We release data and code at https://github.com/ ewsheng/ad-hom-in-dialogue .", "This work is related to a broad spectrum of topics, including prior definitions of ad hominems and how ad hominems facilitate biases.", "Also, analyzing ad hominems in dialogue systems is related to examining offensive language and other harms.", "Lastly, we discuss existing constrained decoding methods.", "Ad Hominems In the argumentation literature, theoretical ad hominems include the abusive (attack on the opponent's character), tu quoque (he did it first), circumstantial (accusation of hypocrisy), and guilt by association (associating the opponent with someone with low credibility) (Walton, 1998; Woods, 2007).", "Wijze (2003) criticizes that these textbook examples are not realistic in conversation.", "For more empirical categories, Habernal et al. (2018) propose ad hominem types based on analysis of Reddit's ChangeMyView discussion threads, and Delobelle et al. (2019) analyze the name-calling and abusive categories.", "Moreover, Wulczyn et al. 
"We build upon prior work to define and analyze ad hominems in a conversational setting.", "Additionally, Yap (2013) discusses the harmful effects of implicit biases in forming and evaluating ad hominems.", "They emphasize that ad hominem attacks can be harmful to a person's credibility and expertise even if the attack is recognized as fallacious and irrelevant to the argument.", "In particular, because societal norms allow biases and stereotypes to detract from a person's credibility or expertise, the use of ad hominems can further diminish the rhetorical credibility (Govier, 1993) of marginalized groups.", "Offensive Language Detection Ad hominems occur in many forms and are related to different types of offensive language, including abusive language (Yin et al., 2009; Chen et al., 2012; Nobata et al., 2016), hate speech (Warner and Hirschberg, 2012; Kwok and Wang, 2013; Djuric et al., 2015), profanity (Sood et al., 2012a), and the more subtle forms of microaggressions (Breitfeller et al., 2019) and projecting biases and stereotypes through power differentials in language (Sap et al., 2020).", "Ranging from outright insults to condescension, ad hominems are a form of offensive language that is difficult to comprehensively and objectively define.", "Nonetheless, these responses are important to characterize, since they can irreparably damage a person's credibility.", "It is also generally important to identify these subtle forms of offensive language, since it is unclear whether existing offensive language detection techniques are equally effective for them.", "Harms in Dialogue Systems Conversational systems are known to perpetuate several types of harms.", "Ruane et al. (2019) caution about harms that can result from using conversational systems and propose striving for trust and transparency; Roller et al. (2020) suggest techniques for chatbot safety.", "For analysis, Sheng et al. (2019) evaluate societal biases in language generation, Curry and Rieser (2018) study how conversational systems respond to sexual harassment, and Khatri et al. (2018) detect offensive content with a semi-supervised approach.", "To reduce harms, Sheng et al. (2020) present a framework for controlling biases in language generation, and Dinan et al. (2019) show how adversarial attacks can make models more robust to offensive language usage from humans.", "Constrained Decoding For constrained decoding, prior works focus on incorporating words or phrases (as hard or soft constraints) into the decoded output.", "Swanson et al. (2014) and Balakrishnan et al. (2019) use parse trees, among other techniques, to enforce constraints in the generated text.", "Hokamp and Liu (2017) and Post and Vilar (2018) propose variants of Grid Beam Search, which generates output that includes lexical constraints.", "Miao et al. (2019), Zhang et al. (2020b), and Susanto et al. (2020) explore insertion-based non-autoregressive decoding algorithms.",
"To be compatible with an autoregressive model like DialoGPT and effective for open-domain generation, we apply constrained decoding to top-k sampling.", "Our method also differs from these prior works in that it imposes soft constraints to avoid generating phrases that are likely to lead to ad hominems.", "Decoding-time techniques that can be used to reduce harmful language generation, e.g., the Plug and Play Language Model (PPLM) (Dathathri et al., 2020), are most relevant to our technique.", "In this section, we describe the collection of our dataset and the dialogue model variations we analyze.", "Dataset Collection Our goal is to understand how ad hominem responses differ across discussions that vary in impact and relevance to marginalized groups.", "To that end, we extract English [post, response] pairs on different topics from Twitter and also use DialoGPT to generate responses for all collected posts.", "We refer to this collective dataset as the ADHOMINTWEETS dataset.", "We divide topics into polarizing (i.e., controversial) and non-polarizing; we expect there to be more strong opinions for the polarizing topics and thus more ad hominem responses for those topics.", "For this study, we choose the topic WFH (work from home) as a non-polarizing topic and collect Twitter posts that include the hashtag #wfh or #workingfromhome .", "Polarizing topics can further be divided into those that are directly relevant to marginalized communities and those that are not.", "For the latter, we choose the topic Vegan and collect posts that include any of the hashtags #vegan , #veganism , #govegan , or #veganlife (Habernal et al. (2018) find that vegan-related topics are among the top topics that contain ad hominems in their study).", "For polarizing topics that are directly relevant to marginalized groups, we focus on the topics BLM (from #blacklivesmatter posts) and MeToo (from #metoo posts).", "#blacklivesmatter is related to justice, healing, and freedom for Black people across the globe ( https://blacklivesmatter.com ), and #metoo is related to the movement against sexual violence ( https://metoomvmt.org ).", "In total, we collect 14,585 [post, response] pairs of Tweets posted between Aug. 7 and Oct. 29, 2020; detailed data statistics are in Table 2.", "We replace all usernames and urls with special placeholders to better anonymize the data.", "Models In this work, we analyze responses from the DialoGPT (Zhang et al., 2020a) dialogue model.", "DialoGPT was originally trained on web data, and then was further fine-tuned for multi-turn conversational capabilities on Reddit data.", "Since models can vary in harm depending on the training data, we compare responses from the original medium-sized DialoGPT to responses from DialoGPT separately fine-tuned on each of the four topics from the human response subset of ADHOMINTWEETS (more details are in Appendix A.2).",
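To make the response-generation step above concrete, the following is a minimal sketch of producing a DialoGPT response to a collected post with the publicly released HuggingFace checkpoint; the sampling settings below are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: generating a DialoGPT response to a Twitter post with the
# HuggingFace `transformers` library. Sampling parameters (top_k, length
# cap) are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

post = "Many are trying to co-opt and mischaracterize the #blacklivesmatter movement."
# DialoGPT delimits dialogue turns with the EOS token.
input_ids = tokenizer.encode(post + tokenizer.eos_token, return_tensors="pt")

output_ids = model.generate(
    input_ids,
    max_length=input_ids.shape[-1] + 40,  # cap the response length
    do_sample=True,
    top_k=40,                             # top-k sampling, as in the paper
    pad_token_id=tokenizer.eos_token_id,
)
response = tokenizer.decode(output_ids[0, input_ids.shape[-1]:],
                            skip_special_tokens=True)
print(response)
```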
"Identifying Ad Hominem Responses It is generally difficult to settle on a comprehensive list of ad hominem categories.", "We build upon the work of Habernal et al. (2018) to devise ad hominem categories that are both empirically motivated and can be annotated with high inter-annotator agreement.", "We specifically include categories such as ignorance and condescension to cover more subtle forms of personal attacks (e.g., tone policing, mansplaining) that could further diminish the credibility of those who are already marginalized.", "We also limit the definition of ad hominem to personal attacks towards the author of the post and not a third person.", "We collect human annotations that can then be used for analysis and for training a classifier to automatically label ad hominems.", "Although Habernal et al. (2018) propose a similar typology of ad hominems, there is no existing dataset annotated with their empirically derived categories.", "Moreover, we study ad hominems in casual conversational settings.", "For these reasons, we annotate a subset of ADHOMINTWEETS with ad hominem information.", "To measure inter-annotator agreement, we calculate the Worker Agreement With Aggregate (WAWA) score, following Ning et al. (2020).", "The WAWA score compares the majority votes against each annotator and micro-averages the resulting precision, recall, and F1 scores.", "There are also other agreement metrics, such as Krippendorff's alpha, but because we expect our data to have many more non-ad hominem than ad hominem responses, alpha scores can be misleading; the WAWA score gives a more appropriate estimate of annotator agreement.",
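To make the WAWA computation concrete, here is a small sketch; the per-item majority vote convention and the binary encoding (1 = ad hominem) are our assumptions about details the text leaves implicit.

```python
# Sketch of the WAWA (Worker Agreement With Aggregate) score: compare each
# annotator's labels against the per-item majority vote, then micro-average
# precision/recall/F1 over all (annotator, item) pairs.
from collections import Counter

def wawa(annotations):
    """annotations: list of items, each a list of 0/1 labels from annotators."""
    tp = fp = fn = 0
    for labels in annotations:
        majority = Counter(labels).most_common(1)[0][0]  # majority vote
        for label in labels:
            if label == 1 and majority == 1:
                tp += 1
            elif label == 1 and majority == 0:
                fp += 1
            elif label == 0 and majority == 1:
                fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# e.g., three annotators over four items:
print(wawa([[1, 1, 0], [0, 0, 0], [1, 1, 1], [0, 1, 1]]))
```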
"Heuristics for Ad Hominems Ad hominem responses are relatively rare and range broadly from explicit to more subtle forms.", "For more effective annotation, we use heuristics to choose [post, response] pairs where the response is likely to be an ad hominem.", "In preliminary analyses, we find that responses that contain certain you-phrases, such as you are, are more likely to have ad hominems.", "We call these responses you-responses (the full set of you-responses is in Appendix A.1).", "In addition to pairs with you-responses, we also collect random pairs without you-responses for annotation to ensure that the annotated samples are representative of different ad hominems.", "Annotation Task We ask annotators on Mechanical Turk to read a post and response and determine whether the response contains any ad hominem(s) towards the person who made the post.", "We divide ad hominems into the following categories: stupidity, ignorance, trolling/lying, bias, condescension, and other; examples are in Table 3.", "Annotation Round 1 The goal for the first round of human annotation is to collect enough data to train an ad hominem classifier.", "To balance targeted and random samples, for each topic (BLM, MeToo, Vegan, WFH) and response source (human, DialoGPT) pair, we randomly select 150 [post, response] pairs with you-responses and another 150 pairs without you-responses for annotation.", "In total, we gather 2,400 [post, response] pairs that are then annotated through Mechanical Turk.", "Additional Annotations We conduct three more rounds of annotations to retrieve more ad hominem responses.", "For the second and third rounds, we use an ad hominem classifier trained on data from all previous rounds (with the same architecture and hyperparameters as the final classifier in Sec. 4.2) to label unseen samples in ADHOMINTWEETS.", "We then select a balanced amount of automatically labeled ad hominems and non-ad hominems from each [topic, response source] pair to annotate.", "Some topics (e.g., WFH and Vegan) prompt fewer ad hominem responses, so it is difficult to find enough of these responses in the wild to train a more accurate classifier.", "Our solution is to manually take the responses annotated as ad hominems and pair them with WFH or Vegan posts.", "To verify that these new pairs contain ad hominem responses, we run a fourth round of annotation on these pairs and only keep the ones where the majority of annotators label the response as an ad hominem to the post.", "We combine majority annotations across all rounds of annotations to train the final ad hominem classifier used for analysis.", "For large-scale analysis of ad hominems in human and dialogue system responses, we rely on classifier annotation.", "To simplify the learning problem, we condense the different ad hominem categories into a binary yes/no scheme, where yes indicates the presence of any type and quantity of ad hominems in the response given the post.", "We build a classifier to automatically label whether a response contains ad hominems for a given post by fine-tuning a BERT (Devlin et al., 2019) model with the input format [CLS] POST [SEP] RESPONSE [SEP].",
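The following is a minimal sketch of this fine-tuning setup with the HuggingFace transformers library; the checkpoint name, learning rate, and training-loop details are illustrative assumptions rather than the authors' exact configuration.

```python
# Sketch: BERT-based ad hominem classifier over [CLS] POST [SEP] RESPONSE [SEP].
# Checkpoint, optimizer settings, and batch handling are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # 0 = non-ad hominem, 1 = ad hominem

def encode(posts, responses):
    # Passing two sequences yields [CLS] post [SEP] response [SEP].
    return tokenizer(posts, responses, padding=True, truncation=True,
                     return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
batch = encode(["everyone should follow veganism"],
               ["those who promote veganism are arrogant fools"])
labels = torch.tensor([1])

model.train()
loss = model(**batch, labels=labels).loss  # cross-entropy over the two classes
loss.backward()
optimizer.step()
```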
"We additionally include comparisons to a baseline classifier built on top of DialoGPT to similarly label whether a post and response pair indicates the presence of an ad hominem response.", "This baseline classifier allows a comparative evaluation of a bidirectional encoder model versus an autoregressive decoder model for ad hominem classification, and of how this difference may affect the quality of control techniques that rely on the latter (e.g., PPLM (Dathathri et al., 2020), GeDi (Krause et al., 2020)).", "Appendix A.2 includes more details of our model implementation and data statistics (Table 8).", "Ultimately, the goal is to train an ad hominem detection classifier that has high accuracy across sources and topics, so we curate the dev and test datasets to be balanced across topics, response sources, and ad hominem versus non-ad hominem samples (through downsampling).", "Because of the natural imbalance of ad hominem responses across topics, ad hominem responses for topics like WFH are relatively sparse compared to those for topics like BLM.", "We automatically augment our training set to combat this sparsity.", "First, we accumulate all posts and responses not present in the dev and test sets.", "Next, we choose a random post to pair with a random labeled response to form a new sample.", "We generate these new data samples to roughly balance the number of samples across topics and across ad hominems versus non-ad hominems for each topic.", "These new combinations of [post, response] pairs help de-emphasize spurious correlations between topics and classifier labels.", "By randomly forming new [post, response] pairs during augmentation, we do not explicitly account for responses that are context-specific; however, we find context-specific responses to be relatively rare, and our augmentation empirically results in a more robust classifier.", "Since the automatic augmentation reduces emphasis on the post when predicting the presence of ad hominems in the response, a natural question is whether the post is really necessary to gauge whether the response contains ad hominems.", "The answer is mixed; for example, the response you're a troll is an ad hominem for any post.", "However, the response those who promote veganism are arrogant fools is an ad hominem given the post everyone should follow veganism, but not an ad hominem given the post I don't understand veganism.", "Empirically, by limiting the classifier input to only responses, the classifier performs worse than if it has both the post and response as input.",
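A small sketch of the random re-pairing augmentation described above is given below; the per-bucket target count and the exact bucketing by (topic, label) are our assumptions about details the text leaves open.

```python
# Sketch of the augmentation: pair random posts with random labeled responses
# so each (topic, label) bucket is roughly balanced. Target counts and the
# bucketing scheme are illustrative assumptions.
import random

def augment(posts_by_topic, labeled_responses, target_per_bucket=500, seed=0):
    """posts_by_topic: {topic: [post, ...]};
    labeled_responses: list of (response, label) with label in {0, 1}."""
    rng = random.Random(seed)
    pool = {0: [r for r, l in labeled_responses if l == 0],
            1: [r for r, l in labeled_responses if l == 1]}
    samples = []
    for topic, posts in posts_by_topic.items():
        for label in (0, 1):
            for _ in range(target_per_bucket):
                # A response keeps its label regardless of the paired post,
                # matching the approximation acknowledged in the text.
                samples.append((rng.choice(posts), rng.choice(pool[label]), label))
    rng.shuffle(samples)
    return samples
```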
"Reducing Ad Hominem Responses Inspired by the success of n-gram features in detecting abusive language by Nobata et al. (2016), we propose a constrained decoding algorithm to discourage the model from generating n-grams that are semantically similar to salient n-grams found in ad hominem responses.", "While we motivate this technique within the context of ad hominems, the technique is applicable to other subtle harms (e.g., microaggressions) in language generation.", "A naive method to generate fewer ad hominems is to block words that are likely to occur in ad hominems.", "However, ad hominems are contextually determined, meaning that phrases are a better indicator than words; this motivates our use of n-grams.", "Additionally, our algorithm uses soft constraints because there are no words or phrases that always indicate the presence of an ad hominem.", "In this section, we describe how our technique SALIENSIMTOPk extends top-k sampling by incorporating n-gram similarity constraints.", "Salient n-grams We define salient ad hominem n-grams to be n-grams that appear more frequently in ad hominem responses than in non-ad hominem responses.", "Similarly, salient non-ad hominem n-grams appear more frequently in non-ad hominem responses than in ad hominem responses.", "We use the salience score as defined by Li et al. (2018): S(u, a) = (count(u, D_a) + λ) / ((Σ_{a' ∈ A, a' ≠ a} count(u, D_{a'})) + λ). (1)", "In Eq. (1), u denotes an n-gram, and D = {(s_1, a_1), ..., (s_m, a_m)} is a corpus where each sample is a sentence s_i labeled with an attribute a_i.", "D_a is therefore the set of sentences in the corpus with the attribute a.", "A is the set of possible attributes (e.g., ad hominem or non-ad hominem).", "We define the n-gram u to be salient for the attribute a if S(u, a) ≥ γ.", "We find setting the smoothing parameter λ = 0.5 and the threshold γ = 5.5 effective for our experiments, and we compute the salience of 3-, 4-, and 5-grams.", "Table 4 shows that the top salient ad hominem n-grams are intuitively those that are likely to lead to ad hominems.", "For example, you're being a is used in contexts such as you're being a hypocrite.", "A more overt example of a phrase likely to lead to an ad hominem response is you're a troll.", "The number of you-responses among the salient ad hominem n-grams verifies our intuition that many ad hominem responses occur in the form of you-responses.", "Also, we find that there are more salient ad hominem n-grams than non-ad hominem n-grams, and that the former generally have higher salience scores.", "These observations and preliminary experiments suggest that it is useful to consider both types of salient n-grams to reduce ad hominems.",
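To illustrate Eq. (1), the sketch below computes salience scores over two labeled response sets; the λ and γ values follow the paper, while whitespace tokenization and lowercase n-gram extraction are our assumptions.

```python
# Sketch of the salience score in Eq. (1): for each n-gram u and attribute a,
# S(u, a) = (count(u, D_a) + lam) / (count(u, D_other) + lam).
from collections import Counter

def ngram_counts(sentences, n):
    counts = Counter()
    for s in sentences:
        toks = s.lower().split()
        counts.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return counts

def salient_ngrams(d_a, d_other, n=3, lam=0.5, gamma=5.5):
    """Return n-grams u with S(u, a) >= gamma, per Eq. (1)."""
    counts_a = ngram_counts(d_a, n)
    counts_other = ngram_counts(d_other, n)
    return {u: (c + lam) / (counts_other.get(u, 0) + lam)
            for u, c in counts_a.items()
            if (c + lam) / (counts_other.get(u, 0) + lam) >= gamma}

ad_hom = ["you're being a hypocrite", "you're a troll and a liar"]
non_ad_hom = ["i agree with your point", "thanks for sharing this"]
# A tiny demo corpus cannot reach gamma = 5.5, so lower the threshold here:
print(salient_ngrams(ad_hom, non_ad_hom, n=3, gamma=2.0))
```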
"Top-k Sampling For open-domain language generation, top-k sampling (Fan et al., 2018) and top-p nucleus sampling (Holtzman et al., 2019) are popular decoding algorithms that have been shown to maintain topic consistency and promote diversity.", "We experiment with constrained decoding through top-k sampling, though our technique is also applicable to nucleus sampling.", "As top-k sampling is a general decoding algorithm that can be used with various language generation models without further tuning or training, expanding upon this technique allows for a computationally light generalizability.", "Algorithm 1 (SALIENSIMTOPk) takes as input the tokens x, the number of top tokens k, the number of candidate tokens t, the number of recent tokens r, the salient ad hominem average n-gram embeddings A, the salient non-ad hominem average n-gram embeddings B, and a semantic similarity threshold δ; it outputs tokens y. Starting from y = x, while len(y) < max_steps + len(x), it computes vocab_logits = model(y), forms P' by keeping the top-k logits and rescaling, samples t candidate tokens using P', and iterates over the candidates, accepting a candidate immediately if a special_condition holds and otherwise applying the similarity test described below.", "SALIENSIMTOPk We reduce the amount of generated ad hominems by encouraging the generation of n-grams that are semantically dissimilar to salient ad hominem n-grams and similar to salient non-ad hominem n-grams.", "Alg. 1 details the constraints we add to top-k sampling.", "In the for-loop, we iterate through each candidate token.", "If the current generated output meets a special_condition (e.g., the backtracking limit, or the first r time steps), then we select the current candidate token.", "Otherwise, we retrieve and average DialoGPT's embeddings over the most recently generated r-gram to calculate c, an e-dimensional vector where e is the size of the token embedding.", "We similarly compute representations to form A, a j × e matrix of j salient ad hominem average n-gram embeddings, and B, a k × e matrix of k salient non-ad hominem average n-gram embeddings.", "We then calculate the average pairwise similarity sim_a = (1/j) Σ_{i=1}^{j} sim(A_i, c), where A_i is the i-th row of A, and similarly sim_b.", "We select the current token if the difference between the similarities is under a threshold δ, i.e., if the current r-gram is less similar to the ad hominem n-grams and more similar to the non-ad hominem n-grams.", "Otherwise, we backtrack to the previous time step if we iterate through all candidates without finding a suitable one.", "By limiting the number of times the algorithm can backtrack while generating a sample, this algorithm adds a constant amount of computational resources compared to the original, non-constrained decoding.", "Implementation Details In our experiments, we set k = 40 (commonly used in previous generation tasks (Radford et al., 2019)).", "With parameter tuning, we find t = 10 and δ = 0 effective for our setup.", "We use r = 5 to compare the averaged embedding of the most recent 5-gram with those of the salient 3-, 4-, and 5-grams.", "Additionally, we use cosine similarity as the similarity metric, and our special_condition includes either a) a limit of 5 for backtracking or b) the first r time steps.",
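A condensed sketch of the SALIENSIMTOPk decoding loop is given below; it follows the description above, but the exact backtracking bookkeeping and the embedding lookup are simplified assumptions.

```python
# Sketch of SALIENSIMTOPk: top-k sampling with a soft n-gram similarity
# constraint. `model` maps a token-id tensor to next-token logits; `embed`
# maps token ids to embedding vectors.
import torch
import torch.nn.functional as F

def salien_sim_topk(model, embed, x, A, B, k=40, t=10, r=5, delta=0.0,
                    max_steps=40, backtrack_limit=5):
    """A, B: matrices of salient ad hominem / non-ad hominem average
    n-gram embeddings, one row per n-gram."""
    y, backtracks = list(x), 0
    while len(y) < max_steps + len(x):
        logits = model(torch.tensor([y]))[0, -1]
        top = torch.topk(logits, k)
        probs = F.softmax(top.values, dim=-1)          # rescale over top-k
        cands = top.indices[torch.multinomial(probs, t)]
        chosen = None
        for cand in cands:
            # special_condition: backtracking limit hit, or first r steps
            if backtracks >= backtrack_limit or len(y) - len(x) < r:
                chosen = int(cand)
                break
            recent = y[-(r - 1):] + [int(cand)]        # most recent r-gram
            c = embed(torch.tensor(recent)).mean(dim=0)
            sim_a = F.cosine_similarity(A, c.unsqueeze(0)).mean()
            sim_b = F.cosine_similarity(B, c.unsqueeze(0)).mean()
            if sim_a - sim_b < delta:                  # soft constraint passes
                chosen = int(cand)
                break
        if chosen is None:                              # no candidate passed
            y.pop()                                     # backtrack one step
            backtracks += 1
        else:
            y.append(chosen)
    return y
```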
"Annotation Across all rounds of annotations, the average WAWA scores include a precision of 0.82, a recall of 0.92, and an F1 of 0.87, indicating moderately high majority agreement.", "Generally, the agreement scores for the human responses are slightly higher than those for the DialoGPT responses; we hypothesize that the former tend to be more coherent and longer, and thus more informative.", "Ad Hominem Classifier The resulting BERT-based classifier has an overall dev F1 score of 83.3% and a test F1 score of 80.0% for ad hominems.", "The DialoGPT-based classifier has a dev F1 score of 74.6% and a test F1 score of 72.6%, supporting our use of the BERT-based classifier to automatically detect ad hominems in the rest of this work.", "The full breakdown of F1 scores across topics and response sources is shown in Table 5 and Appendix Table 9.",
"Table 5: BERT-based classifier F1 scores for ad hominem responses across topics and response sources.
Topic | Source   | dev  | test | avg
BLM   | Human    | 83.3 | 82.9 | 83.1
BLM   | DialoGPT | 84.2 | 75.7 | 80.0
MeToo | Human    | 80.0 | 73.7 | 76.9
MeToo | DialoGPT | 85.0 | 80.0 | 82.5
Vegan | Human    | 80.0 | 70.6 | 75.3
Vegan | DialoGPT | 82.9 | 82.9 | 82.9
WFH   | Human    | 77.8 | 83.3 | 80.6
WFH   | DialoGPT | 92.3 | 88.4 | 90.4",
"Ad Hominem Categories By comparing ad hominem types across the manually annotated human and DialoGPT responses, we find that ad hominems in human responses frequently occur in the forms of condescension and ignorance, while ad hominems in DialoGPT responses occur in the forms of ignorance and other types (Table 11 in the Appendix).", "These results indicate that responses from different sources and topics are likely to contain different ad hominems.", "Formally categorizing ad hominems allows for more consistent annotations and a better understanding of the types DialoGPT is prone to generate.", "DialoGPT Responses The classifier enables us to perform a large-scale study of ad hominem trends across various contexts for the entire ADHOMINTWEETS dataset.", "Figure 1 shows the percentage of ad hominem responses to posts across topics and response sources.", "Focusing on the Human and DialoGPT bars for each topic, we see that ad hominem responses are present across all topics for both response sources.", "Additionally, ad hominem responses occur more frequently in discussions related to BLM and MeToo and less frequently in discussions related to Vegan and WFH.", "Vegan discussions also seem to attract more ad hominem responses than WFH discussions.", "The relatively higher rates of ad hominem responses in topics related to marginalized communities indicate the elevated potential for harm towards these communities.", "Fine-tuned DialoGPT Responses Figure 1 also shows that fine-tuning on datasets that contain more ad hominem responses leads to more generation of ad hominem responses across topics (Table 13 in the Appendix includes examples generated by the fine-tuned models).", "From these results, we infer that the original DialoGPT (which was fine-tuned from GPT-2) was trained on a dataset that likely contained relatively more rather than fewer ad hominems.", "Additionally, fine-tuning on a carefully chosen dataset can reduce the quantity of generated ad hominems and the associated harms.", "Baselines We compare techniques from two classes of harm reduction methods for language generation: data-based and decoding-based.", "Gehman et al. (2020) define data-based techniques as those where further model training on more data is necessary, and decoding-based techniques as those where the generation strategy is changed without changing model parameters.", "For our main decoding-based SALIENSIMTOPk technique, we introduce four baselines to span the different classes of harm reduction techniques.", "The first baseline is simply the original DialoGPT.", "Our data-based reduction baseline is DialoGPT fine-tuned on the WFH dataset, as described in Sec. 3.", "For the first decoding-based baseline, we rely on a gradient-based method post-training to find a trigger phrase, which is then attached to a prompt at inference time to influence the generated output (Wallace et al., 2019).", "Sheng et al. (2020) further propose a framework to use these triggers to control societal biases, and we use these methods to find a trigger that can induce DialoGPT to generate fewer ad hominems and more non-ad hominems when prepended to posts about different topics.", "For the second decoding-based baseline, we use the Plug and Play Language Model (PPLM) proposed by Dathathri et al. (2020), which guides a pre-trained language model's generated output using gradients from attribute classifiers (more details are in Appendix A.3 and A.4).",
"Human Annotation To verify ad hominem trends from the automatic evaluation, we randomly select 100 samples from each [reduction technique, topic] pair for additional human annotation.", "General Trends Classifier and human evaluations for techniques to reduce ad hominems are in Figure 2, and examples of generated responses are in Table 6.", "The classifier-labeled results allow us to evaluate 14.5K samples across all topics per response source, and the human-labeled results allow us to more accurately evaluate a smaller set of samples.", "Overall, the trends for classifier and human evaluations are similar, and the evaluations suggest that all ad hominem reduction techniques are effective compared to the original DialoGPT.", "Furthermore, SALIENSIMTOPk is more effective than the other individual techniques, and combining fine-tuning and SALIENSIMTOPk has promise for further reducing the amount of generated ad hominems.", "For SALIENSIMTOPk, limiting the number of times we backtrack to previous time steps ensures that the algorithm is not significantly slower than the original top-k sampling algorithm.", "Empirically, we find that using SALIENSIMTOPk with a backtracking limit of 5 on the original DialoGPT results in 13% of the decoding operations being non-forward operations, where the set of decoding operations are: a) choosing the current token and moving forward to the next timestep, b) looking for an alternate token at the same timestep, or c) moving backward to a previous timestep.", "When applying constrained decoding to DialoGPT fine-tuned on WFH, 10% of the operations are non-forward operations.", "Since ad hominems are less common than non-ad hominems, the algorithm is able to proceed with the first sampled candidate token in most time steps.", "Additionally, models or topics that are inclined to generate more ad hominems incur more non-forward operations.", "Coherence and Relevance Evaluation To ensure that the ad hominem reduction techniques do not affect the quality of the generated responses, we have annotators label the coherence and relevance of a response to a post, both on a scale of 1 to 5, where a higher score is better.", "The trigger method produces samples that are relatively more coherent, although at the cost of lower relevance to the post.", "PPLM generates responses that are relatively lower in both coherence and relevance.", "SALIENSIMTOPk manages to maintain a decent balance of generating both coherent and relevant responses.", "Combining SALIENSIMTOPk with fine-tuning on WFH data results in responses that are slightly less coherent and mixed in relevance for different topics.", "Annotator agreement is relatively low for coherence (0.38), indicating the subjectivity of the task.", "Discussion The collective results indicate that SALIENSIMTOPk is an effective standalone ad hominem reduction technique that maintains generated text quality; while it can be combined with other techniques to further reduce ad hominems, one should carefully evaluate the trade-offs between response coherence and relevance.", "Additionally, for reducing harmful language types that are more subjective or difficult to detect, straightforward control techniques that rely on salient n-grams may be more useful than techniques that rely on noisier signals from classifiers.", "Ad hominem responses from dialogue systems are offensive, stall conversations, and are especially harmful for marginalized communities.",
"We analyze responses and find that discussions on topics that affect marginalized groups contain more ad hominems.", "Through a novel constrained decoding technique, we decrease the amount of ad hominems generated by dialogue systems while keeping the response quality comparable.", "Furthermore, our method can be easily applied to other pre-trained language generation models and to other subtle yet harmful language.", "More broadly, our work strives to understand ad hominems in the context of harms in conversational systems.", "This work identifies personal attacks in responses generated by dialogue systems, quantifies the disproportionate amount generated for topics concerning marginalized populations, and proposes methods to reduce ad hominem-related harms.", "Dataset We collect an English dataset from Twitter and ensure that personal information (e.g., usernames, emails, urls) is discarded.", "We also collect crowd-sourced annotations for this dataset through Mechanical Turk, where we ask for judgements of whether a response contains ad hominems for a given post, and of the coherence and relevance of a response.", "No information about the annotators is collected from the annotation tasks.", "The annotation information (pay per amount of work, guidelines) is in the Appendix.", "One annotation aspect that we did not control for is whether the annotators themselves are from marginalized communities.", "When measuring harms towards different demographics, it is important to consider the lived experiences of those groups and how these experiences may affect our analyses.", "Future work includes specifically collecting annotations from marginalized groups.", "Additionally, we analyze ad hominems in responses to four Twitter topics and from one dialogue model, which leaves much room for exploring the generalizability of the trends we see.", "Techniques In terms of dual-use harms, our constrained decoding technique could potentially be used to amplify rather than reduce ad hominems (or other harmful language).", "However, we believe that by being transparent about this technique and releasing the associated code and data, we can better counter attempts at malicious misuse.", "Furthermore, to perform a large-scale analysis of ad hominems across different contexts, we build an automatic classifier.", "While we spent much effort on collecting representative train/dev/test datasets and on verifying classifier quality and observed trends with human labels, collecting more (diverse) data could help further improve the classifier's accuracy and robustness.", "In the meantime, we think this work introduces an important perspective on how ad hominems in dialogue systems reinforce unequal harms, and on effective reduction methods.", "We would like to thank members of the PLUS Lab and the anonymous reviewers for the helpful feedback, and Jason Teoh for the many discussions.", "This paper is supported in part by NSF IIS 1927554 and by the CwC program under Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA).", "The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government." ]
[ "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "result", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "objective", "objective", "method", "other", "abstain", "other", "abstain", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "other", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "other", "other", "other" ]
[ "Multilingual neural machine translation aims at learning a single translation model for multiple languages.", "These jointly trained models often suffer from performance degradation on rich-resource language pairs.", "We attribute this degeneration to parameter interference.", "In this paper, we propose LaSS to jointly train a single unified multilingual MT model.", "LaSS learns La nguage S pecific S ub-network (LaSS) for each language pair to counter parameter interference.", "Comprehensive experiments on IWSLT and WMT datasets with various Transformer architectures show that LaSS obtains gains on 36 language pairs by up to 1.2 BLEU.", "Besides, LaSS shows its strong generalization performance at easy adaptation to new language pairs and zero-shot translation.", "LaSS boosts zero-shot translation with an average of 8.3 BLEU on 30 language pairs.", "Codes and trained models are available at https: //github.com/NLP-Playground/LaSS .", "Neural machine translation (NMT) has been very successful for bilingual machine translation (Bah-danau et al., 2015; Vaswani et al., 2017; Wu et al., 2016; Hassan et al., 2018; Su et al., 2018; Wang, 2019).", "Recent research has demonstrated the efficacy of multilingual NMT, which supports translation from multiple source languages into multiple target languages with a single model (Johnson et al., 2017; Aharoni et al., 2019; Zhang et al., 2020; Fan et al., 2020; Siddhant et al., 2020).", "Multilingual NMT enjoys the advantage of deployment.", "Further, the parameter sharing of multilingual NMT encourages transfer learning of different languages.", "An extreme case is zero-shot translation, where direct translation between a language pair never seen in training is possible (Johnson et al., 2017).", "While very promising, several challenges remain in multilingual NMT.", "The most challenging one is related to the insufficient model capacity.", "Since multiple languages are accommodated in a single model, the modeling capacity of NMT model has to be split for different translation directions (Aha-roni et al., 2019).", "Therefore, multilingual NMT models often suffer from performance degradation compared with their corresponding bilingual baseline, especially for rich-resource translation directions.", "The simplistic way to alleviate the insufficient model capacity is to enlarge the model parameters (Aharoni et al., 2019; Zhang et al., 2020).", "However, it is not parameter or computation efficient and needs larger multilingual training datasets to avoid over-fitting.", "An alternative solution is to design language-aware components, such as division of the hidden cells into shared and language-dependent ones (Wang et al., 2018), adaptation layers (Bapna and Firat, 2019; Philip et al., 2020), language-aware layer normalization and linear transformation (Zhang et al., 2020), and latent layers (Li et al., 2020).", "In this work, we propose LaSS, a method to dynamically find and learn La nguage S pecific S ub-network for multilingual NMT.", "LaSS accommodates one sub-network for each language pair.", "Each sub-network has shared parameters with some other languages and, at the same time, preserves its language specific parameters.", "In this way, multilingual NMT can model language specific and language universal features for each language pair in one single model without interference.", "Figure 1 is the illustration of vanilla multilingual model and LaSS.", "Each language pair in LaSS has both language universal and language specific parameters.", "The 
"The advantages of our proposed method are as follows.", "LaSS is parameter efficient, requiring no extra trainable parameters to model language specific features.", "LaSS alleviates parameter interference, potentially improving the model capacity and boosting performance.", "LaSS shows strong generalization performance in easy adaptation to new language pairs and in zero-shot translation.", "LaSS can be easily extended to new language pairs without dramatic degradation on existing language pairs.", "Besides, LaSS can boost zero-shot translation by up to 26.5 BLEU.", "Multilingual Neural Machine Translation The standard multilingual NMT model uses a shared encoder and a shared decoder for different languages (Johnson et al., 2017).", "There is a transfer-interference trade-off in this architecture (Arivazhagan et al., 2019): boosting the performance of low-resource languages or maintaining the performance of high-resource languages.", "To address this trade-off, previous works assign some parts of the model to be language specific: language specific decoders (Dong et al., 2015), language specific encoders and decoders (Firat et al., 2016; Lyu et al., 2020), and language specific hidden states and embeddings (Wang et al., 2018).", "Sachan and Neubig (2018) compare different sharing methods and find that different sharing methods have a great impact on performance.", "Recently, Zhang et al. (2021) analyze when and where language specific capacity matters.", "Li et al. (2020) use a binary conditional latent variable to decide which language each layer belongs to.", "Model Pruning Our approach follows the standard pattern of model pruning: training, finding the sparse network, and fine-tuning (Frankle and Carbin, 2019; Liu et al., 2019).", "Frankle and Carbin (2019) and Liu et al. (2019) highlight the importance of the sparse network architecture.", "Zhu and Gupta (2018) propose a method to automatically adjust the sparsity threshold.", "Sun et al. (2020) learn different sparse architectures for different tasks.", "Evci et al. (2020) iteratively redistribute the sparse network architecture by the gradient.",
"We describe the LaSS method in this section.", "The goal is to learn a single unified model for many translation directions.", "Our overall idea is to find sub-networks corresponding to each language pair, and then only update the parameters of those sub-networks during joint training.", "A multilingual NMT model learns a mapping function f from a sentence in one of many languages to another language.", "We adopt the multilingual Transformer (mTransformer) as the backbone network (Johnson et al., 2017).", "mTransformer has the same encoder-decoder architecture, with layers of multi-head attention, residual connections, and layer normalization.", "In addition, it has two language-identifying tokens for the source and the target.", "Define a multilingual dataset {D_{s_i→t_i}}_{i=1}^{N}, where s_i and t_i represent the source and target language.", "We train an initial multilingual MT model with the standard cross-entropy loss: L(θ) = Σ_{i=1}^{N} E_{(x,y)∼D_{s_i→t_i}} [−log P_θ(y|x)]. (1)", "Training a single model jointly on multiple language directions will lead to performance degradation for rich-resource pairs (Johnson et al., 2017).", "The single model will improve on low-resource language pairs, but will lose performance on pairs like English-German.", "Intuitively, jointly training on all translation pairs will produce an average model.", "For rich-resource pairs, such averaging may hurt performance, since a multilingual MT model must distribute its modeling capacity across all translation directions.", "Based on this intuition, our idea is to find a sub-network of the original multilingual model.", "Such a sub-network is specific to each language pair.", "We start from a multilingual base model θ_0.", "The model θ_0 is trained with Eq. (1).", "A sub-network is indicated by a binary mask vector M_{s_i→t_i} ∈ {0, 1}^{|θ|} for language pair s_i→t_i.", "Each element being 1 indicates retaining the weight, and 0 abandoning the weight.", "Then the parameters associated with s_i→t_i are θ_{s_i→t_i} = {θ_j ∈ θ_0 | M_{s_i→t_i}^j = 1}, where θ_j denotes the j-th element of θ_0.", "The parameters θ_{s_i→t_i} are only responsible for the particular languages s_i and t_i.", "We intend to find such language specific sub-networks.", "Figure 1 illustrates the original model and its language specific sub-networks.", "Given an initial model θ_0, we adopt a simple method to find the language specific mask for each language pair: 1. Start with a multilingual MT model θ_0 jointly trained on {D_{s_i→t_i}}_{i=1}^{N}.", "2. For each language pair s_i→t_i, fine-tune θ_0 on D_{s_i→t_i}.", "Intuitively, fine-tuning θ_0 on a specific language pair s_i→t_i will amplify the magnitude of the weights that are important for s_i→t_i and diminish the magnitude of the unimportant weights.", "3. Rank the weights of the fine-tuned model and prune the lowest α percent.", "The mask M_{s_i→t_i} is obtained by setting the indices of the remaining parameters to 1.",
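A compact sketch of this mask-finding step, together with the masked update used in the structure-aware joint training described next, is shown below; global (rather than per-tensor) magnitude ranking is our assumption, and `finetune` is a stand-in for ordinary training on the pair-specific data.

```python
# Sketch of mask finding (steps 2-3): fine-tune a copy of theta_0 on one
# language pair, then keep the top (1 - alpha) fraction of weights by
# magnitude. Global magnitude ranking is an assumption.
import copy
import torch

def find_mask(theta_0, pair_data, finetune, alpha=0.3):
    model = copy.deepcopy(theta_0)
    finetune(model, pair_data)  # ordinary training on D_{s_i -> t_i}
    flat = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    k = max(1, int(alpha * flat.numel()))
    threshold = flat.kthvalue(k).values  # magnitude at the alpha-quantile
    return {name: (p.detach().abs() > threshold).float()
            for name, p in model.named_parameters()}

def masked_step(model, mask, optimizer):
    """Structure-aware update: zero gradients outside the sub-network so
    only the masked-in parameters of theta_0 are updated. With plain SGD
    this exactly freezes pruned weights; adaptive optimizers may need care."""
    for name, p in model.named_parameters():
        if p.grad is not None:
            p.grad.mul_(mask[name])
    optimizer.step()
    optimizer.zero_grad()
```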
"Structure-aware Joint Training Once we get the masks M_{s_i→t_i} for all language pairs, we further continue to train θ_0 with language-grouped batching and structure-aware updating.", "First, we create random batches of bilingual sentence pairs where each batch contains only samples from one pair.", "This is different from plain joint multilingual training, where each batch can contain fully random sentence pairs from all languages.", "Specifically, a batch B_{s_i→t_i} is randomly drawn from the language-specific data D_{s_i→t_i}.", "Second, we evaluate the loss in Eq. (1) on the batch B_{s_i→t_i}.", "During the back-propagation step, we only update the parameters in θ_0 belonging to the sub-network indicated by M_{s_i→t_i}.", "We iteratively update the parameters until convergence.", "In this way, we still get a single final model that is able to translate all language directions.", "During inference, this model and its masks M_{s_i→t_i}, i = 1, ..., N, are used together to make predictions.", "For every given input sentence in language s and a target language t, the forward inference step only uses the parameters θ ⊙ M_{s→t} to calculate the model output.", "Datasets and Evaluation The experiments are conducted on the IWSLT and WMT benchmarks.", "For IWSLT, we collect 8 English-centric language pairs from IWSLT2014, whose sizes range from 89k to 169k.", "To simulate the scenario of imbalanced datasets, we collect 18 language pairs ranging from low-resource (Gu, 11k) to rich-resource (Fr, 37M) from previous years' WMT.", "The details of the datasets are listed in the Appendix.", "We apply byte pair encoding (BPE) (Sennrich et al., 2016) to preprocess the multilingual sentences, resulting in a vocabulary size of 30k for IWSLT and 64k for WMT.", "Besides, we apply over-sampling for IWSLT and WMT to balance the training data distribution, with a temperature of T = 2 and T = 5, respectively.",
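Temperature-based over-sampling is not spelled out in this excerpt, so the sketch below uses the common convention where pair i is sampled with probability proportional to its data share raised to 1/T; treat the exact formula as an assumption.

```python
# Sketch of temperature-based over-sampling: sample language pair i with
# probability p_i proportional to (n_i / sum_j n_j) ** (1 / T). T = 1 keeps
# the raw proportions; larger T flattens toward uniform, up-sampling
# low-resource pairs. The exact convention is an assumption.
def sampling_probs(sizes, T):
    total = sum(sizes)
    weights = [(n / total) ** (1.0 / T) for n in sizes]
    z = sum(weights)
    return [w / z for w in weights]

sizes = [11_000, 1_000_000, 37_000_000]  # e.g., Gu-, medium-, and Fr-scale pairs
print(sampling_probs(sizes, T=1))         # proportional to raw data size
print(sampling_probs(sizes, T=5))         # much flatter distribution
```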
"Similar to Lin et al. (2020), we divide the language pairs into 3 categories: low-resource (< 1M), medium-resource (1M to 10M), and rich-resource (> 10M).", "We perform many-to-many multilingual translation throughout this paper, and add special language tokens at both the source and the target side.", "In all our experiments, we evaluate our model on commonly used standard test sets.", "For zero-shot, where standard test sets for some language pairs (for example, Fr→Zh) are not available, we use the OPUS-100 (Zhang et al., 2020) test sets instead.", "We report tokenized BLEU, as well as the win ratio (WR), indicating the proportion of language pairs on which we outperform the baseline.", "In zero-shot translation, we also report translation-language accuracy (computed with https://github.com/Mimino666/langdetect ), which is commonly used to measure the accuracy of translating into the right target language.",
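The win ratio is defined only informally above, so the following one-liner makes the computation explicit; treating ties as non-wins is our assumption.

```python
# Sketch: win ratio (WR) = percentage of language pairs where our BLEU beats
# the baseline BLEU. Counting ties as non-wins is an assumption.
def win_ratio(ours, baseline):
    """ours, baseline: dicts mapping language pair -> BLEU."""
    wins = sum(ours[p] > baseline[p] for p in ours)
    return 100.0 * wins / len(ours)

print(win_ratio({"en-de": 29.8, "en-fr": 35.0},
                {"en-de": 28.1, "en-fr": 35.2}))  # -> 50.0
```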
"Model Settings Considering the diversity of dataset volumes, we perform our experiments with variants of the Transformer architecture.", "For IWSLT, we adopt a smaller Transformer (Transformer-small, i.e., Transformer-base with d_ff = 1024 and n_head = 4) (Wu et al., 2019).", "For WMT, we adopt Transformer-base and Transformer-big (for details of the Transformer settings, please refer to Vaswani et al. (2017)).", "The pruning rate α is 0.7 for IWSLT and 0.3 for WMT.", "For simplicity, we only report the highest BLEU from the best pruning rate; we discuss the impact of different pruning rates on performance in Sec. 6.", "For more training details, please refer to the Appendix.", "This section shows the efficacy and generalization of LaSS.", "Firstly, we show that LaSS obtains consistent performance gains on the IWSLT and WMT datasets with different Transformer architecture variants.", "Further, we show that LaSS can easily generalize to new language pairs without losing accuracy on previous language pairs.", "Finally, we observe that LaSS can even improve zero-shot translation, obtaining performance gains of up to 26.5 BLEU.", "Results on IWSLT We first show our results on IWSLT.", "As shown in Table 1, LaSS consistently outperforms the multilingual baseline on all language pairs, confirming that using LaSS to alleviate parameter interference helps boost performance.",
"Table 1: Results on the IWSLT dataset (BLEU).
Lang     | Fa   | Pl   | Ar   | He   | Nl   | De   | It   | Es
Size     | 89K  | 128K | 140K | 144K | 153K | 160K | 167K | 169K
Baseline | 16.9 | 16.4 | 20.9 | 29.0 | 30.9 | 28.1 | 29.2 | 35.2
LaSS     | 17.9 | 17.0 | 22.9 | 30.9 | 33.0 | 29.8 | 30.9 | 37.3
Delta    | +1.0 | +0.6 | +2.0 | +1.9 | +2.1 | +1.7 | +1.7 | +2.1",
"Results on WMT To further verify the generalization of LaSS, we also conduct experiments on WMT, where the dataset is more imbalanced across language pairs.", "We adopt two different Transformer architecture variants, i.e., Transformer-base and Transformer-big.", "As shown in Table 2, LaSS obtains consistent gains over the multilingual baseline on WMT for both Transformer-base and Transformer-big.",
"Table 2: Average BLEU and win ratio (WR) on the WMT dataset for low (< 1M), medium (1M to 10M), and rich (> 10M) resource groups; Baseline rows give absolute BLEU, while Random and LaSS rows give BLEU deltas over the baseline.
Arch             | Model    | Low BLEU / WR | Medium BLEU / WR | Rich BLEU / WR | All BLEU / WR
Transformer-base | Baseline | 16.7 / -      | 18.8 / -         | 25.3 / -       | 20.4 / -
Transformer-base | Random   | -2.2 / 0.0    | -2.3 / 0.0       | -2.6 / 0.0     | -2.4 / 0.0
Transformer-base | LaSS     | +0.7 / 80.0   | +1.3 / 85.7      | +1.7 / 100.0   | +1.2 / 88.9
Transformer-big  | Baseline | 18.8 / -      | 22.2 / -         | 29.0 / -       | 23.5 / -
Transformer-big  | Random   | -1.3 / 0.0    | -1.8 / 0.0       | -1.5 / 0.0     | -1.6 / 0.0
Transformer-big  | LaSS     | +0.1 / 50.0   | +0.7 / 92.9      | +0.8 / 100.0   | +0.6 / 83.3",
"For Transformer-base, LaSS achieves an average improvement of 1.2 BLEU on 36 language pairs over the baseline, while for Transformer-big, LaSS obtains a 0.6 BLEU improvement.", "We observe that as the dataset scale of a language pair increases, the improvements in BLEU and WR become larger, suggesting that language pairs with large-scale datasets benefit more from LaSS than low-resource language pairs.", "This phenomenon is intuitive, since rich-resource datasets suffer more parameter interference than low-resource datasets.", "We also find that the BLEU and WR gains obtained with Transformer-base are larger than those with Transformer-big.", "We attribute this to the more severe parameter interference in smaller models.", "For comparison, we also include the results of LaSS with randomly initialized masks.", "Not surprisingly, Random underperforms the baseline by a large margin, since Random intensifies rather than alleviates parameter interference.",
consistently outperforms the multilingual baseline model along with the training steps.", "LaSS reaches the bilingual model performance with fewer steps.", "2) Besides, the degradation of other language pairs is much smoother than the baseline.", "When reaching the bilingual baseline performance, LaSS hardly drops on other language pairs, while the multilingual baseline model dramatically drops by a large margin.", "We attribute the easy adaptation for specific languages to the language specific sub-network.", "LaSS only updates the corresponding parameters, avoiding updating all parameters which will hurt the performance of other languages.", "Another benefit of updating corresponding parameters is its fast adaptation towards specific language pairs.", "Zero-shot translation is the translation between known languages that the model has never seen", "together at training time (e.g., Fr En and En Zh are both seen in training phase, while Fr Zh is not.).", "It is the ultimate goal of Multilingual NMT and has been a common indicator to measure the model capability (Johnson et al., 2017; Zhang et al., 2020).", "One of the biggest challenges is the off-target issue (Zhang et al., 2020), which means that the model translates into a wrong target language.", "In previous experiments, we apply specific masks to their corresponding language pairs.", "As the training dataset is English-centric, non-English-centric masks are not available.", "We remedy it by merging two masks to create non-English-centric masks.", "For example, We create X Y mask by combining the encoder mask of X En and the es it nldepl ar fahe e s i t n l d e p l a r f a h e 47 48 49 50 51", "As shown in Table 3, surprisingly, by directly applying X Y masks, LaSS obtains consistent gains over baselines in all language pairs for both BLEU and translation-language accuracy, indicating that the superiority of LaSS in learning to bridge between languages.", "It is worth noting that for Fr Zh, LaSS outperforms the baseline by 26.5 BLEU, reaching 32 BLEU.", "We also sample a few translation examples from Fr Zh to analyze why LaSS can help boost zero-shot (More examples are listed in Appendix).", "As shown in Table 4 as well as translation-language accuracy in Table 3, we observe that the multilingual baseline has severe off-target issue.", "As a counterpart, LaSS significantly alleviates the off-target issue, translating into the right target language.", "We attribute the success of on-target in zero-shot to the language specific parameters as a strong signal, apart from language indicator, to the model to translate into the target language.", "In this section, we conduct a set of analytic experiments to better understand the characteristics of language specific sub-network.", "We first measure the relationship between language specific subnetwork as well as its capacity and language family.", "Secondly, we study how masks affect performance in zero-shot scenario.", "Lastly, we discuss the relationship between pruning rate and performance.", "We conduct our analytic experiments on IWSLT dataset.", "For readers not familiar with language family and clustering, Figure 4 is the hierarchical clustering according to language family.", "Ideally, similar languages should share more parameters since they share more language characteristics.", "Therefore, a natural question arises: Does the model automatically capture the relationship of language family defined by human?", "We calculate the similarity of masks between language pairs to measure the 
sub-network relationship between language pairs.", "We define mask similarity as the number of 1 where two masks share divided by the number of 1 of the first mask: Sim ( M 1 , M 2 ) = (cid:107) M 1 M 2 (cid:107) 0 (cid:107) M 1 (cid:107) 0 , (2) where (cid:107)(cid:107) 0 represent L 0 norm.", "Mask similarity re-flects the degree of sharing among different language pairs.", "Figure", "3(a) and", "3(b) shows the mask similarity in En X and X En.", "We observe that, for both En X and X En, the mask similarity is positively correlated to the language family similarity.", "The color of grids in Figure is deeper between similar languages (for example, es and it) while more shallow between dissimilar languages (for example, es and he).", "We also plot the similarity between En X and X En in Figure", "3(c) .", "We observe that, unlike En X or X En, the mask similarity does not correspond to language family similarity.", "We suspect that the mask similarity is determined by combination of source and target languages.", "That means that En Nl does not necessarily share more parameters with Nl En than En De.", "To take a step further, we study how model schedule language specific capacity across layers.", "Figure 5 Target Languages Fr Cs De Es Ru Zh BLEU ACC BLEU ACC BLEU ACC BLEU ACC BLEU ACC BLEU ACCS ou r ce L a ngu a g e s Fr baseline -2.0 1.7 2.9 3.1 6.4 15.1 1.5 4.4 5.5 4.9 LaSS -5.4 32.6 7.5 35.9 23.0 77.7 4.6 24.7 32.0 31.3 -+3.4 +30.9 +4.6 +32.8 +16.6 +62.6 +3.1 20.3 +26.5 +26.4 Cs baseline 3.9 7.0 -2.6 2.1 5.6 13.9 2.5 9.6 0.9 0.9 LaSS 15.3 61.1 -7.7 37.2 18.5 74.2 6.6 34.5 13.5 35.3 +11.4 +54.1 -+5.1 +35.1 +12.9 +60.3 +4.1 +24.9 +12.6 +34.4 De baseline 6.3 18.8 2.6 5.7 -5.6 14.0 2.2 8.6 5.7 19.6 LaSS 17.9 70.3 7.4 40.5 -19.4 75.1 6.1 33.2 16.1 41.6 +11.6 +51.5 +4.8 +34.8 -+13.8 +61.1 +3.9 +24.6 +10.4 +22.0 Es baseline 7.4 17.5 2.0 1.6 2.6 1.9 -1.4 3.7 3.6 9.2 LaSS 20.8 66.3 4.9 25.7 6.7 30.3 -4.5 22.2 15.2 42.8 +13.4 +48.8 +2.9 +24.1 +4.1 +28.4 -+3.1 +18.5 +11.6 +33.6 Ru baseline 5.6 19.9 2.4 8.1 2.0 2.4 6.3 20.6 -10.5 13.4 LaSS 16.2 69.0 8.0 47.7 5.9 32.0 18.8 75.5 -30.0 33.1 +10.6 +49.1 +5.6 +39.6 +3.9 +29.6 +12.5 +54.9 -+19.5 +19.7 Zh baseline 5.6 4.0 0.3 1.0 1.1 1.6 0.8 2.1 4.8 5.6 -LaSS 18 53.2 1.7 22.9 1.2 7.1 3.8 28.0 7.2 27.6 - +12.4 +49.2 +1.4 +21.9 +0.1 +5.5 +3.0 +25.9 +2.4 +22.0 -Table 3: BLEU score and Translation-language Accuracy (ACC, in percentage) of zero-shot translation for multilingual baseline and LaSS.", "shows the similarity of different components on the encoder and decoder side along with the increase of layer.", "More concretely, we plot query, key, value on the attention sub-layer and fully-connected layer on the positional-wise feed-forward sub-layer.", "We observe that", "a) On both the encoder and decoder side, the model tends to distribute more language specific components on the top and bottom layers rather than the middle ones.", "This phenomenon is intuitive.", "The bottom layers deal more with embedding, which is language specific, while the top layers are near the output layer, which is also language specific.", "b) For fully-connected layer, the model tends to distribute more language specific capacity on the middle layers for the encoder, while distribute more language specific capacity in the decoder for the top layers.", "In Sec.4, we show that simply applying X Y masks can boost zero-shot performance.", "We conduct experiments to analyze how masks affect zero-performance.", "Concretely, we take Fr Zh as an example, replacing the encoder or 
"As shown in Table 5, we observe that replacing the encoder mask with that of another language causes only a small performance drop, while replacing the decoder mask causes a dramatic performance drop.", "This suggests that the decoder mask is the key ingredient of the performance improvement.", "To better understand the pruning rate, we plot the performance against an increasing pruning rate α in Figure 6.", "For WMT, the best choice for α is 0.3 for both Transformer-base and Transformer-big, while for IWSLT the best α lies between 0.6 and 0.7.", "The results are consistent with our intuition that large-scale training data needs a smaller pruning rate to keep the model capacity.", "Therefore, we suggest tuning α based on both the dataset and the model size.", "For large datasets such as WMT, setting a smaller α is better, while a larger α will slightly decrease the performance (i.e., by less than 0.5 BLEU).", "For small datasets like IWSLT, setting a larger α may yield better performance.", "In this paper, we propose to learn Language-Specific Sub-networks (LaSS) for multilingual NMT.", "Extensive experiments on IWSLT and WMT have shown that LaSS is able to alleviate parameter interference and boost performance.", "Further, LaSS can generalize well to new language pairs by training for only a few hundred steps, while keeping the performance on existing language pairs.", "Surprisingly, in zero-shot translation, LaSS surpasses the multilingual baseline by up to 26.5 BLEU.", "Extensive analytic experiments were conducted to understand the characteristics of language-specific sub-networks.", "Future work includes designing a more dedicated end-to-end training strategy and incorporating the insights gained from our analysis to design a further improved LaSS." ]
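To make the mask-similarity measure in Eq. (2) above concrete, here is a minimal sketch; it assumes each language-pair sub-network is stored as a flat boolean array, and all names are illustrative rather than taken from any released code.

```python
import numpy as np

def mask_similarity(m1: np.ndarray, m2: np.ndarray) -> float:
    """Sim(M1, M2) = ||M1 AND M2||_0 / ||M1||_0: the fraction of the
    parameters kept by M1 that are also kept by M2 (asymmetric in M1, M2)."""
    shared = np.logical_and(m1, m2).sum()    # ||M1 AND M2||_0
    return float(shared) / float(m1.sum())   # divided by ||M1||_0

# Toy example: two 10-parameter masks that overlap in 3 positions.
m_a = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)
m_b = np.array([0, 1, 1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
print(mask_similarity(m_a, m_b))  # 3 shared / 4 kept by m_a = 0.75
```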
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result" ]
[ "Analyzing language in context, both from a theoretical and from a computational perspective, is receiving increased interest.", "Complementing the research in linguistics on discourse and information structure, in computational linguistics identifying discourse concepts was also shown to improve the performance of certain applications, for example, Short Answer Assessment systems (Ziai and Meurers, 2014).", "Building on the research that established detailed annotation guidelines for manual annotation of information structural concepts for written (Dipper et al., 2007; Ziai and Meurers, 2014) and spoken language data (Calhoun et al., 2010), this paper presents the first approach automating the analysis of focus in authentic written data.", "Our classification approach combines a range of lexical, syntactic, and semantic features to achieve an accuracy of 78.1% for identifying focus.", "The interpretation of language is well known to depend on context.", "Both in theoretical and computational linguistics, discourse and information structure of sentences are thus receiving increased interest: attention has shifted from the analysis of isolated sentences to the question how sentences are structured in discourse and how information is packaged in sentences analyzed in context.", "As a consequence, a rich landscape of approaches to discourse and information structure has been developed (Kruijff-Korbayova and Steedman, 2003).", "Among these perspectives, the Focus-Background dichotomy provides a particularly valuable structuring of the information in a sentence in relation to the discourse.", "(1) is an example question-answer pair from Krifka and Musan (2012, p. 4) where the focus in the answer is marked by brackets.", "In the answer in (1), the NP the pictures is focussed and hence indicates that there are alternative things that John could show Mary.", "It is commonly assumed that focus here typically indicates the presence of alternative denotations (denotation focus, Krifka and Musan 2012, p.8), making it a semantic notion.", "Depending on the language, different devices are used to mark focus, such as prosodic focus marking or different syntactic constructions (e.g. clefts).", "In this paper, we adopt a notion of focus based on alternatives, as advanced by Rooth (1992) and more recently, Krifka and Musan (2012), who define focus as indicating the presence of alternatives that are relevant for the interpretation of linguistic expressions (Krifka and Musan, 2012, p. 
7).", "Formal semantics has tied the notion of alternatives to an explicit relationship between questions and answers called Question-Answer Congruence (Stechow, 1991), where the idea is that an answer is congruent to a question if both evoke the same set of alternatives.", "Questions can thus be seen as a way of making alternatives explicit in the discourse, an idea also taken up by the Question-Under-Discussion (QUD) approach (Roberts, 2012) to discourse organization.", "Complementing the theoretical linguistic approaches, in the last decade corpus-based approaches started exploring which information structural notions can reliably be annotated in what kind of language data.", "While the information status (Given-New) dimension can be annotated successfully (Riester et al., 2010; Nissim et al., 2004) and even automated (Hempelmann et al., 2005; Nissim, 2006; Cahill and Riester, 2012), the inter-annotator agreement results for Focus-Background (Ritz et al., 2008; Calhoun et al., 2010) show that it is difficult to obtain high levels of agreement, especially due to disagreement 117 about the extent or size of the focused unit.", "More recently, Ziai and Meurers (2014) showed that for data collected in task contexts including explicit questions, such as answers to reading comprehension questions, reliable focus annotation is possible.", "In addition, an option for externally validating focus annotation was established by showing that such focus annotation improves the performance of Short Answer Assessment (SAA) systems.", "Focus enables the system to zoom in on the part of the answer addressing the question instead of considering all parts of the answer as equal.", "In this paper, we want to build on this strand of research and develop an approach for automatically identifying focus in authentic data including explicit question contexts.", "In contrast to Calhoun (2007) and Sridhar et al. (2008), who make use of prosodic properties to tackle the identification of focus for content words in spoken language data, we target the analysis of written texts.", "We start in section 2 by discussing relevant related work before introducing the gold standard focus annotation we are using as foundation of our work in section 3. 
"Section 4 then presents the different types of features used for predicting which tokens form a part of the focus.", "In section 5 we employ a supervised machine learning setup to evaluate the perspective and specific features in terms of their ability to predict the gold standard focus labeling.", "Building on these intermediate results and the analysis thereof in section 6, in section 7 we then present two additional feature groups which lead to our final focus detection model.", "Finally, section 8 explores options for extrinsically showing the value of the automatic focus annotation for the automatic meaning assessment of short answers.", "It confirms that focus analysis pays off when aiming to generalize assessment to previously unseen data and contexts.", "There is only a very small number of approaches dealing with automatically labeling information-structural concepts.", "Most approaches related to detecting focus automatically center almost exclusively on detecting the 'kontrast' notion in the English Switchboard corpus (Calhoun et al., 2010).", "We therefore focus on the Switchboard-based approaches.", "(Footnote 1: For a broader perspective on computational approaches in connection with information structure, see Stede (2012).)", "The availability of the annotated Switchboard corpus (Calhoun et al., 2005, 2010) sparked interest in information-structural categories and enabled several researchers to publish studies on detecting focus.", "This is especially true for the Speech Processing community, and indeed many approaches described below are intended to improve computational speech applications in some way, by detecting prominence through a combination of various linguistic factors.", "Moreover, with the exception of Badino and Clark (2008), all approaches use prosodic or acoustic features.", "All approaches listed below tackle the task of detecting 'kontrast' (as focus is called in the Switchboard annotation) automatically on various subsets of the corpus using different features and classification approaches.", "For each approach, we therefore report the features and classifier used, the data set size as reported by the authors, the (often very high) majority baseline for a binary distinction between 'kontrast' and background, and the best accuracy obtained.", "If available in the original description of the approach, we also report the accuracy obtained without acoustic and prosodic features.", "Calhoun (2007) investigated how focus can be predicted through what she calls prominence structure.", "The essential claim is that a focus is more likely if a word is more prominent than expected given its syntactic, semantic and discourse properties.", "The classification experiment is based on 9,289 words with a 60% majority baseline for the 'background' class.", "Calhoun (2007) reports 77.7% for a combination of prosodic, syntactic and semantic features in a logistic regression model.", "Without the prosodic and acoustic features, the accuracy obtained is 74.8%.", "There is no information on a separation between training and test set, likely due to the setup of the study being geared towards determining relevant factors in predicting focus, not building a focus prediction model for a real application case.", "Relatedly, the approach uses only gold-standard annotation already available in the corpus as the basis for features, not automatic annotation.",
"Sridhar et al. (2008) use lexical, acoustic and part-of-speech features in trying to detect pitch accent, givenness and focus.", "Concerning focus, the work attempts to extend Calhoun (2007)'s analysis to understand what prosodic and acoustic differences exist between the focus classes and background items in conversational speech.", "14,555 words of the Switchboard corpus are used in total, but filtered for evaluation later to balance the skewed distribution between 'kontrast' and 'background'.", "With the thus obtained random baseline of 50%, Sridhar et al. (2008) obtain 73% accuracy when using all features, which drops only slightly to 72.95% when using only parts of speech.", "They use a decision tree classifier to combine the features in 10-fold cross-validation for training and testing.", "Badino and Clark (2008) aim to model contrast both for its role in analyzing discourse and information structure, and for its potential in speech applications.", "They use a combination of lexical, syntactic and semantic features in an SVM classifier.", "No acoustic or prosodic features are employed in the model.", "In selecting the training and testing data, they filter out many 'kontrast' instances, such as those triggered across sentence boundaries, those above the word level, and those not sharing the same broad part of speech with the trigger word.", "The resulting data set has 8,602 instances, of which 96.8% are 'background'.", "Badino and Clark (2008) experiment with different kernel settings for the SVM and obtain their best result of 97.19% using a second-order polynomial kernel and leave-one-out testing.", "In contrast to all approaches above, we target the analysis of written texts, for which prosodic and acoustic information is not available, so we must rely on lexis, syntax and semantics exclusively.", "Also, the vast majority of the approaches discussed make direct use of the manually annotated information in the corpus they use in order to derive their features.", "While this is a viable approach when the aim is to determine the relevant factors for focus detection, it does not represent a real-life case, where annotated data is often unavailable.", "In our focus detection model, we only use automatically determined annotation as the basis for our features for predicting focus.", "Since our approach also makes use of question properties, it is also worth mentioning that there are a number of approaches on Answer Typing as a step in Question Answering (QA) in order to constrain the search space of possible candidate answers and improve accuracy.", "While earlier approaches such as Li and Roth (2002) used a fixed set of answer types for classifying factoid questions, other approaches such as Pinchak and Lin (2006) avoid assigning pre-determined classes to questions and instead favor a more data-driven label set.",
"In more recent work, Lally et al. (2012) use a sophisticated combination of deep parsing, lexical clues and broader question labels to analyze questions.", "The present work is based on the German CREG corpus (Ott et al., 2012).", "CREG contains responses by American learners of German to comprehension questions on reading texts.", "Each response is rated by two teaching assistants with regard to whether it answers the question or not.", "While many responses contain ungrammatical language, the explicit questions in CREG generally make it possible to interpret responses.", "More importantly for our work, they can be seen as Questions Under Discussion and thus form an ideal foundation for focus annotation in authentic data.", "As a reference point for the automatic detection of focus, we used the CREG-ExpertFocus data set (De Kuthy et al., 2016) containing 3,187 student answers and 990 target answers (26,980 words in total).", "It was created using the incremental annotation scheme described in Ziai and Meurers (2014), where annotators first look at the surface question form, then determine the set of alternatives, and finally mark instances of the alternative set in answers.", "De Kuthy et al. (2016) report substantial agreement in CREG-ExpertFocus (0.7) and provide an adjudicated gold standard, which thus presents a high-quality basis for training our focus detection classifier.", "As described in section 3 above, focus was marked in a span-based way in the data set used: each instance of focus starts at a specific word and ends at another word.", "Since in principle any part of speech can be focused, we cannot constrain ourselves to a pre-defined set of markables for automatic classification.", "We therefore conceptualized the task of automatic focus detection on a per-word level: for each word in an answer, as identified by the OpenNLP tokenizer and sentence segmenter (http://opennlp.apache.org), the classifier needs to decide whether it is an instance of focus or background.", "Besides the choice of classification algorithm, the crucial question naturally is the choice of linguistic features, which we turn to next.", "Various types of linguistic information on different linguistic levels can in principle be relevant for focus identification, from morphology to semantics.", "We start by exploring five groups of features, which are outlined below.", "In section 7, we discuss two more groups designed to address specific problems observed with the initial model.", "Syntactic answer properties (SynAns): A word's part-of-speech and syntactic function are relevant general indicators with respect to focus: since we are dealing with meaning alternatives, the meaning of, e.g., a noun is more likely to denote an alternative than a grammatical function word such as a complementizer or article.", "Similarly, a word in an argument dependency relation is potentially a stronger indicator for a focused alternative in a sentence than a word in an adjunct relation.", "We therefore included two features: the word's part-of-speech tag in the STTS tag set (Schiller et al., 1995) determined using TreeTagger (Schmid, 1994), and the dependency relation to the word's head in the Hamburg dependency scheme (Foth et al., 2014, p. 2327) determined using MaltParser (Nivre et al., 2007) as features in our model.",
"Question properties: The question constitutes the direct context for the answer and dictates its information structure and the information requirements to fulfill.", "In particular, the type of wh-phrase (if present) of a question is a useful indicator of the type of required information: a who-question, such as 'Who rang the doorbell?', will typically be answered with a noun phrase, such as 'the milkman'.", "We identified surface question forms such as who, what, how, etc. using a regular expression approach developed by Rudzewitz (2015) and included them as features.", "Related to question forms, we also extracted the question word's dependency relation to its head, analogous to the answer feature described above.", "Surface givenness: As a rough and robust approximation to information status, we add a boolean feature indicating the presence of the current word in the question.", "We use the lemmatized form of the word as determined by TreeTagger (Schmid, 1994).", "Positional properties: Where a word occurs in the answer or the question can be relevant for its information-structural status.", "It has been observed since Halliday (1967) that given material tends to occur earlier in sentences (here: answers), while new or focused content tends to occur later.", "We encode this observation in three different features: the position of the word in the answer (normalized by sentence length), the distance from the finite verb (in words), and the position of the word in the question (if it is given).", "Conjunction features: To explicitly tie answer properties to question properties, we explored different combinations of the features described above.", "Specifically, we encoded the current word's POS depending on the question form, and the current word's POS depending on the wh-word's POS.", "To constrain the feature space and get rid of unnecessary distinctions, we converted the answer word's POS to a coarse-grained version before computing these features, which collapses all variants of determiners, pronouns, adjectives/adverbs, prepositions, nouns and verbs into one label, respectively.", "5 Intrinsic Evaluation 5.1 Setup: To employ the features described above in an actual classifier, we trained a logistic regression model using the WEKA toolkit (Hall et al., 2009).", "We also experimented with other classification algorithms such as SVMs, but found that they did not offer superior performance for this task.", "The data set used consists of all expert focus annotation available (3,187 student answers, see section 3), with the exception of the answers occurring in the extrinsic evaluation test set we use in section 8, which leaves a total of 2,240 student answers with corresponding target answers and questions.", "We used 10-fold cross-validation on this data set to experiment and select the optimal model for focus detection.", "(Footnote 3: For a list (in German) of the full tag set, see http://www.ims.uni-stuttgart.de/forschung/ressourcen/lexika/TagSets/stts-table.html)",
"5.2 Results: Table 1 lists the accuracies obtained for our different feature groups, as well as three baselines: a POS baseline, following Sridhar et al. (2008), a baseline that only includes the simple givenness feature, and the majority baseline.", "(Footnote 4: We show per-class and overall accuracies; the former is also known as recall or true positive rate.)", "The majority class is focus, occurring in 58.1% of the 26,980 cases (individual words).", "We can see that each feature group incrementally adds to the final model's performance, with particularly noticeable boosts coming from the givenness and positional features.", "Another clear observation is that the classifier is much better at detecting focus than background, possibly also due to the skewness of the data set.", "Note that performance on background also increases with the addition of the 'Question' feature set, indicating the close relation between the set of alternatives introduced by the question and the focus selecting from that set, even though our approximation to computationally determining alternatives in questions is basic.", "It is also clear that the information intrinsic to the answers, as encoded in the 'SynAns' and 'Position' feature sets, already provides significant performance benefits, suggesting that a classifier trained only on these features could be trained and applied to settings where no explicit questions are available.", "In order to help explain the gap between automatic and manual focus annotation, let us take a step back from quantitative evaluation and examine a few characteristic examples in more detail.", "Figure 1 shows a case where a why-question is answered with an embedded 'weil' (because) clause.", "The classifier successfully marked 'weil' and the end of the clause as focus, but left out the pronoun 'es' (it) in the middle, presumably because pronouns are given and often not focused in other answers.", "We did experiment with using a sequence classification approach in order to remedy such problems, but it performed worse overall than the logistic regression model we presented in section 4.",
"We therefore suggest that in such cases, a global constraint stating that why-questions are typically answered with a full clause would be a more promising approach, combining knowledge learned bottom-up from data with top-down linguistic insight.", "In Figure 2, we can see two different problems.", "One is again a faulty gap, namely the omission of the conjunction 'und' (and).", "The other is the focus marking of the word 'AG' (corporation) at the beginning of the sentence: since the question asks for an enumeration of the institutions that form a corporation, marking 'AG' as focused is erroneous.", "This problem likely occurs often with nouns because the classifier has learned that content words are often focused.", "Moreover, the surface givenness feature does not encode that 'AG' is in fact an abbreviation of 'Aktiengesellschaft' and therefore given.", "It would thus be beneficial to extend our analysis of givenness beyond surface identity, a direction we explore in the next section.", "Finally, Figure 3 presents a case where an enumeration is marked correctly, including the conjunctive punctuation in between, showing that cases of longer foci are indeed within reach for a word-by-word focus classifier.", "Based on our analysis of problematic cases outlined in the previous section, we explored two different avenues for improving our focus detection model, which we describe below.", "We have seen in section 5.2 that surface-based givenness is helpful in predicting focus.", "However, it clearly has limitations, as, for example, synonymy cannot be captured on the surface.", "We also exemplified one such limitation in Figure 2. In order to overcome these limitations, we implemented an approach based on distributional semantics.", "This avenue is motivated by the fact that Ziai et al. (2016) have shown givenness modeled as distributional similarity to be helpful for SAA at least in some cases.", "[Figure 1 example question: 'Warum sollte man Dresden besuchen?' ('Why should one visit Dresden?')]", "We used the word vector model they derived from the DeWAC corpus (Baroni et al., 2009) using word2vec's continuous bag-of-words training algorithm with hierarchical softmax (Mikolov et al., 2013).", "The model has a vocabulary of 1,825,306 words and uses 400 dimensions for each.", "Having equipped ourselves with a word vector model, the question arises how to use it in focus detection in such a way that it complements the positive impact that surface-based givenness already demonstrates.", "Rather than using an empirically determined (and hence data-dependent) threshold for determining givenness as done by Ziai et al. (2016), we here use raw cosine similarities as features and let the classifier assign appropriate weights to them during training.",
"(Footnote 5: We normalize cosine similarity as cosine distance to obtain positive values between 0 and 2: dist = 1 − sim.)", "Concretely, we calculate the maximum, minimum and average cosine between the answer word and the question words.", "As a fourth feature, we calculate the cosine between the answer word and the additive question word vector, which is the sum of the individual question word vectors.", "Another source of evidence we wanted to exploit is constituency-based syntactic annotation.", "So far, we have worked with part-of-speech tags and dependency relations as far as syntactic representation is concerned.", "However, while discontinuous focus is possible, focus as operationalized in the scheme by Ziai and Meurers (2014) most often marks an adjacent group of words, a tendency that our word-based classifier did not always follow, as exemplified by the cases in Figures 1 and 2. Such groups very often correspond to a syntactic phrase, so constituent membership is likely indicative in predicting the focus status of an individual word.", "Similarly, the topological field (Höhle, 1986) identifying the major section of a sentence in relation to the clausal main verb is potentially relevant for a word's focus status.", "Cheung and Penn (2009) present a parsing model that demonstrates good performance in determining both topological fields and phrase structure for German.", "The model is trained on the TüBa-D/Z treebank (Telljohann et al., 2004), whose rich syntactic model encodes topological fields as nodes in the syntax tree itself.", "Following Cheung and Penn (2009), we trained an updated version of their model using the current version of the Berkeley Parser (Petrov and Klein, 2007) and release 10 of the TüBa-D/Z (http://www.sfs.uni-tuebingen.de/en/ascl/resources/corpora/tueba-dz.html).", "Based on the new parsing model, we integrated two new features into our focus detection model: the direct parent constituent node of a word and the nearest topological field node of a word.", "7.3 Final Results: Table 2 shows the impact of the new feature groups discussed above.", "While the improvements may seem modest quantitatively, they show that the added features are well-motivated and do make an impact.", "Overall, it is especially apparent that the key to better performance is reducing the number of false positives in this data set: while the accuracy for focus stays roughly the same, the one for background improves steadily with each feature set addition.", "Complementing the intrinsic evaluation above, in this section we demonstrate how focus can be successfully used to improve performance in an authentic CL task, namely Short Answer Assessment (SAA).", "It has been pointed out that evaluating the annotation of a theoretical linguistic notion only intrinsically is problematic because there is no nontheoretical grounding involved (Riezler, 2014).", "Therefore, besides a comparison to the gold standard, we also evaluated the resulting annotation in a larger computational task, the automatic meaning assessment of short answers to reading comprehension questions.", "Here the goal is to decide, given a question (Q) and a correct target answer (TA), whether the student answer (SA) actually answers the question or not.", "An example from Meurers et al. (2011) is shown in Figure 4.",
"We used the freely available CoMiC system (Comparing Meaning in Context, Meurers et al. 2011) as a testbed for our experiment.", "CoMiC is an alignment-based system operating in three stages: 1. Annotating linguistic units (words, chunks and dependencies) in the student and target answer on various levels of abstraction; 2. Finding alignments of linguistic units between the student and target answer based on the annotation (see Figure 4); 3. Classifying the student answer based on the number and type of alignments (see Table 3), using a supervised machine learning setup.", "[Figure 4: Short Answer Assessment example]", "Table 3 (standard features in the CoMiC system): 1. Keyword Overlap: percent of dependency heads aligned (relative to target); 2./3. Token Overlap: percent of aligned target/student tokens; 4./5. Chunk Overlap: percent of aligned target/student chunks (as identified by OpenNLP); 6./7. Triple Overlap: percent of aligned target/student dependency triples; 8. Token Match: percent of token alignments that were token-identical; 9. Similarity Match: percent of token alignments resolved using PMI-IR (Turney, 2001); 10. Type Match: percent of token alignments resolved using the GermaNet hierarchy (Hamp and Feldweg, 1997); 11. Lemma Match: percent of token alignments that were lemma-resolved; 12. Synonym Match: percent of token alignments sharing the same GermaNet synset; 13. Variety of Match: number of kinds (0-5) of token-level alignments (features 8-12).", "In stage 2, CoMiC integrates a simplistic approach to givenness, excluding all words from alignment that are mentioned in the question.", "We transferred the underlying method to the notion of focus and implemented a component that excludes all non-focused words from alignment, resulting in alignments between focused parts of answers only.", "The hypothesis is that the alignment of focused elements in answers adds information about the quality of the answer with respect to the question, leading to a higher answer classification accuracy.", "We experimented with two different settings involving the standard CoMiC system and a focus-augmented variant:", "i) using standard CoMiC with the givenness filter by itself as a baseline, and", "ii) augmenting standard CoMiC by additionally producing a focus version of each classification feature in Table 3. In each case, we used WEKA's k-nearest-neighbor implementation for CoMiC, following positive results by Rudzewitz (2016).", "We use two test sets randomly selected from the CREG-5K data set (Ziai et al., 2016), one based on an 'unseen answers' and one based on an 'unseen questions' test scenario, following the methodology of Dzikovska et al. (2013): in 'unseen answers', the test set can contain answers to questions already part of the training set (but not the answers themselves), whereas in 'unseen questions' both questions and answers are new in the test set.", "In order to arrive at a fair and generalizable testing setup, we removed all answers from the CREG-5K training set that also occur in the CREG-ExpertFocus set used to train our focus detection classifier.", "This ensures that neither the focus classifier nor CoMiC have seen any of the test set answers before.", "The resulting smaller training set contains 1606 student answers, while the test sets contain 1002 (unseen answers) and 1121 (unseen questions) answers, respectively.", "Table 4 summarizes the results for the different CoMiC variants and test sets in terms of accuracy in classifying answers as correct vs. incorrect.",
incorrect .", "Standard CoMiC' refers to the standard CoMiC system and +Focus' refers to the augmented system using both feature versions.", "For reference on what is possible with Focus information, we provide the results of the oracle experiment by De Kuthy et al. (2016), even though the test setup and data setup are slightly different.", "In addition to our two test sets introduced above, we tested the systems on the training set using 10-fold cross validation.", "We also provide the majority baseline of the respective data set along with the majority class.", "One can see that in general, the focus classifier seems to introduce too much noise to positively impact classification results.", "The standard CoMiC system outperforms the focus-augmented version for the cross validation case and the unseen an-swers' set.", "This is in contrast to the experiments reported by De Kuthy et al. (2016) using manual focus information, where the augmented system clearly outperforms all other variants.", "This shows that while focus information is clearly useful in Short Answer Assessment, it needs to be reliable enough to be of actual benefit.", "Recall also that the way we use focus information in CoMiC implies a strong commitment: only focused words are aligned and included in feature extraction, which does not produce the desired result if the focus information is not accurate.", "A possible way of remedying this situation would be to use focus as an extra feature or less strict modifier of existing features.", "There is thus room for improvement both in the automatic detection of focus and its use in extrinsic tasks.", "However, one result stands out encouragingly: in the unseen questions' case, the focus-augmented version beats standard CoMiC, if only by a relatively small margin.", "This shows that even automatically determined information structural properties provide benefits when more concrete information, in the form of previously seen answers to the same questions, is not available.", "Our classifier thus successfully transfers general knowledge about focus to new question material.", "We presented the first automatic focus detection approach for written data, and the first such approach for German.", "The approach uses a rich feature set including abstractions to grammatical notions (parts of speech, dependencies), word order aspects captured by a topological field model of German, an approximation of Givenness and the relation between material in the answer and that of the question word.", "Using a word-by-word classification approach that takes into account both syntactic and semantic properties of answer and question words, we achieve an accuracy of 78.1% on a data set of 26,980 words in 10-fold cross validation.", "The focus detection pipeline developed for the experiment is freely available to other researchers.", "Complementing the intrinsic evaluation, we 124 Test set Instances Majority baseline CoMiC +Focus Oracle experiment reported by De Kuthy et al. 
"Complementing the intrinsic evaluation, we provide an extrinsic evaluation of the approach as part of a larger CL task, the automatic content assessment of answers to reading comprehension questions.", "We show that while automatic focus detection does not yet improve content assessment for answers similar to the ones previously seen, it does provide a benefit in test cases where the questions and answers are completely new, i.e., where the system needs to generalize beyond the specific cases and contexts previously seen.", "Contextualizing our work, one can see two different strands of research in the automatic analysis of focus.", "In comparison to Calhoun (2007) and follow-up approaches, who mainly concentrate on linking prosodic prominence to focus in dialogues, we do not limit our analysis to content words, but analyze every word of an utterance.", "This is made feasible by the explicit task context we have in the form of answers to reading comprehension questions.", "We believe this nicely illustrates two avenues for obtaining relevant evidence on information structure: on the one hand, there is evidence obtained bottom-up from the data, such as the rich information on prominence in the spoken language corpus used by Calhoun (2007).", "On the other hand, there is top-down evidence from the task context, which sets up expectations about what is to be addressed for the current question under discussion.", "Following the QUD research strand, the approach presented in this paper could be scaled up beyond explicit question-answer pairs: De Kuthy et al. (2018) spell out an explicit analysis of text in terms of QUDs and show that it is possible to annotate explicit QUDs with high inter-annotator agreement.", "Combined with an automated approach to question generation, it could thus be possible to recover implicit QUDs from text and subsequently apply our current approach to any text, based on an independently established, general formal pragmatic analysis.", "Finally, the qualitative analysis we exemplified is promising in terms of obtaining valuable insights to be addressed in future work.", "For example, the analysis identified faulty gaps in focus marking.", "In future work, integrating insights from theoretical linguistic approaches to focus and the notion of focus projection established there (cf., e.g., De Kuthy and Meurers 2012) could provide more guidance for ensuring contiguity of focus domains.", "We would like to thank Kordula De Kuthy and the anonymous reviewers for detailed and helpful comments on different versions of this paper.", "This work has been funded by the Deutsche Forschungsgemeinschaft through Collaborative Research Center 833." ]
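As a rough illustration of the per-word classification setup described above (each answer token becomes one instance labeled focus or background), here is a hedged sketch; the paper trains logistic regression in WEKA, so scikit-learn merely stands in, and the feature values below are invented.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# One dict of (simplified) features per answer token.
train_X = [
    {"pos": "NN", "deprel": "obja", "given": False, "rel_pos": 0.8, "qform": "warum"},
    {"pos": "ART", "deprel": "det", "given": True, "rel_pos": 0.1, "qform": "warum"},
    {"pos": "VVFIN", "deprel": "root", "given": True, "rel_pos": 0.4, "qform": "wer"},
]
train_y = ["focus", "background", "background"]

# DictVectorizer one-hot encodes the string features and passes numeric
# (and boolean) features through unchanged.
clf = make_pipeline(DictVectorizer(), LogisticRegression())
clf.fit(train_X, train_y)
print(clf.predict([{"pos": "NN", "deprel": "subj", "given": False,
                    "rel_pos": 0.9, "qform": "wer"}]))
```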
[ "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "other", "other", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "other", "objective", "result", "method", "abstain", "result", "abstain", "result", "method", "result", "abstain", "abstain", "other", "other" ]
[ "Neural image-to-text radiology report generation systems offer the potential to improve radiology reporting by reducing the repetitive process of report drafting and identifying possible medical errors.", "However, existing report generation systems, despite achieving high performances on natural language generation metrics such as CIDEr or BLEU, still suffer from incomplete and inconsistent generations.", "Here we introduce two new simple rewards to encourage the generation of factually complete and consistent radiology reports: one that encourages the system to generate radiology domain entities consistent with the reference, and one that uses natural language inference to encourage these entities to be described in inferentially consistent ways.", "We combine these with the novel use of an existing semantic equivalence metric (BERTScore).", "We further propose a report generation system that optimizes these rewards via reinforcement learning.", "On two open radiology report datasets, our system substantially improved the F 1 score of a clinical information extraction performance by +22 .", "1 ( + 63 . 9% ).", "We further show via a human evaluation and a qualitative analysis that our system leads to generations that are more factually complete and consistent compared to the baselines.", "An important new application of natural language generation (NLG) is to build assistive systems that take X-ray images of a patient and generate a textual report describing clinical observations in the images (Jing et al., 2018; Li et al., 2018; Liu et al., 2019; Boag et al., 2020; Chen et al., 2020).", "Figure 1 shows an example of a radiology report generated by such a system.", "This is a clinically important task, offering the potential to reduce radiologists' repetitive work and generally improve clinical communication (Kahn et al., 2009).", "Automatic radiology report generation systems have achieved promising performance as measured by widely used NLG metrics such as CIDEr (Vedantam et al., 2015) and BLEU (Papineni et al., 2002) on several datasets (Li et al., 2018; Jing et al., 2019; Chen et al., 2020).", "However, reports that achieve high performance on these NLG metrics are not always factually complete or consistent.", "In addition to the use of inadequate metrics, the factual incompleteness and inconsistency issue in generated reports is further exacerbated by the inadequate training of these systems.", "Specifically, the standard teacher-forcing training algorithm (Williams and Zipser, 1989) used by most existing work can lead to a discrepancy between what the model sees during training and test time (Ran-zato et al., 2016), resulting in degenerate outputs with factual hallucinations (Maynez et al., 2020).", "Liu et al. (2019) and Boag et al. 
"Liu et al. (2019) and Boag et al. (2020) have shown that reports generated by state-of-the-art systems still have poor quality when evaluated by clinical metrics as measured with an information extraction system designed for radiology reports.", "For example, the generated report in Figure 1 is incomplete since it neglects an observation of atelectasis that can be found in the images.", "It is also inconsistent since it mentions a left-sided pleural effusion which is not present in the images.", "Indeed, we show that existing systems are inadequate in factual completeness and consistency, and that an image-to-text radiology report generation system can be substantially improved by replacing widely used NLG metrics with simple alternatives.", "We propose two new simple rewards that can encourage the factual completeness and consistency of the generated reports.", "First, we propose the Exact Entity Match Reward (fact_ENT), which captures the completeness of a generated report by measuring its coverage of entities in the radiology domain, compared with a reference report.", "The goal of the reward is to better capture the disease and anatomical knowledge that is encoded in the entities.", "Second, we propose the Entailing Entity Match Reward (fact_ENTNLI), which extends fact_ENT with a natural language inference (NLI) model that further considers how inferentially consistent the generated entities are with their descriptions in the reference.", "We add NLI to control the overestimation of disease when optimizing towards fact_ENT.", "We use these two metrics along with an existing semantic equivalence metric, BERTScore (Zhang et al., 2020a), to potentially capture synonyms (e.g., left and right effusions are synonymous with bilateral effusions) and distant dependencies between diseases (e.g., a negation like '. . . but underlying consolidation or other pulmonary lesion not excluded') that are present in radiology reports.", "Although recent work in summarization, dialogue, and data-to-text generation has tried to address this problem of factual incompleteness and inconsistency by using natural language inference (NLI) (Falke et al., 2019; Welleck et al., 2019), question answering (QA) (Wang et al., 2020a), or content matching constraint (Wang et al., 2020b) approaches, they either show negative results or are not directly applicable to the generation of radiology reports due to a substantial task and domain difference.", "To construct the NLI model for fact_ENTNLI, we present a weakly supervised approach that adapts an existing NLI model to the radiology domain.", "We further present a report generation model which directly optimizes a Transformer-based architecture with these rewards using reinforcement learning (RL).", "We evaluate our proposed report generation model on two publicly available radiology report generation datasets.", "We find that optimizing the proposed rewards along with BERTScore by RL leads to generated reports that achieve substantially improved performance on the important clinical metrics (Liu et al., 2019; Boag et al., 2020; Chen et al., 2020), demonstrating the higher clinical value of our approach.", "We make all our code and the expert-labeled test set for evaluating the radiology NLI model publicly available to encourage future research (https://github.com/ysmiura/ifcc).", "To summarize, our contributions in this paper are:",
"1. We propose two simple rewards for image-to-text radiology report generation, which focus on capturing the factual completeness and consistency of generated reports, and a weak supervision-based approach for training a radiology-domain NLI model to realize the second reward.", "2. We present a new radiology report generation model that directly optimizes these new rewards with RL, showing that previous approaches that optimize traditional NLG metrics are inadequate, and that the proposed approach substantially improves performance on clinical metrics (by as much as +64.2%) on two publicly available datasets.", "Wang et al. (2018) and Jing et al. (2018) first proposed multi-task learning models that jointly generate a report and classify disease labels from a chest X-ray image.", "Their models were extended to use multiple images (Yuan et al., 2019), to adopt a hybrid retrieval-generation model (Li et al., 2018), or to consider structure information (Jing et al., 2019).", "More recent work has focused on generating reports that are clinically consistent and accurate.", "Liu et al. (2019) presented a system that generates accurate reports by fine-tuning it with their Clinically Coherent Reward.", "Boag et al. (2020) evaluated several baseline generation systems with clinical metrics and found that standard NLG metrics are ill-equipped for this task.", "Very recently, Chen et al. (2020) proposed an approach to generate radiology reports with a memory-driven Transformer.", "Our work is most related to Liu et al. (2019); their system, however, is dependent on a rule-based information extraction system specifically created for chest X-ray reports and has limited robustness and generalizability to different domains within radiology.", "By contrast, we aim to develop methods that improve the factual completeness and consistency of generated reports by harnessing more robust statistical models and are easily generalizable.", "A variety of recent work has focused on consistency and faithfulness in generation.", "Our work is inspired by Falke et al. (2019), Welleck et al. (2019), and Matsumaru et al. (2020) in using NLI to rerank or filter generations in text summarization, dialogue, and headline generation systems, respectively.", "Other attempts in this direction include evaluating consistency in generations using QA models (Durmus et al., 2020; Wang et al., 2020a; Maynez et al., 2020), with distantly supervised classifiers (Kryscinski et al., 2020), and with task-specific content matching constraints (Wang et al., 2020b).", "Liu et al. (2019) and Zhang et al. (2020b) studied improving the factual correctness in generating radiology reports with rule-based information extraction systems.", "Our work mainly differs from theirs in the direct optimization of factual completeness with an entity-based reward and of factual consistency with a statistical NLI-based reward.", "The problem of generating text from image data has been widely studied in the image captioning setting.", "While early work focused on combining convolutional neural network (CNN) and recurrent neural network (RNN) architectures (Vinyals et al., 2015), more recent work has discovered the effectiveness of using the Transformer architecture (Vaswani et al., 2017).", "Li et al. (2019) and Pan et al. (2020) introduced attention processes to incorporate semantic and visual information into this architecture.",
"Herdade et al. (2019), Cornia et al. (2020), and Guo et al. (2020) extended this architecture to learn geometrical and other relationships between input regions.", "We find the Meshed-Memory Transformer (Cornia et al., 2020) (M2 Trans) to be more effective in our radiology report generation task than traditional RNN-based models and Transformer models (an empirical result is shown in Section 4), and therefore use it as our base architecture.", "Formally, given K individual images x_1...K of a patient, our task involves generating a sequence of words to form a textual report y, which describes the clinical observations in the images.", "This task resembles image captioning, except with multiple images as input and longer text sequences as output.", "We therefore extend a state-of-the-art image captioning model, M2 Trans (Cornia et al., 2020), with multi-image input as our base architecture.", "We first briefly introduce this model and refer interested readers to Cornia et al. (2020).", "Figure 2 illustrates an overview of the M2 Trans model.", "Given an image x_k, image regions are first extracted with a CNN as X = CNN(x_k).", "X is then encoded with a memory-augmented attention process M_mem(X) as M_mem(X) = Att(W_q X, K, V) (1), Att(Q, K, V) = softmax(Q K^T / √d) V (2), K = [W_k X; M_k] (3), V = [W_v X; M_v] (4), where W_q, W_k, W_v are weights, M_k, M_v are memory matrices, d is a scaling factor, and [ ; ] is the concatenation operation.", "Att(Q, K, V) is an attention process derived from the Transformer architecture (Vaswani et al., 2017) and extended to include memory matrices that can encode a priori knowledge between image regions.", "In the encoder, this attention process is a self-attention process since all of the query Q, the key K, and the value V depend on X.", "M_mem(X) is further processed with a feed-forward layer, a residual connection, and a layer normalization to output X̃.", "This encoding process can be stacked N times and is applied to the K images, and the n-th layer output for the K images will be X̃^(n,K).", "The meshed decoder first processes an encoded text Y with a masked self-attention and further processes it with a feed-forward layer, a residual connection, and a layer normalization to output Ỹ.", "Ỹ is then passed to a cross attention C(X̃^(n,K), Ỹ) and a meshed attention M_mesh(X̃^(N,K), Ỹ) as M_mesh(X̃^(N,K), Ỹ) = Σ_n α_n ⊙ C(X̃^(n,K), Ỹ) (5), C(X̃^(n,K), Ỹ) = max_K(Att(W_q Ỹ, W_k X̃^(n,K), W_v X̃^(n,K))) (6), α_n = σ(W_n [Ỹ; C(X̃^(n,K), Ỹ)] + b_n) (7), where ⊙ is element-wise multiplication, max_K is max-pooling over the K images, σ is the sigmoid function, W_n is a weight, and b_n is a bias.", "The weighted summation in M_mesh(X̃^(N,K), Ỹ) exploits both low-level and high-level information from the N stacked encoder layers.", "Differing from the self-attention process in the encoder, the cross attention uses a query that depends on Ỹ and a key and a value that depend on X̃.", "M_mesh(X̃^(N,K), Ỹ) is further processed with a feed-forward layer, a residual connection, and a layer normalization to output Ỹ'.", "As in the encoder, the decoder can be stacked N times to output Ỹ^N.", "Ỹ^N is further passed to a feed-forward layer to output the report y.", "We designed an F-score entity match reward to capture factual completeness.",
"This reward assumes that entities encode disease and anatomical knowledge that relates to factual completeness.", "A named entity recognizer is applied to the generated report y_gen and the corresponding reference report y_ref.", "Given entities E_gen and E_ref recognized from y_gen and y_ref respectively, the precision (pr) and recall (rc) of the entity match are calculated as pr_ENT = Σ_{e ∈ E_gen} δ(e, E_ref) / |E_gen| (8) and rc_ENT = Σ_{e ∈ E_ref} δ(e, E_gen) / |E_ref| (9), where δ(e, E) = 1 for e ∈ E and 0 otherwise (10).", "The harmonic mean of precision and recall is taken as fact_ENT to reward a balanced match of entities.", "We used Stanza (Qi et al., 2020) and its clinical models (Zhang et al., 2020c) as a named entity recognizer for radiology reports.", "For example, in the case of Figure 1, the entities common to the reference report and the generated report are 'pleural' and 'effusion', resulting in fact_ENT = 33.3.", "We additionally designed an F-score style reward that expands fact_ENT with NLI to capture factual consistency.", "NLI is used to control the overestimation of disease when optimizing towards fact_ENT.", "In fact_ENTNLI, δ in Eq. 10 is expanded to δ(e, E) = 1 for e ∈ E with NLI_e(P, h) ≠ contradiction; 1 for NLI_e(P, h) = entailment; and 0 otherwise (11), with NLI_e(P, h) = nli(p*, h) where p* = argmax_{p ∈ P} sim(h, p) (12). Here h is a sentence that includes e, P is the set of all sentences in the counterpart text (if h is a sentence in a generated report, P is all sentences in the corresponding reference report), nli(·, ·) is an NLI function that returns an NLI label which is one of {entailment, neutral, contradiction}, and sim(·, ·) is a text similarity function.", "We used BERTScore (Zhang et al., 2020a) as sim(·, ·) in the experiments (details of BERTScore can be found in Appendix A).", "The harmonic mean of precision and recall is taken as fact_ENTNLI to encourage a balanced factual consistency between a generated text and the corresponding reference text.", "For example, in the case of Figure 1, the sentence 'The left-sided pleural effusion has increased in size and is now moderate in size.' will be contradictory to 'There is no left pleural effusion.', resulting in 'pleural' and 'effusion' being rejected in y_gen.", "We integrate the proposed factual rewards into self-critical sequence training (Rennie et al., 2017).", "An RL loss L_RL is minimized as the negative expectation of the reward r.", "The gradient of the loss is estimated with a single Monte Carlo sample as ∇L_RL(θ) ≈ −(r(y_sp) − r(y_gd)) ∇ log P(y_sp | x_1...K) (13), where y_sp is a sampled text and y_gd is a greedy-decoded text.", "Paulus et al. (2018) and Zhang et al. (2020b) have shown that generation can be improved by combining multiple losses.", "We combine a factual metric loss with a language model loss and an NLG loss as L = λ_1 L_NLL + λ_2 L_RL_NLG + λ_3 L_RL_FACT (14), where L_NLL is a language model loss, L_RL_NLG is the RL loss using an NLG metric (e.g., CIDEr or BERTScore), L_RL_FACT is the RL loss using a factual reward (e.g., fact_ENT or fact_ENTNLI), and λ_1, λ_2, λ_3 are scaling factors that balance the multiple losses.", "We propose a weakly-supervised approach to construct an NLI model for radiology reports.", "(There already exists an NLI system for the medical domain, MedNLI (Romanov and Shivade, 2018), but we found that a model trained on MedNLI does not work well on radiology reports.)",
Given a large-scale dataset of radiology reports, a sentence pair is sampled and filtered with weakly-supervised rules.", "The rules are prepared to extract a randomly sampled sentence pair (s1 and s2) that is in an entailment, neutral, or contradiction relation.", "We designed 6 rules for weak supervision (the full list is given in Appendix A).", "For example, the rule Neutral 4 (N4) labels a pair as neutral if (1) the named entities (NE) of s1 are equal to the NE of s2 and (2) s1 and s2 include observation keywords.", "The rules rely on a semantic similarity measure and the overlap of entities to determine the relationship between s1 and s2.", "In the neutral rules and the contradiction rule, we included similarity measures to avoid extracting easy-to-distinguish sentence pairs.", "We evaluated this NLI model by preparing training data, validation data, and test data.", "For the training data, the training set of MIMIC-CXR (Johnson et al., 2019) is used as the source of sentence pairs.", "2k pairs are extracted for each of E1 and C1, and 0.5k pairs for each of N1, N2, N3, and N4, resulting in a total of 6k pairs.", "The training set of MedNLI is also used as additional data.", "For the validation data and the test data, we sampled 480 sentence pairs from the validation section of MIMIC-CXR and had them annotated by two experts: one medical expert and one NLP expert.", "Each pair is annotated twice, swapping its premise and hypothesis, resulting in 960 pairs, which are split in half into 480 pairs for a validation set and 480 pairs for a test set.", "The test set of MedNLI is also used as alternative test data.", "We used BERT (Devlin et al., 2019) as an NLI model since it performed as a strong baseline in the existing MedNLI system (Ben Abacha et al., 2019), and used Stanza (Qi et al., 2020) and its clinical models (Zhang et al., 2020c) as a named entity recognizer.", "Table 1 shows the result of the model trained with and without the weakly-supervised data.", "The accuracy of NLI on radiology data increased substantially, by +24.5%, with the addition of the radiology NLI training set.", "(See Appendix A for the details of the rules, the datasets, and the model configuration.)
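A hedged sketch of how one such weak-supervision rule might be applied when labeling sampled sentence pairs; `toy_ner` and the observation-keyword list are hypothetical stand-ins for the Stanza NER model and the paper's actual keyword set:

```python
OBSERVATION_KEYWORDS = {"effusion", "edema", "opacity", "atelectasis"}  # illustrative

def rule_n4(s1, s2, get_entities):
    """True if (s1, s2) should be labeled `neutral` under rule N4:
    equal named-entity sets and both sentences mention an observation keyword."""
    ne1, ne2 = set(get_entities(s1)), set(get_entities(s2))
    both_have_obs = all(
        any(k in s.lower() for k in OBSERVATION_KEYWORDS) for s in (s1, s2))
    return ne1 == ne2 and len(ne1) > 0 and both_have_obs

def weak_label(s1, s2, rules):
    """Try each (rule, label) in order; drop the pair if nothing fires."""
    for rule, label in rules:
        if rule(s1, s2):
            return label
    return None  # pair is filtered out

# toy NER: keep a few known clinical words
toy_ner = lambda s: [w.strip(".,").lower() for w in s.split()
                     if w.strip(".,").lower() in {"pleural", "effusion", "edema"}]
print(weak_label("There is a small pleural effusion.",
                 "Pleural effusion is unchanged.",
                 [(lambda a, b: rule_n4(a, b, toy_ner), "neutral")]))  # neutral
```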
4 Experiments 4.1 Data We used the training and validation sets of MIMIC-CXR (Johnson et al., 2019) to train and validate models.", "MIMIC-CXR is a large publicly available database of chest radiographs.", "We extracted the findings sections from the reports with a text extraction tool for MIMIC-CXR (https://github.com/MIT-LCP/mimic-cxr/tree/master/txt), and used them as our reference reports, as in previous work (Liu et al., 2019; Boag et al., 2020).", "A findings section is a natural-language description of the important aspects of a radiology image.", "The reports with empty findings sections were discarded, resulting in 152173 and 1196 reports for the training and validation sets, respectively.", "We used the test set of MIMIC-CXR and the entire Open-i Chest X-ray dataset (Demner-Fushman et al., 2012) as two individual test sets.", "Open-i is another publicly available database of chest radiographs which has been widely used in past studies.", "We again extracted the findings sections, resulting in 2347 reports for MIMIC-CXR and 3335 reports for Open-i.", "Open-i is used only for testing since the number of reports is too small to train and test a neural report generation model.", "BLEU4, CIDEr-D & BERTScore: We first use general NLG metrics to evaluate the generation quality.", "These metrics include the 4-gram BLEU score (Papineni et al., 2002, BLEU4), the CIDEr score (Vedantam et al., 2015) with gaming penalties (CIDEr-D), and the F1 score of BERTScore (Zhang et al., 2020a).", "Clinical Metrics: However, NLG metrics such as BLEU and CIDEr are known to be inadequate for evaluating factual completeness and consistency.", "We therefore followed previous work (Liu et al., 2019; Boag et al., 2020; Chen et al., 2020) by additionally evaluating the clinical accuracy of the generated reports using a clinical information extraction system.", "We use CheXbert (Smit et al., 2020), an information extraction system for chest reports, to extract the presence status of a series of observations (i.e., whether a disease is present or not), and score a generation by comparing the values of these observations to those obtained from the reference.", "The micro averages of accuracy, precision, recall, and F1 scores are calculated over 5 observations (following previous work (Irvin et al., 2019)): atelectasis, cardiomegaly, consolidation, edema, and pleural effusion.", "fact_ENT & fact_ENTNLI: We additionally include our proposed rewards fact_ENT and fact_ENTNLI as metrics to compare their values for different models.", "We used M2 Trans as our report generation model and used DenseNet-121 (Huang et al., 2017) as our image encoder.", "We trained M2 Trans with the following variety of joint losses.", "NLL+CDr: the CIDEr-D and NLL losses are jointly optimized with λ1 = 0.01 and λ2 = 0.99 as the scaling factors.", "NLL+BS: the F1 score of BERTScore and the NLL loss are jointly optimized with λ1 = 0.01 and λ2 = 0.99.", "NLL+BS+fc_E: fact_ENT is added to NLL+BS with λ1 = 0.01, λ2 = 0.495, and λ3 = 0.495.", "NLL+BS+fc_EN: fact_ENTNLI is added to NLL+BS with λ1 = 0.01, λ2 = 0.495, and λ3 = 0.495.",
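A minimal PyTorch sketch of how the joint objective of Eqs. (13)-(14) combines the losses, using the NLL+BS+fc_E weighting above; the log-probability and reward values are toy scalars, and in a real training loop the log-probability would come from the model so that gradients flow into its parameters:

```python
import torch

def self_critical_loss(logp_sampled, r_sampled, r_greedy):
    """Eq. (13): REINFORCE with the greedy reward as baseline.
    logp_sampled: summed log-probability of the sampled report."""
    return -(r_sampled - r_greedy) * logp_sampled

def joint_loss(l_nll, l_rl_nlg, l_rl_fact, lambdas=(0.01, 0.495, 0.495)):
    """Eq. (14): weighted combination of NLL, NLG-reward, and factual-reward losses."""
    l1, l2, l3 = lambdas
    return l1 * l_nll + l2 * l_rl_nlg + l3 * l_rl_fact

logp = torch.tensor(-42.0, requires_grad=True)          # toy sampled log-prob
loss = joint_loss(torch.tensor(3.1),                    # toy NLL term
                  self_critical_loss(logp, r_sampled=0.71, r_greedy=0.65),
                  self_critical_loss(logp, r_sampled=0.33, r_greedy=0.40))
loss.backward()
print(float(loss), float(logp.grad))
```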
"We additionally prepared three previous models that have been tested on MIMIC-CXR.", "TieNet: We reimplemented the model of Wang et al. (2018), consisting of a CNN encoder and an RNN decoder optimized with a multitask setting of language generation and image classification.", "CNN-RNN2: We reimplemented the model of Liu et al. (2019), consisting of a CNN encoder and a hierarchical RNN decoder optimized with CIDEr and a Clinically Coherent Reward.", "(We used CheXbert instead of CheXpert (Irvin et al., 2019) since CheXbert was evaluated to be approximately 5.5% more accurate than CheXpert; the evaluation using CheXpert can be found in Appendix C.)", "(These 5 observations are evaluated to be most represented in real-world radiology reports, and therefore using these 5 observations (and excluding others) leads to less variance and more statistical strength in the results; we include the detailed results of the clinical metrics in Appendix C for completeness.)", "R2Gen: The model of Chen et al. (2020), with a CNN encoder and a memory-driven Transformer optimized with the NLL loss.", "We used the publicly available official code and its checkpoint as its implementation.", "Table 2 shows the results of the baselines and M2 Trans optimized with the five different joint losses.", "(These MIMIC-CXR scores have some gaps from previously reported values, with some possible reasons: first, TieNet and CNN-RNN2 in Liu et al. (2019) are evaluated on a pre-release version of MIMIC-CXR; second, we used report-level evaluation for all models, but Chen et al. (2020) tested R2Gen using image-level evaluation.)", "We find that the best result for a metric or a reward is achieved when that metric or reward is used directly in the optimization objective.", "Notably, for the proposed factual rewards, increases of +3.6 fact_ENT and +4.9 fact_ENTNLI are observed on MIMIC-CXR with M2 Trans when compared against M2 Trans w/ BS.", "For the clinical metrics, the best recalls and F1 scores are obtained with M2 Trans using fact_ENT as a reward, achieving a substantial +22.1 increase (+63.9%) in F1 score against the best baseline R2Gen.", "We further find that using fact_ENTNLI as a reward leads to higher precision and accuracy compared to fact_ENT, with decreases in the recalls.", "The best precisions and accuracies were obtained with the baseline CNN-RNN2.", "This is not surprising since this model directly optimizes the clinical metrics with its Clinically Coherent Reward.", "However, this model is strongly optimized towards precision, resulting in the low recalls and F1 scores.", "The results of M2 Trans without the proposed rewards and BERTScore reveal the strength of M2 Trans and the inadequacy of the NLL loss and CIDEr for factual completeness and consistency.", "M2 Trans w/ NLL shows strong improvements in the clinical metrics against R2Gen.", "These improvements are a little surprising since both models are Transformer-based models and are optimized with the NLL loss.", "We assume that these improvements are due to architecture differences such as the memory matrices in the encoder of M2 Trans.", "The difference between NLL and NLL+CDr on M2 Trans indicates that NLL and CIDEr are unreliable for factual completeness and consistency.", "We performed a human evaluation to further confirm whether the generated radiology reports are factually complete and consistent.", "Following prior studies of radiology report summarization (Zhang et al., 2020b) and image captioning evaluation (Vedantam et al., 2015), we designed a simple human evaluation task.", "Given a reference report (R) and two candidate model-generated reports (C1, C2), two board-certified radiologists decided whether C1 or C2 is more factually similar to R.
To consider cases when C1 and C2 are difficult to differentiate, we also prepared No difference as an answer.", "We sampled 100 reports randomly from the test set of MIMIC-CXR for this evaluation.", "Since this evaluation is (financially) expensive and there has been no human evaluation between the baseline models, we selected R2Gen as the best previous model and M2 Trans w/ BS as the simplest proposed model, in order to be able to weakly infer that all of our proposed models are better than all of the baselines.", "Table 3 shows the result of the evaluation.", "The majority of the reports were labeled No difference, but the proposed approach received three times as much preference as the baseline.", "There are two main reasons why No difference was frequent in the human evaluation.", "First, we found that a substantial portion of the examples were normal studies (no abnormal observations), which leads to generated reports of similar quality from both models.", "Second, in some reports with multiple abnormal observations, both models made mistakes on a subset of these observations, making it difficult to decide which model output was better.", "The integrations of fact_ENT and fact_ENTNLI showed improvements in the clinical metrics.", "We further examined whether these rewards can be used to estimate the performance of the clinical metrics, to see whether the proposed rewards can be used in an evaluation where a strong clinical information extraction system like CheXbert is not available.", "Table 4 shows Spearman correlations calculated on the generated reports of NLL+BS.", "fact_ENTNLI shows the strongest correlation with the clinical accuracy, which aligns with the optimization result where the best accuracy is obtained with NLL+BS+fact_ENTNLI.", "This correlation value is slightly lower than the Spearman correlation which Maynez et al. (2020) observed with NLI for the factual data (0.264).",
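Correlations like those in Table 4 can be computed directly with SciPy; the per-report scores below are hypothetical placeholders, not values from the paper:

```python
from scipy.stats import spearmanr

# hypothetical per-report scores: each position is one generated report
fact_entnli_scores = [0.41, 0.63, 0.22, 0.80, 0.55]
clinical_accuracy  = [0.50, 0.75, 0.25, 1.00, 0.50]  # e.g. from CheXbert labels
rho, p = spearmanr(fact_entnli_scores, clinical_accuracy)
print(f"Spearman rho={rho:.3f} (p={p:.3f})")
```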
"The result suggests the effectiveness of using the factual rewards to estimate the factual completeness and consistency of radiology reports, although the correlations are still limited, with some room for improvement.", "The evaluation with the clinical findings metrics showed improved generation performance from integrating BERTScore, fact_ENT, and fact_ENTNLI.", "As a qualitative analysis, we examined some of the generated reports to see the improvements.", "(Figure 3 shows example images along with the reference report and the reports generated by R2Gen and M2 Trans w/ NLL+BS.)", "Example 1 in Figure 3 shows the improved factual completeness and consistency with BERTScore.", "The atelectasis is correctly generated and the left pleural effusion is correctly suppressed with NLL+BS.", "Example 2 in Figure 4 shows the improved factual completeness with fact_ENTNLI.", "The edema is correctly generated and the atelectasis is correctly suppressed with NLL+BS+fc_EN.", "These examples reveal the strength of integrating the three metrics to generate factually complete and consistent reports.", "Despite observing large improvements with our model in the clinical findings metrics evaluation, the model is still not complete and some typical factual errors can be found in its generated reports.", "For example, Example 3 in Figure 4 includes a comparison of an observation against a previous study, as . . . appear more prominent since . . . in the reference, but our model (or any previous model) cannot capture this kind of comparison since the model is not designed to take into account the past reports of a patient as input.", "Additionally, in this example, edema is mentioned with uncertainty as cannot be excluded in the reference, but the generated report with fact_ENTNLI simply indicates it as There is mild pulmonary edema.", "We proposed two new simple rewards and combined them with a semantic equivalence metric to improve image-to-text radiology report generation systems.", "The two new rewards make use of radiology domain entities extracted with a named entity recognizer and a weakly-supervised NLI to capture the factual completeness and consistency of the generated reports.", "We further presented a Transformer-based report generation system that directly optimizes these rewards with self-critical reinforcement learning.", "On two open datasets, we showed that our system generates reports that are more factually complete and consistent than the baselines, and leads to reports with substantially higher scores in clinical metrics.", "The integration of entities and NLI to improve the factual completeness and consistency of generation is not restricted to the domain of radiology reports, and we predict that a similar approach might improve other data-to-text tasks.", "We would like to thank the anonymous reviewers and the members of the Stanford NLP Group for their very helpful comments that substantially improved this paper." ]
[ "abstain", "abstain", "objective", "objective", "objective", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "abstain", "objective", "method", "method", "other", "method", "method", "objective", "abstain", "method", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "method", "objective", "other", "method", "other", "other", "objective", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "method", "result", "result", "other" ]
[ "Word alignment was once a core unsupervised learning task in natural language processing because of its essential role in training statistical machine translation (MT) models.", "Although unnecessary for training neural MT models, word alignment still plays an important role in interactive applications of neural machine translation, such as annotation transfer and lexicon injection.", "While statistical MT methods have been replaced by neural approaches with superior performance, the twenty-year-old GIZA++ toolkit remains a key component of state-of-the-art word alignment systems.", "Prior work on neural word alignment has only been able to outperform GIZA++ by using its output during training.", "We present the first end-to-end neural word alignment method that consistently outperforms GIZA++ on three data sets.", "Our approach repurposes a Transformer model trained for supervised translation to also serve as an unsupervised word alignment model in a manner that is tightly integrated and does not affect translation quality.", "Although word alignments are no longer necessary to train machine translation (MT) systems, they still play an important role in applications of neural MT. For example, they enable injection of an external lexicon into the inference process to enforce the use of domain-specific terminology or improve the translations of low-frequency content words (Arthur et al., 2016).", "The most important application today for word alignments is to transfer text annotations from source to target (Muller, 2017; Tezcan and Vandeghinste, 2011; Joanis et al., 2013; Escartn and Arcedillo, 2015).", "For example, if part of a source sentence is underlined, the corresponding part of its translation should be underlined as well.", "HTML tags and other markup must be transferred for published documents.", "Although annotations could in principle be generated directly as part of the output sequence, they are instead typically transferred via word alignments because example annotations typically do not exist in MT training data.", "The Transformer architecture provides state-of-the-art performance for neural machine translation (Vaswani et al., 2017).", "The decoder has multiple layers, each with several attention heads, which makes it difficult to interpret attention activations as word alignments.", "As a result, the most widely used tools to infer word alignments, namely GIZA++ (Och and Ney, 2003) and FastAlign (Dyer et al., 2013), are still based on the statistical IBM word alignment models developed nearly thirty years ago (Brown et al., 1993).", "No previous unsupervised neural approach has matched their performance.", "Recent work on alignment components that are integrated into neural translation models either un-derperform the IBM models or must use the output of IBM models during training to outperform them (Zenkel et al., 2019; Garg et al., 2019).", "This work combines key components from Zenkel et al. (2019) and Garg et al. 
"This work combines key components from Zenkel et al. (2019) and Garg et al. (2019) and presents two novel extensions.", "Statistical alignment methods contain an explicit bias towards contiguous word alignments in which adjacent source words are aligned to adjacent target words.", "This bias is expressed in statistical systems using a hidden Markov model (HMM) (Vogel et al., 1996), as well as symmetrization heuristics such as the grow-diag-final algorithm (Och and Ney, 2000b; Koehn et al., 2005).", "We design an auxiliary loss function that can be added to any attention-based network to encourage contiguous attention matrices.", "The second extension replaces heuristic symmetrization of word alignments with an activation optimization technique.", "(Figure 1: Word alignment generated by a human annotator.)", "After training two alignment models that translate in opposite directions, we infer a symmetrized attention matrix that jointly optimizes the likelihood of the correct output words under both models in both languages.", "Ablation experiments highlight the effectiveness of this novel extension, which is reminiscent of agreement-based methods for statistical models (Liang et al., 2006; Graça et al., 2008; DeNero and Macherey, 2011).", "End-to-end experiments show that our system is the first to consistently yield higher alignment quality than GIZA++ using a fully unsupervised neural model that does not use the output of a statistical alignment model in any way.", "Statistical alignment models directly build on the lexical translation models of Brown et al. (1993), known as the IBM models.", "The most popular statistical alignment tool is GIZA++ (Och and Ney, 2000b, 2003; Gao and Vogel, 2008).", "For optimal performance, the training pipeline of GIZA++ relies on multiple iterations of IBM Model 1, Model 3, Model 4 and the HMM alignment model (Vogel et al., 1996).", "Initialized with parameters from previous models, each subsequent model adds more assumptions about word alignments.", "Model 2 introduces non-uniform distortion, and Model 3 introduces fertility.", "Model 4 and the HMM alignment model introduce relative distortion, where the likelihood of the position of each alignment link is conditioned on the position of the previous alignment link.", "While simpler and faster tools exist, such as FastAlign (Dyer et al., 2013), which is based on a reparametrization of IBM Model 2, the GIZA++ implementation of Model 4 is still used today in applications where alignment quality is important.", "In contrast to GIZA++, our neural approach is easy to integrate on top of an attention-based translation network, has a training pipeline with fewer steps, and leads to superior alignment quality.", "Moreover, our fully neural approach, which shares most parameters with a neural translation model, can potentially take advantage of improvements to the underlying translation model, for example from domain adaptation via fine-tuning.", "Most neural alignment approaches in the literature, such as Tamura et al. (2014) and Alkhouli et al. (2018), rely on alignments generated by statistical systems that are used as supervision for training the neural systems.", "These approaches tend to learn to copy the alignment errors from the supervising statistical models.", "Zenkel et al. (2019) use attention to extract alignments from a dedicated alignment layer of a neural model without using any output from a statistical aligner, but fail to match the quality of GIZA++.",
"Garg et al. (2019) represents the current state of the art in word alignment, outperforming GIZA++ by training a single model that is able to both translate and align.", "This model is supervised with a guided alignment loss, and existing word alignments must be provided to the model during training.", "Garg et al. (2019) can produce alignments using an end-to-end neural training pipeline guided by attention activations, but this approach underperforms GIZA++.", "The performance of GIZA++ is only surpassed by training the guided alignment loss using GIZA++ output.", "Our method also uses guided alignment training, but our work is the first to surpass the alignment quality of GIZA++ without relying on GIZA++ output for supervision.", "Stengel-Eskin et al. (2019) introduce a discriminative neural alignment model that uses a dot-product-based distance measure between learned source and target representations to predict if a given source-target pair should be aligned.", "Alignment decisions condition on the neighboring decisions using convolution.", "The model is trained using gold alignments.", "In contrast, our approach is fully unsupervised; it does not require gold alignments generated by human annotators during training.", "Instead, our system implicitly learns reasonable alignments by predicting future target words as part of the translation task, but selects attention activations using an auxiliary loss function to find contiguous alignment links that explain the data.", "Given a source-language sentence x = x_1, ..., x_n of length n and its target-language translation y = y_1, ..., y_m of length m, an alignment A is a set of pairs of source and target positions: A ⊆ {(s, t) : s ∈ {1, ..., n}, t ∈ {1, ..., m}}.", "Aligned words are assumed to correspond to each other, i.e. the source and the target word are translations of each other within the context of the sentence.", "Gold alignments are commonly generated by multiple annotators based on the Blinker guidelines (Melamed, 1998).", "The most commonly used metric to compare automatically generated alignments to gold alignments is the alignment error rate (AER) (Och and Ney, 2000b).",
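The text uses AER without spelling out the formula; a sketch following the standard definition from Och and Ney (2000), where gold annotations are split into sure (S) and possible (P) links with S ⊆ P:

```python
def aer(hypothesis, sure, possible):
    """Alignment error rate over (source, target) link sets."""
    A, S = set(hypothesis), set(sure)
    P = set(possible) | S          # sure links are also possible
    return 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))

# toy example with 3 hypothesis links, 2 sure links, 1 extra possible link
print(aer({(0, 0), (1, 2), (2, 1)}, sure={(0, 0), (2, 1)},
          possible={(1, 1)}))  # 0.2
```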
"Bahdanau et al. (2015) introduced attention-based neural networks for machine translation.", "These models typically consist of an encoder for the source sentence and a decoder that has access to the previously generated target tokens and generates the target sequence from left to right.", "Before predicting a token, the decoder attends to the position-wise source representations generated by the encoder, and it produces a context vector that is a weighted sum of the contextualized source embeddings.", "The Transformer (Vaswani et al., 2017) attention mechanism uses a query Q and a set of k key-value pairs K, V with Q ∈ R^d and V, K ∈ R^{k×d}.", "Attention logits A_L computed by a scaled dot product are converted into a probability distribution A using the softmax function.", "The attention A serves as mixture weights for the values V to form a context vector c: A_L = calcAttLogits(Q, K) = QK^T / sqrt(d), A = calcAtt(Q, K) = softmax(A_L), c = applyAtt(A, V) = A V.", "A state-of-the-art Transformer includes multiple attention heads whose context vectors are stacked to form the context activation for a layer, and the encoder and decoder have multiple layers.", "For all experiments, we use a downscaled Transformer model trained for translation with a 6-layer encoder, a 3-layer decoder, and 256-dimensional hidden states and embedding vectors.", "For the purpose of word alignment, this translation Transformer is used as-is to extract representations of the source and the target sequences, and our alignment technique does not change the parameters of the Transformer.", "Therefore, improvements to the translation system can be expected to directly carry over to alignment quality, and the alignment component does not affect translation output in any way.",
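The three attention functions above map directly to code; a minimal NumPy sketch with illustrative shapes:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def calc_att_logits(Q, K):
    d = Q.shape[-1]
    return Q @ K.T / np.sqrt(d)          # A_L = QK^T / sqrt(d)

def calc_att(Q, K):
    return softmax(calc_att_logits(Q, K))  # A = softmax(A_L)

def apply_att(A, V):
    return A @ V                          # c = A V

rng = np.random.default_rng(1)
Q = rng.normal(size=(5, 64))              # 5 queries
K = rng.normal(size=(8, 64))              # 8 key-value pairs
V = rng.normal(size=(8, 64))
print(apply_att(calc_att(Q, K), V).shape)  # (5, 64) context vectors
```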
"To improve the alignment quality achieved by interpreting attention activations, Zenkel et al. (2019) designed an additional alignment layer on top of the Transformer architecture.", "In the alignment layer, the context vector is computed as applyAtt(A, V), just as in other decoder layers, but this context vector is the only input to predicting the target word via a linear layer and a softmax that gives a probability distribution over the target vocabulary.", "This design forces attention onto the source positions that are most useful in predicting the target word.", "Figure 2 depicts its architecture.", "This alignment layer uses the learned representations of the underlying translation model.", "Alignments can be extracted from the activations of this model by running a forward pass to obtain the attention weights A from the alignment layer and subsequently selecting the maximum-probability source position for each target position as an alignment link: {(argmax_i(A_{i,j}), j) : j ∈ [1, m]}.", "The alignment layer predicts the next target token y_i based on the source representations x extracted from the encoder of the Transformer and all past target representations y_{<i} extracted from the decoder.", "Thus the probability is conditioned as p(y_i | x, y_{<i}).", "The encoder representation used as key and value for the attention component is the sum of the input embeddings and the encoder output.", "This ensures that lexical and context information are both salient in the input to the attention component.", "Extracting alignments with attention-based models works well when used in combination with greedy translation inference (Li et al., 2019).", "However, the alignment task involves predicting an alignment between a sentence and an observed translation, which requires forced decoding.", "When a token in the target sentence is unexpected given the preceding target prefix, attention activations computed during forced decoding are not reliable because they do not explicitly condition on the target word being aligned.", "Zenkel et al. (2019) introduce a method called attention optimization, which searches for attention activations that maximize the probability of the output sequence by directly optimizing the attention activations A in the alignment layer using gradient descent for the given sentence pair (x, y), maximizing the probability of each observed target token y_i while keeping all other parameters of the neural network M fixed: argmax_A p(y_i | y_{<i}, x, A; M).", "Attention optimization yields superior alignments during forced decoding when gradient descent is initialized with the activations from a forward pass through the alignment layer.",
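A sketch of attention optimization as gradient descent over the alignment-layer activations; `nll_given_attention` is a hypothetical stand-in for the fixed network's forced-decoding loss, and the step count and learning rate are illustrative:

```python
import torch

def attention_optimization(att_logits_init, nll_given_attention, steps=3, lr=1.0):
    """Start from the forward-pass logits and minimize the NLL of the
    observed target tokens while all model parameters stay fixed."""
    A_L = att_logits_init.clone().requires_grad_(True)
    opt = torch.optim.SGD([A_L], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nll_given_attention(torch.softmax(A_L, dim=-1))
        loss.backward()
        opt.step()
    return torch.softmax(A_L, dim=-1).detach()

# toy stand-in: pretend every target token prefers source position 2
toy_nll = lambda A: -torch.log(A[:, 2] + 1e-9).sum()
A = attention_optimization(torch.zeros(4, 6), toy_nll)
print(A.argmax(dim=-1))  # attention mass concentrates on source position 2
```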
"The models described so far are based on autoregressive translation models, so they are limited to only attend to the left context of the target sequence.", "However, for the word alignment task the current and future target context is also available and should be considered at inference time.", "Garg et al. (2019) train a single model to both predict the target sentence and the alignments using guided alignment training.", "When the model is trained to predict alignments, the full target context can be used to obtain improved alignment quality.", "The alignment loss requires supervision by a set of alignment links for each sentence pair in the training data.", "These alignments can be generated by the current model or can be provided by an external alignment system or human annotators.", "Assuming one alignment link per target token, we denote the alignment source position for the target token at position t as a_t (for the purpose of the guided alignment loss, we assume target tokens that do not have an alignment link to be aligned to the end-of-sentence (EOS) token of the source sequence).", "The guided alignment loss L_a, given attention probabilities A_{a_t,t} for each source position a_t and target position t for a target sequence of length m, is defined as: L_a(A) = -(1/m) Σ_{t=1}^{m} log(A_{a_t,t}).", "As depicted in Figure 3, we insert an additional self-attention component into the original alignment layer, and leave the encoder and decoder of the Transformer unchanged.", "In contrast to Garg et al. (2019), this design does not require updating any translation model parameters; we only optimize the alignment layer parameters with the guided alignment loss.", "Adding an alignment layer for guided alignment training has a small parameter overhead, as it only adds a single decoder layer, resulting in an increase in parameters of less than 5%.", "Unlike the standard decoder-side self-attention layers in the Transformer architecture, the current and future target context are not masked in the alignment layer self-attention component, in order to provide the full target sentence as context.", "Alignment layer parameters are trained using the guided alignment loss.", "Contiguous alignment connections are very common in word alignments, especially for pairs of Indo-European languages.", "That is, if a target word at position t is aligned to a source word at position s, the next target word at position t + 1 is often aligned to s - 1, s or s + 1 (Vogel et al., 1996).", "We therefore add an auxiliary contiguity loss that encourages alignments with contiguous clusters of links.", "The attention activations form a 2-dimensional matrix A ∈ R^{n×m}, where n is the number of source tokens and m the number of target tokens: each entry represents a probability that specifies how much attention weight the network puts on each source word to predict the next target word.", "By using a convolution with a static kernel K over these attention scores, we can measure how much attention is focused on each rectangle within the two-dimensional attention matrix: Ā = conv(A, K), L_C = -Σ_{t=1}^{m} log(max_{s ∈ {1,...,n}}(Ā_{s,t})).", "We use a 2×2 kernel K ∈ R^{2×2} with each element set to 0.5.", "Therefore, Ā will contain the normalized attention mass of each 2×2 square of the attention matrix A.", "The resulting values after the convolution will be in the interval [0.0, 1.0].",
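A minimal PyTorch sketch of the contiguity loss; for brevity it omits the EOS masking described below and the padding that would keep the convolved matrix the same size as A:

```python
import torch
import torch.nn.functional as F

def contiguity_loss(A):
    """A: (n_src, n_tgt) attention probabilities (softmax over source).
    Convolve with a 2x2 kernel of 0.5s and reward, per target position,
    the 2x2 square holding the most attention mass."""
    kernel = torch.full((1, 1, 2, 2), 0.5)
    A_bar = F.conv2d(A[None, None], kernel)[0, 0]     # values in [0, 1]
    return -torch.log(A_bar.max(dim=0).values + 1e-9).sum()

# a perfectly diagonal alignment yields a much lower loss than a flat one
A_diag = torch.eye(5)
A_flat = torch.full((5, 5), 0.2)
print(contiguity_loss(A_diag).item(), contiguity_loss(A_flat).item())
```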
"For each target word we select the square with the highest attention mass, encouraging a sparse distribution over source positions in Ā and thus effectively training the model towards strong attention values on neighboring positions.", "We mask the contiguity loss such that the end-of-sentence symbol is not considered during this procedure.", "We apply a position-wise dropout of 0.1 on the attention logits before using the softmax function to obtain A, which turned out to be important to avoid getting stuck in trivial solutions during training (a trivial solution the network converged to when adding the contiguity loss without dropout was to align each target token to the same source token).", "Optimizing the alignment loss especially encourages diagonal and horizontal patterns, as visualized in Figure 4 (vertical patterns are not encouraged, as it is not possible to have an attention probability above 0.5 for two source words and the same target word, because we use the softmax function over the source dimension).", "These correspond well to a large portion of patterns appearing in human alignment annotations, as shown in Figure 1.", "5 Bidirectional Attention Optimization A common way to extract word alignments is to train two models, one for the forward direction (source to target) and one for the backward direction (target to source).", "For each model, one can extract separate word alignments and symmetrize these using heuristics like grow-diagonal (Och and Ney, 2000b; Koehn et al., 2005).", "However, this approach uses the hard word alignments of both directions as input, and does not consider any other information from the forward and backward models.", "For attention-based neural networks it is possible to adapt attention optimization as described in Section 3.4 to consider two models at the same time.", "The goal of attention optimization is to find attention activations that lead to the correct prediction of the target sequence for a single neural network.", "We extend this procedure to optimize the likelihood of the sentence pair jointly under both the forward and the backward model, with an additional bias to favor contiguous alignments.", "Figure 5 depicts this procedure.", "Since attention optimization uses gradient descent to find good attention activations, it is important to start with a reasonable initialization.", "We extract the attention logits (attention before applying the softmax) from the forward model (A_L)_F and the backward model (A_L)_B and average these to get a starting point for gradient descent: (A_L)_init = (1/2)((A_L)_F + (A_L)_B^T).", "Our goal is to find attention logits A_L that lead to the correct prediction for both the forward model M_F and the backward model M_B, while also representing contiguous alignments.", "We will use the cross-entropy loss CE for a whole target sequence y of length m to define the loss, given probabilities for each target token p(y_t | A_t; M) under model parameters M and a given attention activation vector A_t: CE(p(y | A; M)) = -Σ_{t=1}^{m} log(p(y_t | A_t; M)).", "Let x, y be the source and target sequence, so that we can define a loss function for each component, with the interpolation parameter λ for the contiguity loss L_C, as follows: L_F = CE(p(y | softmax(A_L); M_F)), L_B = CE(p(x | softmax(A_L^T); M_B)), L = L_F + L_B + λ L_C.", "We apply gradient descent to optimize all losses simultaneously, thus approximating a solution of argmin_{A_L} L(x, y | A_L, M_F, M_B).",
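A sketch of the joint bidirectional optimization over shared logits; which dimension each direction's softmax normalizes over is an assumption (here: over source positions), the loss callbacks are hypothetical stand-ins for the two models' cross-entropy terms, and the toy demo disables the contiguity term:

```python
import torch

def bidir_attention_optimization(AL_f, AL_b, nll_f, nll_b, contiguity,
                                 lam=5.0, steps=10, lr=1.0):
    """Minimize L = L_F + L_B + lam * L_C over shared attention logits,
    initialized as (A_L)_init = (AL_F + AL_B^T) / 2."""
    A_L = (0.5 * (AL_f + AL_b.T)).detach().requires_grad_(True)
    opt = torch.optim.SGD([A_L], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        A = torch.softmax(A_L, dim=0)            # normalize over source tokens
        loss = nll_f(A) + nll_b(A.T) + lam * contiguity(A)
        loss.backward()
        opt.step()
    return A_L.detach()

# toy demo: both directions prefer the diagonal alignment
nll = lambda A: -torch.log(torch.diagonal(A) + 1e-9).sum()
A_L = bidir_attention_optimization(torch.zeros(3, 3), torch.zeros(3, 3),
                                   nll, nll, contiguity=lambda A: 0.0 * A.sum())
print(torch.softmax(A_L, dim=0))  # diagonal entries dominate
```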
"After optimizing the attention logits, we still have to decide which alignment links to extract, i.e. how to convert the soft attentions into hard alignments.", "For neural models using a single direction, a common method is to extract the alignment with the highest attention score for each target token.", "For our bidirectional method we use the following approach: we merge the attention probabilities extracted from both directions using element-wise multiplication, where ∘ denotes the Hadamard product: A_F = softmax(A_L), A_B = softmax(A_L^T)^T, A_M = A_F ∘ A_B.", "This favors alignments that effectively predict observed words in both the source and target sentences.", "Given the number of source tokens n and target tokens m in the sentence, we select the min(n, m) alignments that have the highest values in the merged attention scores A_M.", "In contrast to selecting one alignment per target token, this allows unaligned tokens, one-to-many, many-to-one and many-to-many alignment patterns.",
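A sketch of the symmetrized link extraction; the softmax axes are assumptions following the convention above that attention normalizes over source positions:

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def extract_links(A_L):
    """A_M = A_F o A_B (Hadamard product), then keep the min(n, m)
    highest-scoring (source, target) pairs."""
    n, m = A_L.shape
    A_f = softmax(A_L, axis=0)        # forward: distribution over source
    A_b = softmax(A_L.T, axis=0).T    # backward direction, mapped back
    A_m = A_f * A_b
    k = min(n, m)
    flat = np.argsort(A_m, axis=None)[::-1][:k]
    rows, cols = np.unravel_index(flat, A_m.shape)
    return sorted(zip(rows.tolist(), cols.tolist()))

rng = np.random.default_rng(2)
print(extract_links(rng.normal(size=(3, 4))))  # list of (source, target) links
```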
"We use the same experimental setup (https://github.com/lilt/alignment-scripts) as described by Zenkel et al. (2019) and used by Garg et al. (2019).", "It contains three language pairs: German-English, Romanian-English and English-French (Och and Ney, 2000a; Mihalcea and Pedersen, 2003).", "We learn a joint byte pair encoding (BPE) for the source and the target language with 40k merge operations (Sennrich et al., 2016).", "To convert from alignments between word pieces to alignments between words, we align a source word to a target word if an alignment link exists between any of its word pieces.", "Using BPE units instead of words also improved results for GIZA++ (e.g., 20.9% vs. 18.9% for German-English in a single direction).", "Therefore, we use the exact same input data for GIZA++ and all our neural approaches.", "For training GIZA++ we use five iterations each for Model 1, the HMM model, Model 3 and Model 4.", "Most of the language pairs do not contain an adequately sized development set for word alignment experiments.", "Therefore, rather than early stopping, we used a fixed number of updates for each training stage across all language pairs: 90k for training the translation model, 10k for the alignment layer and 10k for guided alignment training (batch size: 36k words).", "Training longer did not improve or degrade test-set AER on German-English; the AER only fluctuated by less than 1% when training the alignment layer for up to 20k updates while evaluating it every 2k updates.", "We also trained a base Transformer with an alignment layer for German-English, but achieved similar results in terms of AER, so we used the smaller model described in Subsection 3.2 for the other language pairs.", "We adopted most hyperparameters from Zenkel et al. (2019); see the Supplemental Material for a summary.", "We tuned the interpolation factor λ for the contiguity loss on German-English.", "Results of ablation experiments for the contiguity loss can be found in Table 1.", "Our first experiment uses the contiguity loss during training, and we extract the alignments from the forward pass using a single direction without applying attention optimization.", "We observe an absolute improvement of 6.4% AER (34.2% to 27.8%) after adding the contiguity loss during training.", "Afterwards, we use the model trained with the contiguity loss and use attention optimization to extract alignments.", "Adding the contiguity loss during attention optimization further improves the AER scores by 1.2%.", "Both during training and attention optimization we used an interpolation coefficient of λ = 1.0 for the contiguity loss.", "By visualizing the attention activations in Figure 7 we see that the contiguity loss leads to sparse activations.", "Additionally, by favoring contiguous alignments it correctly disambiguates the alignment between the words we and wir, which appear twice in the sentence pair.", "In the remaining experiments we use the contiguity loss for both training and attention optimization.", "While we used a kernel of size 2x2 in our experiments, we also looked at different sizes.", "Using a 1x1 kernel during attention optimization leads to an AER of 22.8%, while a 3x3 kernel achieves the best result with an AER of 21.2%, compared to 21.5% for the 2x2 kernel.", "Larger kernel sizes lead to slightly worse results: 21.4% for a 4x4 kernel and 21.5% for a 5x5 kernel.", "The most commonly used methods to merge alignments from models trained in opposite directions are variants of grow-diagonal.", "We extract hard alignments for both German-English and English-German with (monolingual) attention optimization, which leads to an AER of 21.5% and 25.6%, respectively.", "Merging these alignments with grow-diagonal leads to an AER of 19.6%, while grow-diagonal-final yields an AER of 19.7%.", "We tuned the interpolation factor λ for the contiguity loss during bidirectional optimization.", "A parameter of 1.0 leads to an AER of 18.2%, 2.0 leads to 18.0%, while 5.0 leads to 17.9%.", "Compared to unidirectional attention optimization it makes sense to pick a higher interpolation factor for the contiguity loss, as it is applied together with the losses of both the forward and backward model.", "For the remaining experiments we use 5.0 as the interpolation factor.", "Bidirectional attention optimization improves the resulting alignment error rate compared to the grow-diagonal heuristic by up to 1.8% for German-English.", "These results are summarized in Table 2.",
"Variants of grow-diagonal have to rely on the hard alignments generated by the forward and the backward model.", "They only choose from these alignment links and therefore do not have the ability to generate new alignment links.", "In contrast, bidirectional attention optimization takes the parameters of the underlying models into account and optimizes the underlying attention logits simultaneously for both models to fit the sentence pair.", "In the example in Figure 8, bidirectional attention optimization is able to correctly predict an alignment link between übereinstimmend and proven that did not appear at all in the individual alignments of the forward and backward model.", "(Figure 6: AER with respect to gradient descent steps during attention optimization for German-English.)", "We plot the behavior of attention optimization with a varying number of gradient descent steps in Figure 6.", "For both unidirectional and bidirectional models attention optimization leads to steadily improving results.", "Without using the additional contiguity loss, the lowest AER appears after three gradient descent steps and slightly increases afterwards.", "When using the contiguity loss, AER results continue to decrease with additional steps.", "The contiguity loss seems to stabilize optimization and avoids overfitting of the optimized attention activations when tuning them for a single sentence pair.", "We now use the alignment layer with the full decoder context by adding an additional self-attention layer that does not mask out the future target context.", "We extract alignments from the previous models with bidirectional attention optimization and use those alignments for guided alignment training.", "This works surprisingly well.", "While the alignments used for training yielded an AER of 17.9% after bidirectional attention optimization (Table 4), the full-context model trained with these alignments further improved the AER to 16.0% while using a single model for German-English (Table 3).", "After guided alignment training is complete, we do not apply attention optimization, since that would require a distribution over target words, which is not available in this model.", "We now report AER results across all three language pairs.", "Precision and recall scores are included in the Supplemental Material.", "We first extract alignments from a unidirectional model, a common use case where translations and alignments need to be extracted simultaneously.", "Table 3 compares our results to GIZA++ and Zenkel et al. (2019).", "We observe that guided alignment training leads to gains across all language pairs.", "In a single direction our approach consistently outperforms GIZA++ by an absolute AER difference between 1.3% (En-Fr) and 3.9% (Ro-En).", "Table 4 compares bidirectional results after symmetrization.", "We compare to purely neural and purely statistical systems.", "For symmetrizing alignments of the guided model and GIZA++, we use grow-diagonal.", "Bidirectional attention optimization is already able to outperform GIZA++ and Garg et al. (2019) on all language pairs except English-French.", "Using guided alignment training further improves results across all language pairs and leads to a consistent AER improvement compared to GIZA++ and the neural results reported by Garg et al. (2019).",
"(Garg et al. (2019) only report bidirectional results after symmetrization.)", "(For additional comparisons, including neural models bootstrapped with GIZA++ alignments, see the Supplemental Material.)", "These results show that it is possible to outperform GIZA++ both in a single direction and after symmetrization, without using any alignments generated by statistical alignment systems to bootstrap training.", "This work presents the first end-to-end neural approach to the word alignment task which consistently outperforms GIZA++ in terms of alignment error rate.", "Our approach extends a pre-trained state-of-the-art neural translation model with an additional alignment layer, which is trained in isolation without changing the parameters used for the translation task.", "We introduce a novel auxiliary loss function to encourage contiguity in the alignment matrix and a symmetrization algorithm that jointly optimizes the alignment matrix within two models which are trained in opposite directions.", "In a final step the model is re-trained to leverage the full target context with a guided alignment loss.", "Our results on three language pairs are consistently superior to both GIZA++ and prior work on end-to-end neural alignment.", "As the resulting model repurposes a pre-trained translation model without changing its parameters, it can directly benefit from improvements in translation quality, e.g. by adaptation via fine-tuning." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "other", "objective", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "result", "abstain" ]
[ "Non-autoregressive (NAR) models generate all the tokens of a sequence in parallel, resulting in faster generation speed compared to their autoregressive (AR) counterparts but at the cost of lower accuracy.", "Different techniques including knowledge distillation and source-target alignment have been proposed to bridge the gap between AR and NAR models in various tasks such as neural machine translation (NMT), automatic speech recognition (ASR), and text to speech (TTS).", "With the help of those techniques, NAR models can catch up with the accuracy of AR models in some tasks but not in some others.", "In this work, we conduct a study to understand the difficulty of NAR sequence generation and try to answer: (1) Why NAR models can catch up with AR models in some tasks but not all?", "(2) Why techniques like knowledge distillation and source-target alignment can help NAR models.", "Since the main difference between AR and NAR models is that NAR models do not use dependency among target tokens while AR models do, intuitively the difficulty of NAR sequence generation heavily depends on the strongness of dependency among target tokens.", "To quantify such dependency, we propose an analysis model called CoMMA to characterize the difficulty of different NAR sequence generation tasks.", "We have several interesting findings: 1) Among the NMT, ASR and TTS tasks, ASR has the most target-token dependency while TTS has the least.", "2) Knowledge distillation reduces the target-token dependency in target sequence and thus improves the accuracy of NAR models.", "3) Source-target alignment constraint encourages dependency Equal contribution.", "of a target token on source tokens and thus eases the training of NAR models.", "Non-autoregressive (NAR) models (Oord et al., 2017; Gu et al., 2017; Chen et al., 2019; Ren et al., 2019), which generate all the tokens in a target sequence in parallel and can speed up inference, are widely explored in natural language and speech processing tasks such as neural machine translation (NMT) (Gu et al., 2017; Lee et al., 2018; Guo et al., 2019a; Wang et al., 2019; Li et al., 2019b; Guo et al., 2019b), automatic speech recognition (ASR) (Chen et al., 2019) and text to speech (TTS) synthesis (Oord et al., 2017; Ren et al., 2019).", "However, NAR models usually lead to lower accuracy than their autoregressive (AR) counterparts since the inner dependencies among the target tokens are explicitly removed.", "Several techniques have been proposed to alleviate the accuracy degradation, including 1) knowledge distillation (Oord et al., 2017; Gu et al., 2017; Guo et al., 2019a,b; Ren et al., 2019), 2) imposing source-target alignment constraint with fertility (Gu et al., 2017), word mapping (Guo et al., 2019a), attention distillation (Li et al., 2019b) and duration prediction (Ren et al., 2019).", "With the help of those techniques, it is observed that NAR models can match the accuracy of AR models for some tasks (Ren et al., 2019), but the gap still exists for some other tasks (Gu et al., 2017; Chen et al., 2019).", "Therefore, several questions come out naturally: (1) Why the gap still exists for some tasks?", "Are some tasks more difficult for NAR generation than others?", "(2) Why the techniques like knowledge distillation and source-target alignment can help NAR generation?", "The main difference between AR and NAR models is that NAR models do not consider the dependency among target tokens, which is also the root cause of accuracy drop of NAR models.", "Thus, to better 
"Thus, to better understand NAR sequence generation and answer the above questions, we need to characterize and quantify the target-token dependency, which turns out to be non-trivial since the sequences could be of different modalities (i.e., speech or text).", "For this purpose, we design a novel model called COnditional Masked prediction model with Mix-Attention (CoMMA), inspired by the mix-attention in He et al. (2018) and the masked language modeling in Devlin et al. (2018): in CoMMA, (1) the prediction of one target token can attend to all the source and target tokens with mix-attention, and (2) target tokens are randomly masked with varying probabilities.", "CoMMA can help us to measure target-token dependency using the ratio of the attention weights on the target context over that on the full (both source and target) context when predicting a target token: the bigger the ratio, the larger the dependency among target tokens.", "We conduct a comprehensive study in this work and obtain several interesting discoveries that can answer the previous questions.", "First, we find that the rank of the target-token dependency among the three tasks is ASR > NMT > TTS: ASR has the largest dependency while TTS has the smallest.", "This finding is consistent with the accuracy gap between AR and NAR models and demonstrates the difficulty of NAR generation across tasks.", "Second, we replace the target sequence of the original training data with the sequence generated by an AR model (i.e., through knowledge distillation) and use the new data to train CoMMA; we find that the target-token dependency is reduced.", "Smaller target-token dependency makes NAR training easier and thus improves the accuracy.", "Third, source-target alignment constraints such as explicit duration prediction (Ren et al., 2019) or implicit attention distillation (Li et al., 2019b) also reduce the target-token dependency, thus helping the training of NAR models.", "The main contributions of this work are as follows: We design a novel model, conditional masked prediction model with mix-attention (CoMMA), to measure the token dependency for sequence generation.", "With CoMMA, we find that: 1) Among the three tasks, ASR is the most difficult and TTS is the least for NAR generation; 2) both knowledge distillation and imposing a source-target alignment constraint reduce the target-token dependency, and thus reduce the difficulty of training NAR models.", "In this section, we analyze the token dependency in the target sequence with a novel conditional masked prediction model with mix-attention (CoMMA).", "We first introduce the design and structure of CoMMA, and then describe how to measure the target token dependency based on CoMMA.", "It is non-trivial to directly measure and compare the target token dependency in different modalities (i.e., speech or text) and different conditional source modalities (i.e., speech or text).", "Therefore, we have several considerations in the design of CoMMA: 1) We use masked language modeling as in BERT (Devlin et al., 2018) with the source condition to train CoMMA, which can help measure the dependency on the target context when predicting the current masked token.", "2) In order to ensure the dependency on source and target tokens can be comparable, we use mix-attention (He et al., 2018) to calculate the attention weights on both source and target tokens in a single softmax function.", "The model architecture of CoMMA is shown in Figure 1.",
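A minimal NumPy sketch of mix-attention: source and target keys are concatenated so that a single softmax makes the weights on both sides directly comparable; the shapes and hidden states are illustrative:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mix_attention(target_h, source_h):
    """Single-softmax attention over the concatenation of target and
    source states; per-token weights on target vs. source context are
    directly comparable (the basis of the density ratio in Section 2.2)."""
    keys = np.concatenate([target_h, source_h], axis=0)  # N target + M source
    d = target_h.shape[-1]
    A = softmax(target_h @ keys.T / np.sqrt(d))          # rows sum to 1 over N+M
    return A, A @ keys

rng = np.random.default_rng(3)
A, _ = mix_attention(rng.normal(size=(4, 32)), rng.normal(size=(6, 32)))
print(A.shape, A.sum(axis=-1))  # (4, 10), each row sums to 1
```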
"Specifically, CoMMA differs from the standard Transformer (Vaswani et al., 2017) as follows: 1) Some tokens are randomly replaced by a special mask token ⟨M⟩ with probability p, and the model is trained to predict the original unmasked tokens.", "2) We employ a mix-attention mechanism (He et al., 2018) where layer i in the decoder can attend to itself and to layer i in the encoder at the same time and compute the attention weights in a single softmax function.", "We share the parameters of the attention and feed-forward layers between the encoder and decoder.", "3) Following He et al. (2018), we add source/target embeddings to tell the model whether a token is from the source or the target sequence, and also add position embeddings with the positions of source and target tokens both starting from zero.", "4) The encoder and decoder pre-nets (Shen et al., 2018) vary in different tasks: For TTS, the encoder pre-net consists of only an embedding lookup table, and the decoder pre-net consists of a 2-layer dense network with ReLU activation.", "For ASR, the encoder pre-net consists of a 3-layer 2D convolutional network, and the decoder pre-net consists of only an embedding lookup table.", "For NMT, both the encoder and decoder pre-nets consist of only an embedding lookup table.", "CoMMA is designed to measure the target token dependency in a variety of sequence generation settings, including AR (unidirectional) generation, NAR generation, bidirectional generation or even identity copy.", "To this end, we vary the mask probability p (the ratio of masked tokens among all target tokens; considering the continuity of the mel-spectrogram frames in a speech sequence, we mask the frames by chunk, each chunk with frame size 10) in a uniform distribution p ∼ U(0.0, 1.0) when training CoMMA.",
"In this way, p = 1 covers NAR generation, p = 0 covers identity copy, and in some cases p can also cover AR generation.", "To measure the target token dependency, we define a metric called the attention density ratio R, which represents the ratio of the attention density (the normalized attention weights) on the target context in mix-attention when predicting a target token with a well-trained CoMMA.", "We describe the calculation of R in the following steps.", "¹ Considering the continuity of the mel-spectrogram frames in a speech sequence, we mask the frames by chunk, each chunk with frame size 10.", "First, for each predicted target token i we compute $\alpha_i = \sum_{j=1}^{N} A_{i,j}, \quad (1)$ where $A_{i,j}$ denotes the attention weight from token i to token j in mix-attention, $i \in [1, N]$ indexes the target tokens while $j \in [N+1, N+M]$ indexes the source tokens, M and N are the lengths of the source and target sequences respectively, and $\sum_{j=1}^{N+M} A_{i,j} = 1$.", "$\alpha_i$ represents the ratio of attention density on the target context when predicting target token i.", "Second, we average the attention density ratio $\alpha_i$ over all the predicted tokens (masked with probability p) in a sentence and get $\frac{1}{|\mathcal{M}_p|} \sum_{i \in \mathcal{M}_p} \alpha_i, \quad (2)$ where $\mathcal{M}_p$ represents the set of masked target tokens under mask probability p and $|\mathcal{M}_p|$ denotes the number of tokens in the set.", "Third, for a given p, we calculate this quantity over all test data and average it to get the final attention density ratio $R(p) = \mathrm{Avg}\big(\frac{1}{|\mathcal{M}_p|} \sum_{i \in \mathcal{M}_p} \alpha_i\big). \quad (3)$", "We vary p and calculate R(p) to measure the density ratio under different conditions, where a small p means more target context can be leveraged and a large p means less.", "In the extreme cases, p = 1 represents NAR generation, while p = 0 represents learning identity copy.", "Given the proposed attention density ratio R(p) based on CoMMA, we can measure the target token dependency of the NAR model in the different tasks.",

Table 1: The AR and NAR models we consider in each task.
Task | AR model                              | NAR model
NMT  | Transformer (Vaswani et al., 2017)    | NAT (Gu et al., 2017) w/ AC
ASR  | Transformer ASR (Karita et al., 2019) | NAR-ASR (Chen et al., 2019) w/ AC
TTS  | Transformer TTS (Li et al., 2019a)    | FastSpeech (Ren et al., 2019)

"In this section, we aim to find out why the gap still exists for the ASR and NMT tasks, while in TTS, NAR models can catch up with the accuracy of AR models.", "We also analyze the causes of the different difficulties across tasks.", "We start by evaluating the accuracy gap between AR and NAR models for NMT, ASR and TTS, and then measure the token dependency based on our proposed CoMMA.", "We first train the AR and NAR models in each task and check the accuracy gap between them to measure the difficulty of NAR generation in each task.", "Configuration of AR and NAR Models: The AR and NAR models we consider are shown in Table 1, where we use Transformer as the AR model and a representative NAR model in each task.", "For a fair comparison, we make some modifications on the NAR models: 1) For ASR, we first train a Transformer ASR as the teacher model and then constrain the attention distributions of NAR-ASR with the alignments converted from the teacher attention weights, which will be introduced and discussed in Section 5.", "2) For NMT, we constrain the KL-divergence of the encoder-to-decoder attention distributions between the AR and NAR models following Li et al. (2019b).", "We also list the hyperparameters of the AR and NAR models for each task in Section A.",
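As a concrete companion to the three-step calculation of R(p) above, the following NumPy sketch computes Eqs. (1)-(3) from saved mix-attention matrices; the function and variable names are ours, and per-head/per-layer averaging is omitted:

```python
import numpy as np

def attention_density_ratio(A, n_tgt):
    """alpha_i (Eq. 1): attention mass on the target context.
    A: (n_tgt, n_tgt + n_src) mix-attention weights; each row sums to 1,
    with the target positions placed first."""
    return A[:, :n_tgt].sum(axis=1)

def R_of_p(attn_list, masked_sets, n_tgt_list):
    """Average alpha over the masked tokens of each sentence (Eq. 2),
    then over the whole test set (Eq. 3)."""
    per_sentence = []
    for A, masked, n_tgt in zip(attn_list, masked_sets, n_tgt_list):
        alpha = attention_density_ratio(np.asarray(A), n_tgt)
        per_sentence.append(alpha[sorted(masked)].mean())
    return float(np.mean(per_sentence))
```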
"Datasets and Evaluations for NMT, ASR and TTS: We conduct experiments on the IWSLT 2014 German-English (De-En) translation dataset for NMT, the LibriTTS dataset (Zen et al., 2019) for ASR, and the LJSpeech dataset (Ito, 2017) for TTS.", "For speech data, we transform the raw audio into mel-spectrograms following Shen et al. (2018) with 50 ms frame size and 12.5 ms hop size.", "For text data, we tokenize sentences with the Moses tokenizer and then segment them into subword symbols using Byte Pair Encoding (BPE) (Sennrich et al., 2015) for subword-level analysis, and convert the text sequence into a phoneme sequence with grapheme-to-phoneme conversion (Sun et al., 2019) for phoneme-level analysis.", "We use BPE for NMT and ASR, and phonemes for TTS by default unless otherwise stated.", "We train all models on 2 NVIDIA 2080Ti GPUs using the Adam optimizer with $\beta_1 = 0.9$, $\beta_2 = 0.98$, $\epsilon = 10^{-9}$, following the same learning rate schedule as in Vaswani et al. (2017).", "For ASR, we evaluate the word error rate (WER) on the test-clean set of the LibriTTS dataset.", "For NMT, we evaluate the BLEU score on the IWSLT 2014 De-En test set.", "For TTS, we randomly split the LJSpeech dataset into 3 sets: 12500 samples for training, 300 samples for validation and 300 samples for testing, and then evaluate the mean opinion score (MOS) on the test set to measure the audio quality.", "The output mel-spectrograms of the TTS model are transformed into audio samples using the pretrained WaveGlow (Prenger et al., 2019).", "Each audio sample is listened to by at least 20 testers, who are all native English speakers.", "The results are shown in Table 2.", "It can be seen that the NAR model can match the accuracy of the AR model in TTS, while the gap still exists in ASR and NMT.", "We calculate both the WER and BLEU metrics in ASR and NMT for better comparison.", "It can be seen that ASR has a larger gap than NMT.", "A larger accuracy gap may indicate that NAR generation is more difficult for the task.", "Next, we try to understand what factors influence the difficulties among different tasks.", "In the last subsection, we analyzed the difficulty of NAR models from the perspective of the accuracy gap.", "In this subsection, we try to find evidence from the target token dependency, which should be consistent with the accuracy gap in measuring the task difficulty.", "Configuration of CoMMA: We train CoMMA with the same configuration on NMT, ASR and TTS: the hidden size, the feed-forward hidden size, and the number of layers are set to 512, 1024 and 6, respectively.", "We list other hyperparameters of CoMMA in Section B.", "We also use the same datasets for each task as described in Section 3.1 to train CoMMA.",
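For reference, the speech preprocessing described in Section 3.1 (50 ms frames, 12.5 ms hop, following Shen et al. (2018)) can be sketched with librosa as below; the sampling rate and the number of mel bins are our assumptions, not values stated in the text:

```python
import librosa

def extract_mel(path, sr=22050, n_mels=80):  # sr and n_mels assumed, not from the paper
    """Mel-spectrogram with 50 ms frame size and 12.5 ms hop size."""
    wav, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=wav, sr=sr,
        n_fft=int(0.050 * sr),        # 50 ms frame size
        win_length=int(0.050 * sr),
        hop_length=int(0.0125 * sr),  # 12.5 ms hop size
        n_mels=n_mels,
    )
    return librosa.power_to_db(mel).T  # (frames, n_mels)
```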
"Results of Token Dependency: We use the attention density ratio calculated from CoMMA (as described in Section 2.2) to measure the target token dependency and show the results in Figure 2.", "It can be seen that the rank of the attention density ratio R(p) is ASR > NMT > TTS for all p.", "Considering that R(p) measures how much context information from the target side is needed to generate a target token, we can see that ASR depends more on the target context and less on the source context, while TTS is the opposite, which is consistent with the accuracy gap between AR and NAR models described in Section 3.1.", "As we vary p from 0.1 to 0.5, R(p) decreases for all tasks since more tokens on the target side are masked.", "We also find that R(p) in NMT decreases more quickly than in the other two tasks, which indicates that NMT is good at learning from the source context when less context information can be leveraged from the target side, while R(p) in ASR decreases little.", "This can also explain why NAR in NMT shows a smaller gap than in ASR.", "In the current and next sections, we investigate why some techniques can help NAR generation, from the perspective of target token dependency.", "We only analyze the knowledge distillation and attention alignment techniques, which are widely used in NAR models, but we believe our analysis method can be applied to other NAR techniques, such as iterative refinement (Lee et al., 2018), fine-tuning from an AR model (Guo et al., 2019b), and so on.", "Most existing NAR models (Oord et al., 2017; Gu et al., 2017; Wang et al., 2019; Guo et al., 2019a,b; Ren et al., 2019) rely on the technique of knowledge distillation, which generates a new target sequence for each original source sequence with a pre-trained AR model and trains the NAR model on the new data for better accuracy.", "In this section, we first conduct experiments to verify the accuracy improvements of knowledge distillation.", "Next, based on our proposed CoMMA, we analyze why knowledge distillation helps NAR models.", "Knowledge Distillation for NAR Models: Given a well-trained AR model $\theta_T$ and a source sequence $x \in \mathcal{X}$ from the original training data, a new target sequence can be generated through $y' = \arg\max_{y} P(y \mid x; \theta_T). \quad (4)$", "We use beam search for NMT and ASR and greedy search for TTS to generate $y'$.", "Given the set of generated sequence pairs $(\mathcal{X}, \mathcal{Y}')$, we train the NAR models with the negative log-likelihood loss $L((\mathcal{X}, \mathcal{Y}'); \theta) = -\sum_{(x, y') \in (\mathcal{X}, \mathcal{Y}')} \log P(y' \mid x; \theta), \quad (5)$ where $\theta$ is the parameter set of the NAR model.", "Experimental Results: We only conduct knowledge distillation on NMT and TTS, since there is no previous work on ASR yet.", "We train the NAR models in NMT and TTS with the raw target token sequences instead of the teacher outputs and compare the results with those in Table 2.", "The accuracy improvements of knowledge distillation are shown in Table 3.", "It can be seen that knowledge distillation can boost the accuracy of NAR models in NMT and TTS, which is consistent with previous works.", "Recently, Zhou et al. (2019) found that knowledge distillation can reduce the complexity of data sets and help NAT to better model the variations in the output data.", "While reasonable, this explanation is mainly at the data level and is not very intuitive.", "In this subsection, we analyze knowledge distillation from a more intuitive perspective, by observing the change of the token dependency based on our proposed CoMMA.", "We measure the target token dependency by training CoMMA with the original training data and with the new data generated through knowledge distillation, respectively.",
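A minimal sketch of this distillation pipeline (Eqs. 4-5); `teacher`, `decode`, and `nar_model` are placeholder interfaces we assume for illustration, not the paper's actual code:

```python
import torch

@torch.no_grad()
def distill_dataset(teacher, sources, decode):
    """Replace each original target with the teacher's output y' (Eq. 4),
    where decode() runs beam search (NMT/ASR) or greedy search (TTS)."""
    return [(x, decode(teacher, x)) for x in sources]

def nar_nll_loss(nar_model, batch):
    """Negative log-likelihood of the distilled targets (Eq. 5)."""
    loss = 0.0
    for x, y_prime in batch:
        log_probs = nar_model(x).log_softmax(dim=-1)   # (len, vocab)
        loss = loss - log_probs.gather(-1, y_prime.unsqueeze(-1)).sum()
    return loss
```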
"The results are shown in Figure 3.", "It can be seen that knowledge distillation decreases the attention density ratio R(p) on both tasks, indicating that knowledge distillation reduces the dependency on the target-side context when predicting a target token, which is helpful for NAR model training.", "Without the help of the target context, NAR models usually suffer from ambiguous attention to the source context, which affects the accuracy.", "[Figure 3: Attention density ratio R(p) for the NMT and TTS tasks under different p, with and without knowledge distillation (KD).]", "Recently, many works have proposed a variety of approaches to help with the source-target alignment of NAR models, which can improve the estimation of the soft alignment in the attention mechanism.", "For example, Li et al. (2019b) constrain the KL-divergence of the encoder-to-decoder attention distributions between the AR and NAR models.", "Gu et al. (2017) predict the fertility of the source tokens to approximate the alignments between the target and source sequences.", "Guo et al. (2019a) convert source tokens to target tokens with a phrase table or embedding mapping for alignment.", "Ren et al. (2019) predict the duration (the number of mel-spectrogram frames) of each phoneme.", "In this section, we first study the effectiveness of the alignment constraint for NAR models, and then analyze why the alignment constraint helps NAR models by observing the changes of token dependency based on our proposed CoMMA.", "Alignment Constraint for NAR Models: We choose the attention constraint mechanism commonly used in previous works for each task.", "For NMT, we follow Li et al. (2019b) to minimize the KL-divergence between the attention distributions of the AR and NAR models as follows: $L_{ac} = \frac{1}{N} \sum_{i=1}^{N} D_{KL}(A'_i \,\|\, A_i), \quad (6)$ where $A'_i$ and $A_i$ denote the source-target attention weights from the AR teacher model and the NAR student model, respectively.", "$A', A \in \mathbb{R}^{N \times M}$, where N and M are the numbers of tokens in the target and source sequences.", "For TTS, we follow Ren et al. (2019) to extract the encoder-to-decoder attention alignments from the well-trained AR teacher model, convert them to a phoneme duration sequence, and then train a duration predictor to expand the hidden states of the source sequence to match the length of the target sequence.", "For ASR, since no previous work proposes an alignment constraint for NAR, we design a new alignment constraint method and explore its effectiveness.", "We first calculate the expected position of the teacher's attention distribution for the i-th target token, $E_i = \sum_{j=1}^{M} j \cdot A'_{i,j}$, and round it to the nearest integer.", "Then we constrain the attention weights of the i-th target token in the NAR model so that it can only attend to the source positions between $E_{i-1}$ and $E_{i+1}$.", "Specially, the first target token can only attend to the source positions between 1 and $E_2$, while the last target token can only attend to the positions between $E_{N-1}$ and M.", "We apply this alignment constraint for ASR only in the training stage.", "Experimental Results: We follow the model configuration and datasets described in Section 3.1, and explore the accuracy improvements when adding the attention constraint to NAR models.",
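The ASR constraint just described is easy to express as an attention mask; below is a PyTorch sketch under our own naming, for a single attention matrix with batch and head dimensions omitted:

```python
import torch

def alignment_window_mask(teacher_attn):
    """Allowed source positions per target token: E_i = sum_j j * A'_{i,j}
    (rounded), with token i restricted to [E_{i-1}, E_{i+1}]; the first
    token gets [1, E_2] and the last gets [E_{N-1}, M], as in the text."""
    N, M = teacher_attn.shape
    positions = torch.arange(1, M + 1, dtype=teacher_attn.dtype)
    E = torch.round(teacher_attn @ positions).long()            # (N,)
    lo = torch.cat([torch.tensor([1]), E[:-1]])                 # E_{i-1}; first token -> 1
    hi = torch.cat([E[1:], torch.tensor([M])])                  # E_{i+1}; last token -> M
    src = torch.arange(1, M + 1).unsqueeze(0)                   # (1, M), 1-indexed
    return (src >= lo.unsqueeze(1)) & (src <= hi.unsqueeze(1))  # (N, M) boolean mask
```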
"The results are shown in Table 4.", "It can be seen that the attention constraint can not only improve the performance of NMT and TTS, as previous works (Li et al., 2019b; Ren et al., 2019) demonstrated, but also help the NAR-ASR model achieve better scores.", "For simplicity, we use the method described in Equation 6 to help the training of CoMMA, where the teacher model is the AR model and the student model is CoMMA.", "We minimize the KL-divergence between the per-head encoder-to-decoder attention distributions of the AR model and CoMMA.", "First, we normalize the encoder-to-decoder attention weights in each head of mix-attention to convert each row of the attention weights into a distribution: $\hat{A}_{i,j} = \frac{A_{i,N+j}}{\sum_{k=1}^{M} A_{i,N+k}} \ \text{for each}\ i \in [1, N], j \in [1, M], \quad (7)$ where $A \in \mathbb{R}^{N \times (N+M)}$ is the weight matrix of mix-attention described in Section 2.2, $\hat{A} \in \mathbb{R}^{N \times M}$ is the normalized encoder-to-decoder attention weights, and M and N are the lengths of the source and target sequences.", "Then, we compute the KL-divergence loss for each head as follows: $L_{ac} = \frac{1}{N} \sum_{i=1}^{N} D_{KL}(A'_i \,\|\, \hat{A}_i), \quad (8)$ where $A' \in \mathbb{R}^{N \times M}$ is the encoder-to-decoder attention of the AR teacher model.", "We average $L_{ac}$ over all heads and layers to get the final attention constraint loss for CoMMA.", "We measure the token dependency by calculating the attention density ratio R(p) based on CoMMA, and show the results in Figure 4.", "It can be seen that the alignment constraint helps reduce the ratio R(p) on each task, and thus reduces the dependency on the target context when predicting target tokens.", "Meanwhile, the alignment constraint helps the model extract more information from the source context, which aids the learning of NAR models.", "Another interesting finding is that the NAR model in TTS benefits from the attention constraint the most, as shown in Table 4, while at the same time TTS has the smallest attention density ratio, as shown in Figure 4.", "These observations suggest that NAR models with small target token dependency can benefit greatly from the alignment constraint.", "Several works try to analyze and understand NAR models on different tasks.", "We discuss these analyses from two aspects: knowledge distillation and the source-target alignment constraint.", "Knowledge Distillation: Knowledge distillation has long been used to compress model size (Hinton et al., 2015; Furlanello et al., 2018; Yang et al., 2018; Anil et al., 2018; Li et al., 2017) or transfer the knowledge of a teacher model to a student model (Tan et al., 2019; Liu et al., 2019a,b), and was soon applied to NAR models (Gu et al., 2017; Oord et al., 2017; Guo et al., 2019a; Wang et al., 2019; Li et al., 2019b; Guo et al., 2019b; Ren et al., 2019) to boost their accuracy.", "Some works focus on studying why knowledge distillation works: Phuong and Lampert (2019) provide insights into the mechanisms of knowledge distillation by studying the special case of linear and deep linear classifiers, finding that data geometry, optimization bias and strong monotonicity determine the success of distillation; Yuan et al. (2019) argue that the success of KD is also due to the regularization of soft targets, which might be as important as the similarity information between categories.", "However, few works have studied why knowledge distillation benefits NAR training.",
"Recently, Zhou et al. (2019) investigate why knowledge distillation is important for the training of NAR models in the NMT task and find that knowledge distillation can reduce the complexity of data sets and help the NAR model learn the variations in the output data.", "Lee et al. (2018) present experiments and analysis demonstrating the necessity of multiple-iteration generation for NAT.", "They also investigate the effectiveness of knowledge distillation in different tasks and make the assumption that the teacher model can essentially clean the training data, so that the distilled NAR model substantially outperforms a NAR model trained with raw data.", "Attention Alignment Constraint: Previous work pointed out that adding additional alignment knowledge can improve the estimation of the soft alignment in attention mechanism models.", "For example, Chen et al. (2016) use the Viterbi alignments of IBM model 4 as additional knowledge during NMT training by calculating the divergence between the attention weights and the statistical alignment information.", "Compared with AR models, the attention distributions of NAR models are more ambiguous, which leads to the poor performance of the NAR model.", "Recent works employ an attention alignment constraint between well-trained AR and NAR models to train a better NAR model.", "Li et al. (2019b) leverage intermediate hidden information from a well-trained AR-NMT teacher model to improve the NAR-NMT model by minimizing the KL-divergence between the per-head encoder-decoder attention of the teacher and the student.", "Ren et al. (2019) choose the encoder-decoder attention head from the AR-TTS teacher as the attention alignments to improve the performance of the NAR model in TTS.", "In this paper, we conducted a comprehensive study on NAR models in the NMT, ASR and TTS tasks to analyze several research questions, including the difficulty of NAR generation and why knowledge distillation and alignment constraints help NAR models.", "We designed a novel model, CoMMA, and a metric called the attention density ratio to measure the dependency on the target context when predicting a target token, which allows us to analyze these questions in a unified manner.", "Through a series of empirical studies, we demonstrated that the difficulty of NAR generation correlates with the target token dependency, and that knowledge distillation as well as the alignment constraint reduces the dependency on target tokens and encourages the model to rely more on the source context for target token prediction, which improves the accuracy of NAR models.", "We believe our analyses can shed light on the understanding and further improvement of NAR models.", "Fine-tuning by curriculum learning for non-autoregressive neural machine translation. arXiv preprint arXiv:1911.08717.", "Tianyu He, Xu Tan, Yingce Xia, Di He, Tao Qin, Zhibo Chen, and Tie-Yan Liu. 2018. Layer-wise coordination between encoder and decoder for neural machine translation. In Advances in Neural Information Processing Systems, pages 7944-7954.", "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.", "Keith Ito. 2017. The LJ Speech dataset. URL https://keithito.com/LJ-Speech-Dataset.", "Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. arXiv preprint arXiv:1802.06901.",
"Naihan Li, Shujie Liu, Yanqing Liu, Sheng Zhao, Ming Liu, and M. Zhou. 2019a. Neural speech synthesis with transformer network. In AAAI.", "Yuncheng Li, Jianchao Yang, Yale Song, Liangliang Cao, Jiebo Luo, and Li-Jia Li. 2017. Learning from noisy labels with distillation. In ICCV, pages 1928-1936.", "Zhuohan Li, Zi Lin, Di He, Fei Tian, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2019b. Hint-based training for non-autoregressive machine translation. arXiv preprint arXiv:1909.06708.", "Yuchen Liu, Hao Xiong, Zhongjun He, Jiajun Zhang, Hua Wu, Haifeng Wang, and Chengqing Zong. 2019b. End-to-end speech translation with knowledge distillation. arXiv preprint arXiv:1904.08075.", "Mary Phuong and Christoph Lampert. 2019. Towards understanding knowledge distillation. In International Conference on Machine Learning, pages 5142-5151.", "Ryan Prenger, Rafael Valle, and Bryan Catanzaro. 2019. WaveGlow: A flow-based generative network for speech synthesis. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3617-3621.", "This work was supported in part by the National Key R&D Program of China (Grant No. 2018AAA0100603), Zhejiang Natural Science Foundation (LR19F020006), the National Natural Science Foundation of China (Grant No. 61836002), the National Natural Science Foundation of China (Grant No. U1611461), and the National Natural Science Foundation of China (Grant No. 61751209).", "This work was also partially funded by Microsoft Research Asia.", "Thanks to Tao Qin for the valuable suggestions, comments and guidance on this paper." ]
[ "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "result", "objective", "abstain", "objective", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "objective", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "In this paper, we study Multimodal Named Entity Recognition (MNER) for social media posts.", "Existing approaches for MNER mainly suffer from two drawbacks: (1) despite generating word-aware visual representations, their word representations are insensitive to the visual context; (2) most of them ignore the bias brought by the visual context.", "To tackle the first issue, we propose a multimodal interaction module to obtain both image-aware word representations and word-aware visual representations.", "To alleviate the visual bias, we further propose to leverage purely text-based entity span detection as an auxiliary module, and design a Unified Multimodal Transformer to guide the final predictions with the entity span predictions.", "Experiments show that our unified approach achieves the new state-of-the-art performance on two benchmark datasets.", "Recent years have witnessed the explosive growth of user-generated contents on social media platforms such as Twitter.", "While empowering users with rich information, the flourish of social media also solicits the emerging need of automatically extracting important information from these massive unstructured contents.", "As a crucial component of many information extraction tasks, named entity recognition (NER) aims to discover named entities in free text and classify them into pre-defined types, such as person ( PER ), location ( LOC ) and organization ( ORG ).", "Given its importance, NER has attracted much attention in the research community (Yadav and Bethard, 2018).", "Although many methods coupled with either discrete shallow features (Zhou and Su, 2002; Finkel et al., 2005; Torisawa et al., 2007) or continuous deep features (Lample et al., 2016; Ma and Hovy, Corresponding author.", "[ Oracle Arena LOC ] wearing off White x [ Jordan MISC ]", "Figure 1: Two examples for Multimodal Named Entity Recognition (MNER).", "Named entities and their entity types are highlighted.", "2016) have shown success in identifying entities in formal newswire text, most of them perform poorly on informal social media text (e.g., tweets) due to its short length and noisiness.", "To adapt existing NER models to social media, various methods have been proposed to incorporate many tweet-specific features (Ritter et al., 2011; Li et al., 2012, 2014; Limsopatham and Collier, 2016).", "More recently, as social media posts become increasingly multimodal, several studies proposed to exploit useful visual information to improve the performance of NER (Moon et al., 2018; Zhang et al., 2018; Lu et al., 2018).", "In this work, following the recent trend, we focus on multimodal named entity recognition (MNER) for social media posts, where the goal is to detect named entities and identify their entity types given a { sentence, image } pair.", "For example, in Fig. 1.a, it is expected to recognize that Kevin Durant , Oracle Arena , and Jordan belong to the category of person names (i.e., PER ), place names (i.e., LOC ), and other names (i.e., MISC ), respectively.", "While previous work has shown success of fusing visual information into NER (Moon et al., 2018; Zhang et al., 2018; Lu et al., 2018), they still suffer from several limitations: (1) The first obstacle lies in the non-contextualized word representations, where each word is represented by the same vector, regardless of the context it occurs in.", "However, the meanings of many polysemous entities in social media posts often rely on its context words.", "Take Fig. 
1.a as an example: without the context words 'wearing off', it is hard to figure out whether Jordan refers to a shoe brand or a person.", "(2) Although most existing methods focus on modeling inter-modal interactions to obtain word-aware visual representations, the word representations in their final hidden layer are still based only on the textual context, and are thus insensitive to the visual context.", "Intuitively, the associated image often provides more context to resolve polysemous entities, and should contribute to the final word representations (e.g., in Fig. 1.b, the image can supervise the final word representations of Kian and David to be closer to persons than to animals).", "(3) Most previous approaches largely ignore the bias of incorporating visual information.", "Actually, in most social media posts, the associated image tends to highlight only one or two entities in the sentence, without mentioning the other entities.", "In these cases, directly integrating visual information will inevitably lead the model to better recognize the entities highlighted by images, but fail to identify the other entities (e.g., Oracle Arena and King of the Jungle in Fig. 1).", "To address these limitations, we resort to existing pre-trained contextualized word representations, and propose a unified multimodal architecture based on Transformer (Vaswani et al., 2017), which can effectively capture inter-modality interactions and alleviate the visual bias.", "Specifically, we first adopt a recently pre-trained contextualized representation model (Devlin et al., 2018) as our sentence encoder, whose multi-head self-attention mechanism can guide each word to capture the semantic and syntactic dependency upon its context.", "Second, to better capture the implicit alignments between words and images, we propose a multimodal interaction (MMI) module, which essentially couples the standard Transformer layer with a cross-modal attention mechanism to produce an image-aware word representation and a word-aware visual representation for each input word, respectively.", "Finally, to largely eliminate the bias of the visual context, we propose to leverage text-based entity span detection as an auxiliary task, and design a unified neural architecture based on Transformer.", "In particular, a conversion matrix is designed to construct the correspondence between the auxiliary and the main tasks, so that the entity span information can be fully utilized to guide the final MNER predictions.", "Experimental results show that our Unified Multimodal Transformer (UMT) brings consistent performance gains over several highly competitive unimodal and multimodal methods, and outperforms the state of the art by a relative improvement of 3.7% and 3.8% on the two benchmarks, respectively.", "We propose a Multimodal Transformer model for the task of MNER, which empowers Transformer with a multimodal interaction module to capture the inter-modality dynamics between words and images.", "To the best of our knowledge, this is the first work to apply Transformer to MNER.", "Based on the above Multimodal Transformer, we further design a unified architecture to incorporate a text-based entity span detection module, aiming to alleviate the bias of the visual context in MNER with the guidance of entity span predictions from this auxiliary module.", "In this section, we first formulate the MNER task, and give an overview of our method.", "We then delve into the details of each component in our model.", "Task Formulation: Given a sentence S and its
associated image V as input, the goal of MNER is to extract a set of entities from S and classify each extracted entity into one of the pre-defined types.", "As with most existing work on MNER, we formulate the task as a sequence labeling problem.", "Let $S = (s_1, s_2, \ldots, s_n)$ denote a sequence of input words, and $y = (y_1, y_2, \ldots, y_n)$ be the corresponding label sequence, where $y_i \in \mathcal{Y}$ and $\mathcal{Y}$ is the pre-defined label set with the BIO2 tagging schema (Sang and Veenstra, 1999).", "Fig. 2.a illustrates the overall architecture of our Unified Multimodal Transformer, which contains three main components: (1) representation learning for the unimodal inputs; (2) a Multimodal Transformer for MNER; and (3) a unified architecture with an auxiliary entity span detection (ESD) module.", "As shown at the bottom of Fig. 2.a, we first extract contextualized word representations and visual block representations from the input sentence and the input image, respectively.", "The right part of Fig. 2.a illustrates our Multimodal Transformer model for MNER.", "Specifically, a Transformer layer is first employed to derive each word's textual hidden representation.", "Next, a multimodal interaction (MMI) module is devised to fully capture the inter-modality dynamics between the textual hidden representations and the visual block representations.", "The hidden representations from MMI are then fed to a conditional random field (CRF) layer to produce the label for each word.", "To alleviate the visual bias in MNER, we further stack a purely text-based ESD module in the left part of Fig. 2.a, where we feed its hidden representations to another CRF layer to predict each word's entity span label.", "More importantly, to utilize this for our main MNER task, we design a conversion matrix to encode the dependency relations between corresponding labels from ESD to MNER, so that the entity span predictions from ESD can be integrated to get the final MNER label for each word.", "Word Representations: Due to its capability of giving different representations for the same word in different contexts, we employ the recent contextualized representations from BERT (Devlin et al., 2018) as our sentence encoder.", "Following Devlin et al. (2018), each input sentence is preprocessed by inserting two special tokens, i.e., appending [CLS] to the beginning and [SEP] to the end.", "Formally, let $S' = (s_0, s_1, \ldots, s_{n+1})$ be the modified input sentence, where $s_0$ and $s_{n+1}$ denote the two inserted tokens.", "Let $X = (x_0, x_1, \ldots, x_{n+1})$ be the word representations of $S'$, where $x_i$ is the sum of the word, segment, and position embeddings for each token $s_i$.", "As shown in the bottom left of Fig. 2.a, X is then fed to the BERT encoder to obtain $C = (c_0, c_1, \ldots, c_{n+1})$, where $c_i \in \mathbb{R}^d$ is the generated contextualized representation for $x_i$.",
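A minimal sketch of this encoding step using the public HuggingFace checkpoints; the checkpoint name follows the cased BERT-base model mentioned later in the experiments and is our choice for illustration, not code from the paper:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
encoder = BertModel.from_pretrained("bert-base-cased")

sentence = "Kevin Durant enters Oracle Arena"
inputs = tokenizer(sentence, return_tensors="pt")  # inserts [CLS] and [SEP]
with torch.no_grad():
    C = encoder(**inputs).last_hidden_state         # (1, n+2, d) with d = 768
```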
"Visual Representations: As one of the state-of-the-art CNN models for image recognition, the Residual Network (ResNet) (He et al., 2016) has shown its capability of extracting meaningful feature representations of the input image in its deep layers.", "We therefore keep the output from the last convolutional layer of a pretrained 152-layer ResNet to represent each image, which essentially splits each input image into 7x7 = 49 visual blocks of the same size and represents each block with a 2048-dimensional vector.", "Specifically, given an input image V, we first resize it to 224x224 pixels, and obtain its visual representations from ResNet, denoted as $U = (u_1, u_2, \ldots, u_{49})$, where $u_i$ is the 2048-dimensional vector representation of the i-th visual block.", "To project the visual representations into the same space as the word representations, we further convert U with a linear transformation: $V = W_u^{\top} U$, where $W_u \in \mathbb{R}^{2048 \times d}$ is the weight matrix.¹", "¹ Bias terms are omitted to avoid confusion in this paper.", "As shown in the bottom right of Fig. 2.a, $V = (v_1, v_2, \ldots, v_{49})$ are the visual representations generated from ResNet.", "In this subsection, we present our proposed Multimodal Transformer for MNER.", "As illustrated on the right of Fig. 2.a, we first add a standard Transformer layer over C to obtain each word's textual hidden representation: $R = (r_0, r_1, \ldots, r_{n+1})$, where $r_i \in \mathbb{R}^d$ denotes the generated hidden representation for $x_i$.", "Motivation: While the above Transformer layer can capture which context words are more relevant to the prediction of an input word $x_i$, it fails to consider the associated visual context.", "On the one hand, due to the short length of textual contents on social media, the additional visual context may guide each word to learn better word representations.", "On the other hand, since each visual block is often closely related to several input words, incorporating the visual block representations can potentially make the predictions of the related words more accurate.", "Inspired by these observations, we propose a multimodal interaction (MMI) module to learn an image-aware word representation and a word-aware visual representation for each word.", "Cross-Modal Transformer (CMT) Layer: As shown on the left of Fig. 2.b, to learn better word representations with the guidance of associated images, we first employ an m-head cross-modal attention mechanism (Tsai et al., 2019), treating $V \in \mathbb{R}^{d \times 49}$ as queries and $R \in \mathbb{R}^{d \times (n+1)}$ as keys and values: $\mathrm{CA}_i(V, R) = \mathrm{softmax}\big(\frac{[W_{q_i} V]^{\top} [W_{k_i} R]}{\sqrt{d/m}}\big) [W_{v_i} R]^{\top}$; $\mathrm{MH\text{-}CA}(V, R) = W' [\mathrm{CA}_1(V, R), \ldots, \mathrm{CA}_m(V, R)]^{\top}$, where $\mathrm{CA}_i$ refers to the i-th head of cross-modal attention, and $\{W_{q_i}, W_{k_i}, W_{v_i}\} \in \mathbb{R}^{(d/m) \times d}$ and $W' \in \mathbb{R}^{d \times d}$ denote the weight matrices for the query, key, value, and multi-head attention, respectively.", "Next, we stack another three sub-layers on top: $\tilde{P} = \mathrm{LN}(V + \mathrm{MH\text{-}CA}(V, R)), \quad (1)$ $P = \mathrm{LN}(\tilde{P} + \mathrm{FFN}(\tilde{P})), \quad (2)$ where FFN is the feed-forward network (Vaswani et al., 2017), LN is layer normalization (Ba et al., 2016), and $P = (p_1, p_2, \ldots, p_{49})$ are the output representations of the CMT layer.",
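The CMT layer can be sketched in PyTorch as below; we use the library's fused multi-head attention instead of spelling out the per-head projections of MH-CA, so this is an approximation of the layer rather than the authors' implementation (d = 768 and m = 12 follow the hyperparameters reported later; the 4d inner FFN size is conventional, not stated here):

```python
import torch
import torch.nn as nn

class CrossModalTransformerLayer(nn.Module):
    """Queries from one modality attend to keys/values from the other,
    followed by the residual + LayerNorm + FFN sub-layers of Eqs. (1)-(2)."""
    def __init__(self, d=768, m=12):
        super().__init__()
        self.mha = nn.MultiheadAttention(d, m, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))

    def forward(self, query, context):
        # e.g. query: (B, 49, d) visual blocks; context: (B, n+2, d) words
        attended, _ = self.mha(query, context, context)
        p_tilde = self.norm1(query + attended)          # Eq. (1)
        return self.norm2(p_tilde + self.ffn(p_tilde))  # Eq. (2)
```

The coupled CMT layer described next is a second instance of the same module with the roles of query and context swapped.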
"Coupled CMT Layer: However, since the visual representations are treated as queries in the above CMT layer, each generated vector $p_i$ corresponds to the i-th visual block instead of the i-th input word.", "Ideally, the image-aware word representations should correspond to the words.", "To address this, we propose to couple P with another CMT layer, which treats the textual representations R as queries, and P as keys and values.", "As shown in the top left of Fig. 2.a, this coupled CMT layer generates the final image-aware word representations, denoted by $A = (a_0, a_1, \ldots, a_{n+1})$.", "To obtain a visual representation for each word, it is necessary to align each word with its closely related visual blocks, i.e., to assign high/low attention weights to its related/unrelated visual blocks.", "Hence, as shown in the right part of Fig. 2.b, we use a CMT layer treating R as queries and V as keys and values, which can be considered a symmetric version of the left CMT layer.", "Finally, it generates the word-aware visual representations, denoted by $Q = (q_0, q_1, \ldots, q_{n+1})$.", "Visual Gate: As pointed out in previous studies (Zhang et al., 2018; Lu et al., 2018), it is unreasonable to align function words such as 'the', 'of', and 'well' with any visual block.", "Therefore, it is important to incorporate a visual gate to dynamically control the contribution of the visual features.", "Following the practice in previous work, we design a visual gate combining the information from the above word representations A and visual representations Q as follows: $g = \sigma(W_a^{\top} A + W_q^{\top} Q), \quad (3)$ where $\{W_a, W_q\} \in \mathbb{R}^{d \times d}$ are weight matrices, and $\sigma$ is the element-wise sigmoid function.", "Based on the gate output, we obtain the final word-aware visual representations as $B = g \odot Q$.", "To integrate the word and visual representations, we concatenate A and B to obtain the final hidden representations $H = (h_0, h_1, \ldots, h_{n+1})$, where $h_i \in \mathbb{R}^{2d}$.",
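A sketch of the gating and fusion step (Eq. 3 plus the concatenation); the module name and the use of `nn.Linear` for $W_a$ and $W_q$ are our own choices:

```python
import torch
import torch.nn as nn

class VisualGate(nn.Module):
    """g = sigmoid(W_a^T A + W_q^T Q); B = g * Q; H = [A; B]."""
    def __init__(self, d=768):
        super().__init__()
        self.W_a = nn.Linear(d, d, bias=False)
        self.W_q = nn.Linear(d, d, bias=False)

    def forward(self, A, Q):
        # A, Q: (B, n+2, d) image-aware word / word-aware visual features
        g = torch.sigmoid(self.W_a(A) + self.W_q(Q))
        B_feat = g * Q                          # element-wise gated visual features
        return torch.cat([A, B_feat], dim=-1)   # h_i in R^{2d}
```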
"Following Lample et al. (2016), we then feed H to a standard CRF layer, which defines the probability of the label sequence y given the input sentence S and its associated image V: $P(y \mid S, V) = \frac{\exp(\mathrm{score}(H, y))}{\sum_{y'} \exp(\mathrm{score}(H, y'))}, \quad (4)$ $\mathrm{score}(H, y) = \sum_{i=0}^{n} T_{y_i, y_{i+1}} + \sum_{i=1}^{n} E_{h_i, y_i}, \quad (5)$ $E_{h_i, y_i} = \mathbf{w}_{\mathrm{MNER}}^{y_i} h_i, \quad (6)$ where $T_{y_i, y_{i+1}}$ is the transition score from the label $y_i$ to the label $y_{i+1}$, $E_{h_i, y_i}$ is the emission score of the label $y_i$ for the i-th word, and $\mathbf{w}_{\mathrm{MNER}}^{y_i} \in \mathbb{R}^{2d}$ is the weight parameter specific to $y_i$.", "Motivation: Since the Multimodal Transformer presented above mainly focuses on modeling the interactions between text and images, it may lead the learnt model to overemphasize the entities highlighted by the image but ignore the remaining entities.", "To alleviate this bias, we propose to leverage text-based entity span detection (ESD) as an auxiliary task, based on the following observation.", "As ResNet is pre-trained on ImageNet (Deng et al., 2009) for the image recognition task, its high-level representations are closely relevant to its final predictions, i.e., the types of the contained objects.", "This indicates that the visual representations from ResNet should be quite useful for identifying the types of the detected entities, but are not necessarily relevant to detecting entity spans in the sentence.", "Therefore, we use purely text-based ESD to guide the final predictions of our main MNER task.", "Auxiliary Entity Span Detection Module: Formally, we model ESD as another sequence labeling task, and use $z = (z_1, \ldots, z_n)$ to denote the sequence of labels, where $z_i \in \mathcal{Z}$ and $\mathcal{Z} = \{B, I, O\}$.", "As shown in the left part of Fig. 2.a, we employ another Transformer layer to obtain its task-specific hidden representations $T = (t_0, t_1, \ldots, t_{n+1})$, followed by feeding them to a CRF layer to predict the probability of the label sequence z given S: $P(z \mid S) = \frac{\exp\big(\sum_{i=0}^{n} T_{z_i, z_{i+1}} + \sum_{i=1}^{n} \mathbf{w}_{\mathrm{ESD}}^{z_i} t_i\big)}{\sum_{z'} \exp\big(\sum_{i=0}^{n} T_{z'_i, z'_{i+1}} + \sum_{i=1}^{n} \mathbf{w}_{\mathrm{ESD}}^{z'_i} t_i\big)}$, where $\mathbf{w}_{\mathrm{ESD}}^{z_i} \in \mathbb{R}^{d}$ is the parameter specific to $z_i$.",
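For readers unfamiliar with linear-chain CRFs, the unnormalized sequence score of Eq. (5) reduces to a few lines; this sketch uses our own names, and the start/end boundary transitions folded into the i = 0 term of the paper are omitted for brevity:

```python
import torch

def crf_score(emissions, transitions, tags):
    """score(H, y): emission scores of the chosen tags plus the
    transition scores between consecutive tags."""
    # emissions: (seq_len, num_labels); transitions: (num_labels, num_labels)
    # tags: (seq_len,) integer label indices
    emit = emissions.gather(1, tags.unsqueeze(1)).sum()
    trans = transitions[tags[:-1], tags[1:]].sum()
    return emit + trans
```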
"Conversion Matrix: Although ESD is modeled as an auxiliary task separate from MNER, the two tasks are highly correlated, since each ESD label should correspond to only a subset of the labels in MNER.", "For example, given the sentence in Fig. 2.a, if the first token is predicted to be the beginning of an entity in ESD (i.e., to have the label B), it should also be the beginning of a typed entity in MNER (e.g., have the label B-PER).", "To encode such inter-task correspondence, we propose to use a conversion matrix $W^c \in \mathbb{R}^{|\mathcal{Z}| \times |\mathcal{Y}|}$, where each element $W^c_{j,k}$ defines the conversion probability from $\mathcal{Z}_j$ to $\mathcal{Y}_k$.", "Since we have some prior knowledge (e.g., the label B can only convert to a label in the subset {B-PER, B-LOC, B-ORG, B-MISC}), we initialize $W^c$ as follows: if $\mathcal{Z}_j$ does not correspond to $\mathcal{Y}_k$, $W^c_{j,k}$ is set to 0; otherwise, $W^c_{j,k}$ is set to $\frac{1}{|C_j|}$, where $C_j$ denotes the subset of $\mathcal{Y}$ that corresponds to $\mathcal{Z}_j$.", "Modified CRF Layer for MNER: After obtaining the conversion matrix, we further propose to fully leverage the text-based entity span predictions to guide the final predictions of MNER.", "Specifically, we modify the CRF layer for MNER by incorporating the entity span information from ESD into the emission score defined in Eqn. (6): $E_{h_i, y_i} = \mathbf{w}_{\mathrm{MNER}}^{y_i} h_i + \big(\mathbf{w}_{\mathrm{ESD}}^{z_i} t_i\big) W^c_{z_i, y_i}$.", "Given a set of manually labeled training samples $D = \{S_j, V_j, y_j, z_j\}_{j=1}^{N}$, our overall training objective function is a weighted sum of the sentence-level negative log-likelihood losses of our main MNER task and the auxiliary ESD task²: $\mathcal{L} = -\sum_{j=1}^{N} \big(\log P(y_j \mid S_j, V_j) + \lambda \log P(z_j \mid S_j)\big)$, where $\lambda$ is a hyperparameter to control the contribution of the auxiliary ESD module.", "² We obtain $z_j$ by removing the type information in $y_j$.",
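As an illustration of the prior-knowledge initialization of $W^c$, here is a small self-contained sketch; the label-name matching rule is our assumption about how B/I map to B-*/I-*:

```python
import torch

def init_conversion_matrix(esd_labels, mner_labels):
    """W^c[j, k] = 1/|C_j| if MNER label k corresponds to ESD label j
    (O <-> O, B <-> B-*, I <-> I-*), and 0 otherwise."""
    W = torch.zeros(len(esd_labels), len(mner_labels))
    for j, z in enumerate(esd_labels):
        compat = [k for k, y in enumerate(mner_labels)
                  if y == z or y.startswith(z + "-")]
        for k in compat:
            W[j, k] = 1.0 / len(compat)
    return W

# Example: init_conversion_matrix(["B", "I", "O"],
#     ["B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG", "B-MISC", "I-MISC", "O"])
# gives each B-row the value 1/4 on the four B-* columns, and similarly for I and O.
```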
"We conduct experiments on two multimodal NER datasets, comparing our Unified Multimodal Transformer (UMT) with a number of unimodal and multimodal approaches.", "Datasets: We take the two publicly available Twitter datasets respectively constructed by Zhang et al. (2018) and Lu et al. (2018) for MNER.", "Since the two datasets mainly include multimodal user posts published on Twitter during 2014-2015 and 2016-2017, we denote them as TWITTER-2015 and TWITTER-2017, respectively.", "Table 1 shows the number of entities of each type and the counts of multimodal tweets in the training, development, and test sets of the two datasets.",

Table 1: The basic statistics of our two Twitter datasets.
                 TWITTER-2015           TWITTER-2017
Entity Type      Train   Dev    Test    Train   Dev    Test
Person           2217    552    1816    2943    626    621
Location         2091    522    1697    731     173    178
Organization     928     247    839     1674    375    395
Miscellaneous    940     225    726     701     150    157
Total            6176    1546   5078    6049    1324   1351
Num of Tweets    4000    1000   3257    3373    723    723

"We have released the two datasets preprocessed by us for research purposes via this link: https://github.com/jefferyYu/UMT.", "Hyperparameters: For each unimodal and multimodal approach compared in the experiments, the maximum length of the sentence input and the batch size are set to 128 and 16, respectively.", "For our UMT approach, most hyperparameter settings follow Devlin et al. (2018), with the following exceptions: (1) the word representations C are initialized with the cased BERT-base model pre-trained by Devlin et al. (2018) and fine-tuned during training; (2) we employ a pre-trained 152-layer ResNet to initialize the visual representations U and keep them fixed during training; (3) the number of cross-modal attention heads is set to m = 12; (4) the learning rate, the dropout rate, and the tradeoff parameter $\lambda$ are set to 5e-5, 0.1, and 0.5 respectively, which achieve the best performance on the development sets of both datasets via a small grid search over the combinations of [1e-5, 1e-4], [0.1, 0.5], and [0.1, 0.9].", "To demonstrate the effect of our Unified Multimodal Transformer (UMT) model, we first consider a number of representative text-based approaches for NER: (1) BiLSTM-CRF (Huang et al., 2015), a pioneering study which eliminates the heavy reliance on hand-crafted features, and simply employs a bidirectional LSTM model followed by a CRF layer for each word's final prediction; (2) CNN-BiLSTM-CRF (Ma and Hovy, 2016), a widely adopted neural network model for NER, which improves BiLSTM-CRF by replacing each word's word embedding with the concatenation of its word embedding and CNN-based character-level word representations; (3) HBiLSTM-CRF (Lample et al., 2016), an end-to-end hierarchical LSTM architecture, which replaces the bottom CNN layer in CNN-BiLSTM-CRF with an LSTM layer to obtain the character-level word representations; (4) BERT (Devlin et al., 2018), a multi-layer bidirectional Transformer encoder, which gives contextualized representations for each word, followed by stacking a softmax layer for final predictions; (5) BERT-CRF, a variant of BERT that replaces the softmax layer with a CRF layer.", "Besides, we also consider several competitive multimodal approaches for MNER: (1) GVATT-HBiLSTM-CRF (Lu et al., 2018), a state-of-the-art approach for MNER, which integrates HBiLSTM-CRF with the visual context by proposing a visual attention mechanism followed by a visual gate to obtain word-aware visual representations; (2) AdaCAN-CNN-BiLSTM-CRF (Zhang et al., 2018), another state-of-the-art approach based on CNN-BiLSTM-CRF, which designs an adaptive co-attention network to induce word-aware visual representations for each word; (3) GVATT-BERT-CRF and AdaCAN-BERT-CRF, our two variants of the above two multimodal approaches, which replace the sentence encoder with BERT; (4) MT-BERT-CRF, our Multimodal Transformer model introduced in Section 2.3; (5) UMT-BERT-CRF, our unified architecture incorporating the auxiliary entity span detection module into the Multimodal Transformer, as introduced in Section 2.4.", "All the neural models are implemented with PyTorch, and all the experiments are conducted on NVIDIA RTX 2080 Ti GPUs.", "In Table 2, we report the precision (P), recall (R) and F1 score (F1) achieved by each compared method on our two Twitter datasets.", "First, comparing all the text-based approaches, we can clearly observe that BERT outperforms the other compared methods by a significant margin on both datasets.", "Moreover, it is easy to see that empowering BERT with a CRF layer can further boost the performance.", "All these observations indicate that the contextualized
word representations are indeed quite helpful for the NER task on social media text, due to their context-aware characteristics.", "This agrees with our first motivation.", "Second, comparing the state-of-the-art multimodal approaches with their corresponding unimodal baselines, we can find that the multimodal approaches generally achieve better performance, which demonstrates that incorporating the visual context is generally useful for NER.", "Besides, we can see that although GVATT-HBiLSTM-CRF and AdaCAN-CNN-BiLSTM-CRF significantly outperform their unimodal baselines, the performance gains become relatively limited when their sentence encoder is replaced with BERT.", "This suggests the challenge and the necessity of proposing a more effective multimodal approach.", "Third, in comparison with the two existing multimodal methods, our Multimodal Transformer MT-BERT-CRF outperforms the state of the art by 2.5% and 2.8% respectively, and also achieves better performance than their BERT variants.", "[Figure 3: The number of entities (shown on the y-axis) that are incorrectly predicted by BERT-CRF but get corrected by each multimodal method.]", "[Figure 4: The number of entities (shown on the y-axis) that are correctly predicted by BERT-CRF but wrongly predicted by each multimodal method.]", "We conjecture that the performance gains mainly come from the following reason: the two multimodal methods only focus on obtaining word-aware visual representations, whereas our MT-BERT-CRF approach targets generating both image-aware word representations and word-aware visual representations for each word.", "These observations are in line with our second motivation.", "Finally, comparing all the unimodal and multimodal approaches, it is clear that our Unified Multimodal Transformer (i.e., UMT-BERT-CRF) achieves the best performance on both datasets, outperforming the second-best methods by 1.14% and 1.05%, respectively.", "This demonstrates the usefulness of the auxiliary entity span detection module, and indicates that the auxiliary module can help our Multimodal Transformer alleviate the bias brought by the associated images, which agrees with our third motivation.", "To investigate the effectiveness of each component in our Unified Multimodal Transformer (UMT) architecture, we compare the full UMT model against its ablations with respect to the auxiliary entity span detection (ESD) module and the multimodal interaction (MMI) module.",

Table 3: Ablation study of the Unified Multimodal Transformer.
                        TWITTER-2015           TWITTER-2017
Methods                 P      R      F1       P      R      F1
UMT-BERT-CRF            71.67  75.23  73.41    85.28  85.34  85.31
w/o ESD Module          70.48  74.80  72.58    84.60  84.16  84.42
w/o Conversion Matrix   70.43  74.98  72.63    84.72  84.97  84.85
w/o Image-Aware WR      70.33  75.44  72.79    83.83  85.94  84.87
w/o Visual Gate         71.34  75.15  73.19    85.31  84.68  84.99

"As shown in Table 3, we can see that all the components in UMT make important contributions to the final results.", "On the one hand, removing the whole ESD module significantly drops the performance, which shows the importance of alleviating the visual bias.", "In particular, discarding the conversion matrix in the ESD module also leads to a performance drop, which indicates the usefulness of capturing the label correspondence between the auxiliary module and our main MNER task.", "On the other hand, within our MMI module, Image-Aware Word Representations (WR) demonstrate an indispensable role in the final performance, given the moderate performance drop after their removal.", "Besides, removing the visual gate also results in a minor performance drop, indicating its importance to the full model.", "Importance of MMI and ESD Modules: To better appreciate the importance of the two main contributions (i.e., the MMI and ESD modules) in our proposed approaches, we conduct additional analysis on our two test sets.", "In Fig. 3 and Fig.
4, we show the number of entities that are wrongly/correctly predicted by BERT-CRF but correctly/wrongly predicted by each multimodal method.⁵", "⁵ Note that here we use strict matches (i.e., correct span and type predictions).", "First, we can see from Fig. 3 that with the MMI module, our MT-BERT-CRF and UMT-BERT-CRF approaches correctly identify more entities, compared with the two multimodal baselines.", "Table 4.A shows a specific example.", "We can see that our two methods correctly classify the type of Wolf Hall as MISC, whereas the compared systems wrongly predict its type as LOC, probably because our MMI module enforces the image-aware word representations of Wolf Hall to be closer to drama names.", "Second, in Fig. 4, it is clear that compared with the other three methods, UMT-BERT-CRF significantly decreases the bias brought by the visual context, due to incorporating our auxiliary ESD module.", "In Table 4.B, we show a concrete example: since Game of Thrones is ignored by the image, the two multimodal baselines fail to identify it; in contrast, with the help of the auxiliary ESD module, UMT-BERT-CRF successfully eliminates the bias.", "Effect of Incorporating Images: To obtain a better understanding of the general effect of incorporating associated images into our MNER task, we carefully examine our test sets and choose two representative test samples to compare the prediction results of different approaches.", "First, we observe that most improvements gained by multimodal methods come from those samples where the textual contents are informal or incomplete but the visual context provides useful clues.", "For example, in Table 4.C, we can see that without the visual context, BERT-CRF fails to identify that the two entities refer to two singers in the concert, but all the multimodal approaches can correctly classify their types after incorporating the image.", "Second, by manually checking the test sets of our two datasets, we find that in around 5% of the social media posts, the associated images might be irrelevant to the textual contents, for two kinds of reasons: (1) these posts contain image memes, cartoons, or photos with metaphor; (2) their images and textual contents reflect different aspects of the same event.", "In such cases, we observe that multimodal approaches generally perform worse than BERT-CRF.", "A specific example is given in Table 4.D, where all the multimodal methods wrongly classify Siri as PER because of the unrelated face in the image.", "As a crucial component of many information extraction tasks including entity linking (Derczynski et al., 2015), opinion mining (Maynard et al., 2012), and event detection (Ritter et al., 2012), named entity recognition has been extensively studied.", "Methods for NER: In the literature, various supervised learning approaches have been proposed for NER.", "Traditional approaches typically focus on designing various effective NER features, which are then fed to different linear classifiers such as maximum entropy, conditional random fields (CRFs), and support vector machines (Chieu and Ng, 2002; Florian et al., 2003; Finkel et al., 2005; Ratinov and Roth, 2009; Lin and Wu, 2009; Passos et al., 2014; Luo et al., 2015).", "To reduce the feature engineering effort, a number of recent studies proposed to couple different neural network architectures with a CRF layer (Lafferty et al., 2001) for word-level predictions, including convolutional neural networks (Collobert et al., 2011), recurrent neural networks (Chiu and Nichols, 2016; Lample et al.,
2016), and their hierarchical combinations (Ma and Hovy, 2016).", "These neural approaches have been shown to achieve state-of-the-art performance on different benchmark datasets based on formal text (Yang et al., 2018).", "However, when applied to social media text, most of them fail to achieve satisfactory results.", "To address this issue, many studies proposed to exploit external resources (e.g., a shallow parser, a Freebase dictionary, and orthographic characteristics) to incorporate a set of tweet-specific features into both traditional approaches (Ritter et al., 2011; Li et al., 2014; Baldwin et al., 2015) and recent neural approaches (Limsopatham and Collier, 2016; Lin et al., 2017), which can obtain much better performance on social media text.", "Methods for Multimodal NER (MNER): As multimodal data become increasingly popular on social media platforms, several recent studies focus on the MNER task, where the goal is to leverage the associated images to better identify the named entities contained in the text.", "Specifically, Moon et al. (2018) proposed a multimodal NER network with modality attention to fuse the textual and visual information.", "To model the inter-modal interactions and filter out the noise in the visual context, Zhang et al. (2018) and Lu et al. (2018) respectively proposed an adaptive co-attention network and a gated visual attention mechanism for MNER.", "In this work, we follow this line of work.", "But different from them, we aim to propose an effective multimodal method based on the recent Transformer architecture (Vaswani et al., 2017).", "To the best of our knowledge, this is the first work to apply Transformer to the task of MNER.", "In this paper, we first presented a Multimodal Transformer architecture for the task of MNER, which captures the inter-modal interactions with a multimodal interaction module.", "Moreover, to alleviate the bias of the visual context, we further proposed a Unified Multimodal Transformer (UMT), which incorporates an entity span detection module to guide the final predictions for MNER.", "Experimental results show that our UMT approach can consistently achieve the best performance on two benchmark datasets.", "There are several future directions for this work.", "On the one hand, despite bringing performance improvements over existing MNER methods, our UMT approach still fails to perform well on social media posts with unmatched text and images, as analyzed in Section 3.5.", "Therefore, our next step is to enhance UMT so as to dynamically filter out the potential noise from images.", "On the other hand, since the size of existing MNER datasets is relatively small, we plan to leverage the large amount of unlabeled social media posts on different platforms, and propose an effective framework to combine them with the small amount of annotated data to obtain a more robust MNER model.", "We would like to thank the three anonymous reviewers for their valuable comments.", "This research is supported by the National Research Foundation, Singapore under its International Research Centres in Singapore Funding Initiative, and the Natural Science Foundation of China under Grant 61672288.", "Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of the National Research Foundation, Singapore." ]
[ "method", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "result", "objective", "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "result", "result", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "objective", "objective", "objective", "result", "abstain", "method", "method", "objective", "other", "other", "other" ]
[ "There are common semantics shared across text and images.", "Given a sentence in a source language, whether depicting the visual scene helps translation into a target language?", "Existing multimodal neural machine translation methods (MNMT) require triplets of bilingual sentence image for training and tuples of source sentence image for inference.", "In this paper, we propose ImagiT, a novel machine translation method via visual imagination.", "ImagiT first learns to generate visual representation from the source sentence, and then utilizes both source sentence and the imag-ined representation to produce a target translation.", "Unlike previous methods, it only needs the source sentence at the inference time.", "Experiments demonstrate that ImagiT benefits from visual imagination and significantly outperforms the text-only neural machine translation baselines.", "Further analysis reveals that the imagination process in ImagiT helps fill in missing information when performing the degradation strategy.", "Visual foundation has been introduced in a novel multimodal Neural Machine Translation (MNMT) task (Specia et al., 2016; Elliott et al., 2017; Barrault et al., 2018), which uses bilingual (or multilingual) parallel corpora annotated by images describing sentences' contents (see Figure", "1(a)).", "The superiority of MNMT lies in its ability to use visual information to improve the quality of translation, but its effectiveness largely depends on the availability of data sets, especially the quantity and quality of annotated images.", "In addition, because the cost of manual image annotation is relatively high, at this stage, MNMT is mostly applied on a small and specific dataset, Multi30K (Elliott et al., 2016), and is not suitable for large-scale text-only Neural Machine Translation (NMT) (Bahdanau et al., Model A bird flies over the water Ein Vogel fliegt ber das Wasser", "2015; Vaswani et al., 2017).", "Such limitations hinder the applicability of visual information in NMT.", "To address the bottlenecks mentioned above, Zhang et al. 
"However, the lookup table is built from Multi30K, which leads to relatively limited coverage of the pictures and potentially introduces much irrelevant noise.", "It does not always find the exact image corresponding to the text, and the image may not even exist in the database.", "Elliott and Kádár (2017) present a multitask learning framework to ground visual representations in a shared space.", "Their architecture, called imagination, shares an encoder between a primary NMT task and an auxiliary task of ranking visual features for image retrieval.", "However, neither is the image explicitly generated, nor is the visual feature directly leveraged by the translation decoder; the model simply learns the visually grounded shared encoder.", "Based on other researchers' earlier explorations, we hypothesize that the potential of vision in conventional text-only NMT has not been fully discovered.", "Different from the implicit approach of Elliott and Kádár (2017), we understand imagination to be more like picturing, since it resembles how humans can visually depict figures in the mind from an utterance.", "Our approach aims to explicitly imagine a vague figure (see Figure 1(b)) to guide the translation, since a picture is worth a thousand words, and imagining the picture of a sentence is the instinctive reaction of a human being who is learning bilingualism.", "In this paper, we propose a novel end-to-end machine translation model that is embedded in visual semantics with generative imagination (ImagiT) (see Figure 1(b)).", "Given a source-language sentence, ImagiT first encodes it and transforms the word representations into visual features through an attentive generator, which can effectively capture the semantics at both the global and local levels; the generated visual representations can be considered semantic-equivalent reconstructions of the sentences.", "A simple yet effective integration module is designed to aggregate the textual and visual modalities.", "In the final stage, the model learns to generate the target-language sentence based on the joint features.", "To train the model in an end-to-end fashion, we apply a visual realism adversarial loss and a text-image pair-aware adversarial loss, as well as a text-semantic reconstruction loss and a target-language translation loss based on cross-entropy.", "In contrast with most prior MNMT work, our proposed ImagiT model does not require images as input at inference time but can leverage visual information through imagination, making it an appealing method in low-resource scenarios.", "Moreover, ImagiT is also flexible, accepting external parallel text data or non-parallel image captioning data.", "We evaluate our imagination model on the Multi30K dataset.", "The experimental results show that our proposed method significantly outperforms the text-only NMT baseline.", "The analysis demonstrates that imagination helps the model complete the missing information in the sentence when we perform degradation masking, and we also see improvements in translation quality by pre-training the model with an external non-parallel image captioning dataset.", "To summarize, the paper has the following contributions:", "1. We propose generative imagination, a new setup for machine translation assisted by synthesized visual representations, without annotated images as input;",
"2. We propose the ImagiT method, which shows advantages over the conventional MNMT model and gains significant improvements over the text-only NMT baseline;", "3. We conduct experiments to verify and analyze how imagination helps the translation.", "MNMT: As a modality shared by people worldwide, vision may help machines form a more comprehensive perception of the real world.", "Multimodal neural machine translation (MNMT) is a novel machine translation task proposed by the machine translation community, which aims to design multimodal translation frameworks using context from the additional visual modality (Specia et al., 2016).", "The shared task released the dataset Multi30K (Elliott et al., 2016), an extended German version of Flickr30K (Young et al., 2014), which was later expanded to French and Czech (Elliott et al., 2017; Barrault et al., 2018).", "Across the three versions of the task, scholars have proposed many multimodal machine translation models and methods.", "Huang et al. (2016) encode word sequences with regional visual objects, while Calixto and Liu (2017) study the effects of incorporating global visual features to initialize the encoder/decoder hidden states of an RNN.", "Caglayan et al. (2017) model the image-text interaction by leveraging element-wise multiplication.", "Elliott and Kádár (2017) propose a multitask learning framework to ground visual representations in a shared space and learn with an auxiliary triplet alignment task.", "The common practice is to use convolutional neural networks to extract visual information and then use attention mechanisms to extract visual contexts (Caglayan et al., 2016; Calixto et al., 2016; Libovický and Helcl, 2017).", "Ive et al. (2019) propose a translate-and-refine approach using a two-stage decoder.", "Calixto et al. (2019) put forward a latent variable model to capture the multimodal interactions between visual and textual features.", "Caglayan et al. (2019) show that visual content is more critical when the textual content is limited or uncertain in MNMT.", "Recently, Yao and Wan (2020) propose multimodal self-attention in the Transformer to avoid encoding irrelevant information in images, and Yin et al. (2020) propose a graph-based multimodal fusion encoder to capture various relationships.", "Text-to-image synthesis: Traditional text-to-image (T2I) synthesis mainly uses keywords to search for small image regions and finally optimizes the entire layout (Zhu et al., 2007).", "After generative adversarial networks (GANs) (Goodfellow et al., 2014) were proposed, scholars presented a variety of GAN-based T2I models.", "Reed et al. (2016) propose DC-GAN, designing a direct and straightforward network and training strategy for T2I generation.", "Zhang et al. (2017) propose StackGAN, which contains multiple cascaded generators and discriminators, where higher stages generate better-quality pictures.", "In previous work, scholars only considered global semantics.",
"Xu et al. (2018) proposed AttnGAN to apply the attention mechanism to capture fine-grained word-level information.", "MirrorGAN (Qiao et al., 2019) employs a mirror structure, which reversely learns from the inverse task of T2I to further validate whether generated images are consistent with the input texts.", "The inverse task is also known as image captioning.", "As shown in Figure 2, ImagiT embodies the encoder-decoder structure for end-to-end machine translation.", "Between the encoder and the decoder, there is an imagination step to generate a semantic-equivalent visual representation.", "Technically, our model is composed of the following modules: a source text encoder, a generative imagination network, image captioning, multimodal aggregation, and a decoder for translation.", "We will elaborate on each of them in the rest of this section.", "Vaswani et al. (2017) propose the state-of-the-art Transformer-based machine translation framework, in which each stacked layer can be written as: $\tilde{h}^l = \mathrm{LN}(h^{l-1} + \mathrm{Att}^l(h^{l-1}))$ (1) and $h^l = \mathrm{LN}(\tilde{h}^l + \mathrm{FFN}^l(\tilde{h}^l))$ (2), where $\mathrm{Att}^l$, $\mathrm{LN}$, and $\mathrm{FFN}^l$ are the self-attention module, layer normalization, and the feed-forward network for the $l$-th identical layer, respectively.", "The core of the Transformer is multi-head self-attention; in each attention head, we have: $z_i = \sum_{j=1}^{n} \alpha_{ij}(x_j W^V)$ (3), $\alpha_{ij} = \mathrm{softmax}\big((x_i W^Q)(x_j W^K)^\top / \sqrt{d}\big)$ (4).", "$W^V$, $W^Q$, $W^K$ are layer-specific trainable parameter matrices.", "For the output of the final stacked layer, we use $w = \{w_0, w_1, \ldots, w_{L-1}\}$, $w \in \mathbb{R}^{d \times L}$, to represent the source word embeddings, where $L$ is the length of the source sentence.", "Besides, we add a special token to each source-language sentence to obtain the sentence representation $s \in \mathbb{R}^d$.", "Generative Adversarial Networks (Goodfellow et al., 2014) have been applied to synthesize images similar to the ground truth (Zhang et al., 2017; Xu et al., 2018; Qiao et al., 2019).", "We follow the common practice of using conditioning augmentation (Zhang et al., 2017) to enhance robustness to small perturbations along the conditioning text manifold and to improve the diversity of generated samples: $s_{ca} = F^{ca}(s)$ (5), where $F^{ca}$ represents the conditioning augmentation function and $s_{ca}$ the enhanced sentence representation. 1", "$\{F_0, F_1\}$ are two visual feature converters sharing a similar architecture.", "$F_0$ contains a fully connected layer and four deconvolution layers (Noh et al., 2015) to obtain image-sized feature vectors.", "Furthermore, we define $\{f_0, f_1\}$ as the visual features after the two transformations, with different resolutions.", "For the detailed layer structure and block design, please refer to (Xu et al., 2018).", "(Footnote 1: Zhang et al. (2017) also mention that the randomness in the conditioning augmentation is beneficial for modeling the text-to-image semantic translation, as the same sentence usually corresponds to objects with various poses and appearances.)",
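To make the conditioning augmentation step concrete, the following is a minimal PyTorch sketch, not the authors' implementation; the layer sizes d_model and c_dim are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditioningAugmentation(nn.Module):
    """Sketch of F_ca: map a sentence embedding s to a Gaussian and sample
    an augmented code s_ca via the reparameterization trick."""
    def __init__(self, d_model: int = 512, c_dim: int = 128):
        super().__init__()
        # a single linear layer predicts both mean and log-variance (assumed sizes)
        self.fc = nn.Linear(d_model, c_dim * 2)

    def forward(self, s: torch.Tensor):
        mu, logvar = self.fc(s).chunk(2, dim=-1)
        std = (0.5 * logvar).exp()
        eps = torch.randn_like(std)   # the randomness that yields sample diversity
        s_ca = mu + eps * std         # s_ca ~ N(mu, sigma^2)
        # KL regularizer against N(0, I), as used in StackGAN-style models
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return s_ca, kl

s = torch.randn(4, 512)               # a toy batch of sentence embeddings
s_ca, kl = ConditioningAugmentation()(s)
print(s_ca.shape, kl.item())          # torch.Size([4, 128]) and a scalar
```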
"$f_0 = F_0(z, s_{ca})$ (6), where $f_0 \in \mathbb{R}^{M_0 \times N_0}$ and $z$ is the noise vector, sampled from the standard normal distribution, which is concatenated with $s_{ca}$.", "Each column of $f_i$ is a feature vector of a sub-region of the image, which can also be treated as a pseudo-token.", "To generate fine-grained details in different sub-regions of the image by paying attention to the relevant words in the source language, we use the image vector of each sub-region to query the word vectors via an attention strategy: $f_1 = F_1(f_0, F^{attn}(f_0, w))$ (7).", "$F^{attn}$ is an attentive function to obtain the word-context feature: $F^{attn}(f_0, w) = \sum_{l=0}^{L-1} (U_0 w_l)\,\mathrm{softmax}\big(f_0^\top (U_0 w_l)\big)^\top$ (8).", "The word feature $w_l$ is first converted into the common semantic space of the visual features, where $U_0$ is a perceptron layer.", "It is then multiplied with $f_0$ to acquire the attention score.", "$f_1$ is the output of the imagination network, capturing multiple levels (word level and sentence level) of semantic meaning.", "$f_1$ is denoted as the blue generated visual feature block in Figure 2.", "It will be utilized directly for target-language generation, and it will also be passed to the discriminator for adversarial training.", "Note that for the whole pipeline, upsampling $f_1$ to an image is redundant.", "Compared to T2I synthesis works that use cascaded generators and disjoint discriminators (Zhang et al., 2017; Xu et al., 2018; Qiao et al., 2019), we only use one stage to reduce the model size and to make our generated visual feature $f_1$ focus more on text-image consistency rather than on realism and authenticity.", "Image captioning (I2T) can be regarded as the inverse problem of text-to-image generation: generating the description of a given image.", "If an imagined image is semantically equivalent to the source sentence, then its description should be almost identical to the given text.", "Thus we leverage image captioning to translate the imagined visual representation back to the source language (Qiao et al., 2019); this symmetric structure makes the imagined visual feature act like a mirror, effectively enhancing the semantic consistency of the imagined visual feature and precisely reflecting the underlying semantics.",
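A minimal sketch of the attentive word-context function F_attn of Eq. 8, under one reading of it rather than the authors' code; the (batch, channels, regions) layout for f0 and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def word_context(f0: torch.Tensor, w: torch.Tensor, U0: nn.Linear) -> torch.Tensor:
    """Each image sub-region (a column of f0) queries the word features,
    so fine-grained details can attend to relevant source words.
    f0: (batch, C, N) region features; w: (batch, L, d) word features."""
    wp = U0(w)                                                  # (batch, L, C)
    scores = torch.bmm(f0.transpose(1, 2), wp.transpose(1, 2))  # (batch, N, L)
    attn = F.softmax(scores, dim=-1)     # each region attends over all words
    ctx = torch.bmm(attn, wp)            # (batch, N, C) word-context per region
    return ctx.transpose(1, 2)           # back to (batch, C, N), like f0

B, C, N, L, d = 2, 128, 64, 20, 512      # illustrative sizes (assumptions)
U0 = nn.Linear(d, C)                     # the perceptron layer U_0
ctx = word_context(torch.randn(B, C, N), torch.randn(B, L, d), U0)
print(ctx.shape)                         # torch.Size([2, 128, 64])
```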
"Following Qiao et al. (2019), we utilize the widely used encoder-decoder image captioning framework (Vinyals et al., 2015) and fix the parameters of the pre-trained image captioning framework when training the other modules of ImagiT end-to-end.", "$p_t = \mathrm{Decoder}(h_{t-1}), \ t = 0, 1, \ldots, L-1$ (9), $\mathcal{L}_{I2T} = -\sum_{t=0}^{L-1} \log p_t(T_t)$ (10).", "$p_t$ is the predicted probability distribution over the words at the $t$-th decoding step, and $p_t(T_t)$ is the entry of that probability vector corresponding to the $t$-th token $T_t$.", "After obtaining the imagined visual representation, we aggregate the two modalities for the translation decoder.", "Although vision carries richer information, it also contains irrelevant noise.", "Compared to encoding and integrating the visual feature directly, a more elegant method is to induce the hidden representation under the guidance of image-aware attention and the graph perspective of the Transformer (Yao and Wan, 2020), since each local spatial region of the image can also be considered a pseudo-token that can be added to the source fully-connected graph.", "In the multimodal self-attention layer, we add the spatial features of the generated feature map to the source sentence; that is, the attention query vector is the combination of text and visual embeddings, giving $\tilde{x} \in \mathbb{R}^{(L+M) \times d}$.", "We then perform image-aware attention, where the key and value vectors are just the text embeddings: $c_i = \sum_{j=0}^{L-1} \alpha_{ij}(w_j W^V)$ (11), $\alpha_{ij} = \mathrm{softmax}\big((\tilde{x}_i W^Q)(w_j W^K)^\top / \sqrt{d}\big)$ (12).", "To train the whole network end-to-end, we leverage adversarial training to alternately train the generator and the discriminator.", "Specifically, as shown in Figure 3, the discriminator takes the imagined visual representation, the source-language sentence, and the real image as input, and we employ two adversarial losses computed by the discriminator: a visual realism adversarial loss and a text-image pair-aware adversarial loss (Zhang et al., 2017; Xu et al., 2018; Qiao et al., 2019).", "(Figure 3: Training objective; the discriminator scores the generated image and the real image, each paired with the source-language sentence.)", "$\mathcal{L}_{G_0} = -\frac{1}{2}\mathbb{E}_{f_1 \sim p_G}[\log D(f_1)] - \frac{1}{2}\mathbb{E}_{f_1 \sim p_G}[\log D(f_1, s)]$ (14), where $f_1$ is the generated visual feature computed by Equation 7, drawn from the model distribution $p_G$, and $s$ is the global sentence vector.", "The first term distinguishes real from fake, ensuring that the generator generates visually realistic images.", "The second term guarantees the semantic consistency between the input text and the generated image.", "$\mathcal{L}_{G_0}$ jointly approximates the unconditional and conditional distributions.", "The final objective function of the generator is defined as: $\mathcal{L}_G = \mathcal{L}_{G_0} + \lambda_1 \mathcal{L}_{I2T} + \lambda_2 \mathcal{L}_{trans}$ (15).", "$\mathcal{L}_D = -\frac{1}{2}\mathbb{E}_{I \sim p_{data}}[\log D(I)] - \frac{1}{2}\mathbb{E}_{f_1 \sim p_G}[\log(1 - D(f_1))] - \frac{1}{2}\mathbb{E}_{I \sim p_{data}}[\log D(I, s)] - \frac{1}{2}\mathbb{E}_{f_1 \sim p_G}[\log(1 - D(f_1, s))]$ (16).",
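The two adversarial objectives (Eqs. 14 and 16) can be written compactly as below; this is a sketch under the assumption that the discriminator exposes unconditional and sentence-conditioned logits, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def generator_loss(d_fake_uncond, d_fake_cond):
    """L_G0 (Eq. 14): push generated features f1 to look realistic
    (unconditional term) and to match the sentence s (conditional term)."""
    ones = torch.ones_like(d_fake_uncond)
    return 0.5 * F.binary_cross_entropy_with_logits(d_fake_uncond, ones) \
         + 0.5 * F.binary_cross_entropy_with_logits(d_fake_cond, ones)

def discriminator_loss(d_real_u, d_real_c, d_fake_u, d_fake_c):
    """L_D (Eq. 16): real image features scored as real, generated f1
    scored as fake, in both the unconditional and conditional branches."""
    ones, zeros = torch.ones_like(d_real_u), torch.zeros_like(d_real_u)
    return 0.5 * (F.binary_cross_entropy_with_logits(d_real_u, ones)
                + F.binary_cross_entropy_with_logits(d_fake_u, zeros)
                + F.binary_cross_entropy_with_logits(d_real_c, ones)
                + F.binary_cross_entropy_with_logits(d_fake_c, zeros))

# Full generator objective (Eq. 15), with lambda_1/lambda_2 as in the text:
# L_G = generator_loss(...) + lambda1 * L_I2T + lambda2 * L_trans
```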
"We evaluate our proposed ImagiT model on two datasets, Multi30K (Elliott et al., 2016) and Ambiguous COCO (Elliott et al., 2017).", "To show its ability to train with external out-of-domain datasets, we adopt MS COCO (Lin et al., 2014) in the analysis section that follows.", "Multi30K is the largest existing human-labeled collection for MNMT, containing 31K images and consisting of two multilingual expansions of the original Flickr30K (Young et al., 2014) dataset.", "The first expansion has five English descriptions and five German descriptions, which are independent of each other.", "The second expansion has one of its English descriptions manually translated into German by a professional translator, then expanded to French and Czech in the following shared tasks (Elliott et al., 2017; Barrault et al., 2018).", "We only apply the second expansion in our experiments, which has 29,000 instances for training, 1,014 for development, and 1,000 for evaluation.", "We present our results on the English-German (En-De) and English-French (En-Fr) Test2016 and Test2017 sets.", "Ambiguous COCO is a small evaluation dataset collected in the WMT2017 multimodal machine translation challenge (Elliott et al., 2017), which collected and translated a set of image descriptions that potentially contain ambiguous verbs.", "It contains 461 images from MS COCO (Lin et al., 2014), covering 56 ambiguous verbs in total.", "MS COCO is the widely used non-parallel text-image paired dataset in T2I and I2T generation.", "It contains 82,783 training images and 40,504 validation images with 91 different object types, and each image has 5 English descriptions.", "Our baseline is the conventional text-only Transformer (Vaswani et al., 2017).", "Specifically, each encoder-decoder has a 6-layer stacked Transformer network, eight heads, 512 hidden units, and an inner feed-forward filter size of 2048.", "The dropout is set to p = 0.1, and we use the Adam optimizer (Kingma and Ba, 2015) to tune the parameters.", "The learning rate increases linearly during a warmup of 8,000 steps and then decreases with the inverse square root of the step number.", "We train the model up to 10,000 steps, and the early-stop strategy is adopted.", "We use the same settings as Vaswani et al. (2017).", "We use the metrics BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) to evaluate the translation quality.", "For the imagination network, the noise vector's dimension is 100, and the generated visual feature is 128 x 128.", "The upsampling and residual blocks in the visual feature transformers consist of 3 x 3 stride-1 convolutions, batch normalization, and ReLU activation.", "The training is early-stopped if the dev set BLEU score does not improve for 10 epochs, since the translation is the core task.", "Table 2 (experimental results on the Ambiguous COCO En-De and En-Fr translation task; each cell is BLEU / METEOR): Multimodal NMT systems: fusion-conv (Caglayan et al., 2017) En-De 25.1 / 46.0, En-Fr 43.2 / 63.1; trg-mul (Caglayan et al., 2017) En-De 26.4 / 47.4, En-Fr 43.5 / 63.2; VAG-NMT (Zhou et al., 2018) En-De 28.3 / 48.0, En-Fr 45.0 / 64.7; ImagiT + ground truth En-De 28.8 / 48.9, En-Fr 45.3 / 65.1. Text-only NMT systems: Transformer baseline (Vaswani et al., 2017) En-De 27.9 / 47.8, En-Fr 44.9 / 64.2; ImagiT En-De 28.7 / 48.8, En-Fr 45.3 / 65.0.", "The batch size is 64, and the learning rate is initialized to 2e-4 and decayed to half of its previous value every 100 epochs.", "A similar learning schedule is adopted in Zhang et al. (2017).",
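The warmup-then-inverse-square-root schedule used for the Transformer baseline can be sketched as follows; the d_model scaling follows Vaswani et al. (2017) and is an assumption here.

```python
def transformer_lr(step: int, d_model: int = 512, warmup: int = 8000) -> float:
    """Linear warmup for `warmup` steps, then decay with the inverse
    square root of the step number."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

for s in (1, 4000, 8000, 16000, 64000):
    print(s, round(transformer_lr(s), 6))  # peaks around step 8000, then decays
```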
"The margin size is set to 0.1, and the balance weights are $\lambda_1 = 20$ and $\lambda_2 = 40$.", "Table 1 illustrates the results for the En-De Test2016, En-De Test2017, En-Fr Test2016, and En-Fr Test2017 tasks.", "Our text-only Transformer baseline (Vaswani et al., 2017) has results similar to most prior MNMT works, which is consistent with the previous finding (Caglayan et al., 2019) that the textual modality is good enough to translate the Multi30K dataset.", "This finding helps to explain why it is already tricky for an MNMT model to ground the visual modality even with annotated images present.", "However, our ImagiT gains improvements over the text-only Transformer baseline on all four evaluation sets, demonstrating that our model can effectively embed the visual semantics during training and guide the translation through imagination, in the absence of annotated images, at inference time.", "We assume much of the performance improvement is due to ImagiT's strong ability to capture the interaction between text and image, generate semantically consistent visual representations, and incorporate information from the visual modality properly.", "We also observe that our approach surpasses the results of most MNMT systems by a noticeable margin in terms of BLEU and METEOR on the four evaluation sets.", "Our ImagiT is also competitive with ImagiT + ground truth, a variant in which our translation decoder takes ground-truth visual representations instead of imagined ones and which can be regarded as the upper bound of ImagiT.", "This shows the imaginative ability of ImagiT.", "Ambiguous COCO: Ambiguous COCO, which was purposely curated such that its verbs have ambiguous meanings, demands more visual contribution for guiding the translation and selecting the correct words.", "Our ImagiT benefits from visual imagination, substantially outperforms previous works on Ambiguous COCO, and even reaches the same performance as ImagiT + ground truth (45.3 BLEU).", "The hyper-parameter $\lambda_1$ in Equation 15 is important.", "When $\lambda_1 = 0$, there is no image captioning component, and the BLEU score drops from 38.5 to 37.9, although this variant still outperforms the Transformer baseline.", "This indicates the effectiveness of the image captioning module: it potentially prevents visual-textual mismatching and thus helps the generator achieve better performance.", "When $\lambda_1$ increases from 5 to 20, BLEU and METEOR increase accordingly.", "But when $\lambda_1$ is set equal to $\lambda_2$, the BLEU score falls to 38.3.", "That is reasonable, because $\lambda_2 \mathcal{L}_{trans}$ is the main task of the whole model.", "Since the proposed model does not require images as input, one may ask how it uses visual information and where that information comes from.", "We claim that ImagiT has already been embedded with visual semantics during the training phase, and in this section we validate that ImagiT is able to generate visually grounded representations by performing the image retrieval task.", "For each source sentence, we generate the intermediate visual representation.", "Furthermore, we query the ground-truth image features with each generated representation to find the closest image vectors based on cosine similarity.", "Then we can measure the R@K score, which computes the recall rate of the matched image among the top K nearest neighbors.", "Some previous studies on VSE perform sentence-to-image retrieval and image-to-sentence retrieval, but their results cannot be directly compared with ours, since we are performing image-to-image retrieval in practice.",
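The retrieval probe can be sketched as below; this is a toy illustration rather than the evaluation code, and the feature dimension is an assumption.

```python
import torch
import torch.nn.functional as F

def recall_at_k(gen: torch.Tensor, gt: torch.Tensor, k: int = 10) -> float:
    """For each generated visual representation, rank all ground-truth image
    features by cosine similarity and check whether the matching image
    (same row index) appears in the top k."""
    gen, gt = F.normalize(gen, dim=-1), F.normalize(gt, dim=-1)
    sims = gen @ gt.t()                          # (Q, G) cosine similarities
    topk = sims.topk(k, dim=-1).indices          # indices of the k nearest images
    target = torch.arange(gen.size(0)).unsqueeze(1)
    return (topk == target).any(dim=-1).float().mean().item()

gen = torch.randn(1000, 256)                     # imagined representations (toy)
gt = gen + 0.5 * torch.randn_like(gen)           # matching image features (toy)
print(f"R@10 = {recall_at_k(gen, gt, 10):.3f}")
```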
"However, from Table 4, and especially for R@10, the results demonstrate that our generated representations have excellent quality of shared semantics and are grounded with visual semantic consistency.", "Although we have validated the effectiveness of ImagiT on three widely used MNMT evaluation sets, a natural question is how, and to what extent, the imagination guides the translation.", "When confronted with complicated sentences and obscure words, human beings often resort to mind-picturing and mental visualization to auto-complete and fill in the full picture.", "Thus we hypothesize that imagination can help recover and retrieve missing and implicit textual information.", "Inspired by Ive et al. (2019) and Caglayan et al. (2019), we apply a degradation strategy to the input source language and feed the degraded input to the trained Transformer baseline, the MNMT baseline, and ImagiT, respectively, to validate whether our proposed approach can recover the missing information and obtain better performance.", "We conduct these analysis experiments on the En-De Test2016 evaluation set.", "Color deprivation masks the source tokens that refer to colors, replacing them with a special token [M].", "Under this circumstance, a text-only NMT model has to rely on source-side contextual information and biases, while an MNMT model can directly utilize the paired, color-informative images.", "But ImagiT will turn to imagination and visualization.", "Table 5 demonstrates the results of color deprivation.", "We implement a simple Transformer-based MNMT baseline using the multimodal self-attention approach (Yao and Wan, 2020).", "Thus the three models illustrated in Table 5 can be compared directly.", "We observe that the BLEU score of text-only NMT decreases by 1.3, whereas the MNMT and ImagiT systems decrease by only 0.5.", "This result corroborates that our ImagiT has an ability to recover color similar to that of MNMT, but ImagiT achieves the same effect through its own efforts, i.e., imagination.", "One possible explanation is that ImagiT learns the correlation and co-occurrence of colors and specific entities during the training phase, so it can infer the color from the context and recover it by visualization.",
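Both degradation probes amount to simple token masking; a sketch follows, where the toy color lexicon is an assumption (the actual lexicon used in the experiments is not specified here).

```python
import random

COLORS = {"red", "green", "blue", "white", "black", "yellow", "brown", "orange"}

def deprive_colors(sentence: str, mask: str = "[M]") -> str:
    """Color deprivation: replace every color word with the mask token."""
    return " ".join(mask if tok.lower() in COLORS else tok
                    for tok in sentence.split())

def mask_entities(tokens, entity_spans, rate, mask="[M]", seed=0):
    """Visually-depictable-entity masking: mask a given fraction of
    annotated entity spans (e.g., from Flickr30K coreference chains)."""
    rng = random.Random(seed)
    chosen = rng.sample(entity_spans, int(rate * len(entity_spans)))
    out = list(tokens)
    for start, end in chosen:
        out[start:end] = [mask] * (end - start)
    return out

print(deprive_colors("a man in a red shirt holds a blue umbrella"))
# -> a man in a [M] shirt holds a [M] umbrella
```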
"Visually depictable entity masking: Plummer et al. (2015) extend Flickr30K with coreference chains to tag mentions of visually depictable entities.", "Similar to color deprivation, we randomly replace 0%, 15%, 30%, 45%, or 60% of the visually depictable entities with the special token [M].", "Figure 4 shows the result of visually depictable entity masking.", "We observe a large BLEU score drop for the text-only Transformer baseline as the masking proportion increases, while the drops for MNMT and ImagiT are relatively smaller.", "This result demonstrates that our ImagiT model can infer and imagine missing entities much more effectively than the text-only Transformer, and has capability comparable to the MNMT model.", "Our ImagiT model also accepts external parallel text data or non-parallel image captioning data, and we can easily modify the objective function to train with out-of-domain non-triplet data.", "To train with text-image paired image captioning data, we can pre-train our imagination model by ignoring the $\mathcal{L}_{trans}$ term (Yang et al., 2020).", "In other words, the T2I synthesis module can be trained solely on the MS COCO dataset.", "We randomly split MS COCO in half, and use COCO-half and COCO-full to pre-train ImagiT.", "MS COCO is processed using the same pipeline as in Zhang et al. (2017).", "Furthermore, the training settings for COCO-half and COCO-full are the same, with batch size 64 and maximum epoch 600.", "Table 6 (translation results when using out-of-domain non-parallel image captioning data; BLEU / METEOR): ImagiT 38.4 / 55.7; ImagiT + COCO-half 38.6 / 56.3; ImagiT + COCO-full 38.7 / 56.7.", "As shown in Table 6, our ImagiT model pre-trained with half of MS COCO gains a 0.6 METEOR increase, and the improvement becomes more apparent when training with the whole of MS COCO.", "We can contemplate that large-scale external data may further improve the performance of ImagiT; we have not yet utilized parallel text data (e.g., WMT), and even image-only and monolingual text data could be adopted to enhance the model capability, which we leave for future work.", "This work presents a generative imagination-based machine translation model (ImagiT), which can effectively capture the source semantics and generate semantically consistent visual representations for imagination-guided translation.", "Without annotated images as input, our model gains significant improvements over text-only NMT baselines and is comparable with the SOTA MNMT model.", "We analyze how imagination elevates machine translation and show improvements from using external image captioning data.", "Further work may center around introducing more parallel and non-parallel text and image data for different training schemes.", "This work brings together text-to-image synthesis, image captioning, and neural machine translation (NMT) in an adversarial learning setup, advancing traditional NMT to utilize visual information.", "Multimodal neural machine translation (MNMT) possesses annotated images and can achieve better performance, but manual image annotation is costly, so MNMT is only applied on small and specific datasets.", "This work tries to extend the applicability of MNMT techniques and of visual information in NMT by imagining a semantically equivalent picture and letting it be appropriately utilized by a visually guided decoder.", "Compared to previous multimodal machine translation approaches, this technique takes only sentences in the source language, as in the usual machine translation task, making it an appealing method in low-resource scenarios.", "However, the goal is still far from being achieved, and more efforts from the community are needed to get there.",
"One pitfall of our proposed model is that the trained ImagiT is not applicable to larger-scale text-only NMT tasks, such as WMT'14, which is mainly related to economics and politics, since those texts are not easy to visualize and contain fewer objects and visually depictable entities.", "We advise practitioners who apply visual information in large-scale text-to-text translation to be aware of this issue.", "In addition, the effectiveness of an MNMT model largely depends on the quantity and quality of annotated images; likewise, our model's performance depends on the quality of the generated visual representations.", "We will need to carefully study how the model balances the contributions of the different modalities and responds to ambiguity and bias, to avoid undesired behaviors of the learned models." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "method", "objective", "objective", "abstain", "objective", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain" ]
[ "Yu Lu 1,2 , Jiali Zeng 3 , Jiajun Zhang 1,2 , Shuangzhi Wu 3 and Mu Li 3 1 National Laboratory of Pattern Recognition, Institute of Automation, CAS, Beijing, China School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China 3 Tencent Cloud Xiaowei, Beijing, China { yu.lu, jjzhang } @nlpr.ia.ac.cn", "Abstract Confidence estimation aims to quantify the confidence of the model prediction, providing an expectation of success.", "A well-calibrated confidence estimate enables accurate failure prediction and proper risk measurement when given noisy samples and out-of-distribution data in real-world settings.", "However, this task remains a severe challenge for neural machine translation (NMT), where probabilities from softmax distribution fail to describe when the model is probably mistaken.", "To address this problem, we propose an unsupervised confidence estimate learning jointly with the training of the NMT model.", "We explain confidence as how many hints the NMT model needs to make a correct prediction, and more hints indicate low confidence.", "Specifically, the NMT model is given the option to ask for hints to improve translation accuracy at the cost of some slight penalty.", "Then, we approximate their level of confidence by counting the number of hints the model uses.", "We demonstrate that our learned confidence estimate achieves high accuracy on extensive sentence/word-level quality estimation tasks.", "Analytical results verify that our confidence estimate can correctly assess underlying risk in two real-world scenarios: (1) discovering noisy samples and (2) detecting out-of-domain data.", "We further propose a novel confidence-based instance-specific label smoothing approach based on our learned confidence estimate, which outperforms standard label smoothing 1 .", "Confidence estimation has become increasingly critical with the widespread deployment of deep neural networks in practice (Amodei et al., 2016).", "It aims to measure the model's confidence in the prediction, showing when it probably fails.", "A calibrated confidence estimate can accurately identify Work done while the author was an intern at Tencent.", "Src Ref elisa is played by zhang yan, a class-1 actress on the state level Ours zhang yan , a figure who loves to play , is a national class actor Prob Conf", "failure, further measuring the potential risk induced by noisy samples and out-of-distribution data prevalent in real scenarios (Nguyen and O'Connor, 2015; Snoek et al., 2019).", "Unfortunately, neural machine translation (NMT) is reported to yield poor-calibrated confidence estimate (Kumar and Sarawagi, 2019; Wang et al., 2020), which is common in the application of modern neural networks (Guo et al., 2017).", "It implies that the probability a model assigns to a prediction is not reflective of its correctness.", "Even worse, the model often fails silently by providing high-probability predictions while being woefully mistaken (Hendrycks and Gimpel, 2017).", "We take Figure 1 as an example.", "The mistranslations are produced with high probabilities (dark green blocks in the dashed box), making it problematic to assess the quality based on prediction probability when having no access to references.", "The confidence estimation on classification tasks is well-studied in the literature (Platt, 1999; Guo et al., 2017).", "Yet, researches on structured generation tasks like NMT is scarce.", "Existing researches only study the phenomenon that the generated probability in NMT cannot 
"To deal with this issue, we aim to learn the confidence estimate jointly with the training process in an unsupervised manner.", "Inspired by Ask For Hints (DeVries and Taylor, 2018), we explain confidence as how many hints the NMT model needs to make a correct prediction.", "Specifically, we design a scenario where the ground truth is available to the NMT model as hints to deal with tricky translations.", "But each hint is given at the price of some penalty.", "Under this setting, the NMT model is encouraged to translate independently in most cases to avoid penalties, but to ask for hints to ensure a loss reduction when it is uncertain about the decision.", "More hints mean lower confidence, and vice versa.", "In practice, we design a confidence network taking multi-layer hidden states of the decoder as inputs to predict the confidence estimate.", "Based on this, we further propose a novel confidence-based label smoothing approach, in which translations that are more challenging to predict receive more smoothing on their labels.", "Recall the example in Figure 1.", "The first phrase, a figure who loves to play, is incorrect, resulting in a low confidence level under our estimation.", "We notice that the NMT model is also uncertain about the second expression, a national class actor, which is semantically related but has inaccurate wording.", "The translation accuracy largely agrees with our learned confidence rather than with the model probabilities.", "We verify our confidence estimate as a well-calibrated metric on extensive sentence/word-level quality estimation tasks, proving it more representative in predicting translation accuracy than existing unsupervised metrics (Fomicheva et al., 2020).", "Further analyses confirm that our confidence estimate can precisely detect potential risk caused by distributional shift in two real-world settings: separating noisy samples and identifying out-of-domain data.", "The model needs more hints to predict fake or tricky translations in these cases, and thus assigns them low confidence.", "Additionally, experimental results show the superiority of our confidence-based label smoothing over the standard label smoothing technique on different-scale translation tasks (WMT14 En-De, NIST Zh-En, WMT16 Ro-En, and IWSLT14 De-En).", "The contributions of this paper are three-fold: We propose a learned confidence estimate to predict the confidence of NMT output, which is simple to implement without any degradation of translation performance.", "We prove our learned confidence estimate to be a better indicator of translation accuracy on sentence/word-level quality estimation tasks.", "Furthermore, it enables precise assessment of risk when given noisy data with varying noise degrees and diverse out-of-domain datasets.", "We design a novel confidence-based label smoothing method that adaptively tunes the mass of smoothing based on the learned confidence level, which is experimentally proven to surpass the standard label smoothing technique.", "In this section, we first briefly introduce a mainstream NMT framework, the Transformer (Vaswani et al., 2017), with a focus on how prediction probabilities are generated.", "Then we present an analysis of the confidence miscalibration observed in NMT, which motivates the ideas discussed afterward.", "The Transformer has a stacked encoder-decoder structure.",
"When given a pair of parallel sentences $x = \{x_1, x_2, \ldots, x_S\}$ and $y = \{y_1, y_2, \ldots, y_T\}$, the encoder first transforms the input into a sequence of continuous representations $h = \{h_1^0, h_2^0, \ldots, h_S^0\}$, which are then passed to the decoder.", "The decoder is composed of a stack of N identical blocks, each of which includes self-attention, cross-lingual attention, and a fully connected feed-forward network.", "The outputs $h_t^l$ of the $l$-th block are fed to the successive block.", "At the $t$-th position, the model produces the translation probabilities $p_t$, a vocabulary-sized vector, based on the outputs of the $N$-th layer: $p_t = \mathrm{softmax}(W h_t^N + b)$ (1).", "During training, the model is optimized by minimizing the cross-entropy loss: $\mathcal{L}_{NMT} = -\sum_{t=1}^{T} \mathbf{y}_t \log(p_t)$ (2), where $\{W, b\}$ are trainable parameters and $\mathbf{y}_t$ is a one-hot vector.", "During inference, we implement beam search by selecting high-probability tokens from the generated probability distribution at each step.", "Modern neural networks have been found to yield miscalibrated confidence estimates (Guo et al., 2017; Hendrycks and Gimpel, 2017).", "This means that the prediction probability, as used at each inference step, is not reflective of its accuracy.", "The problem is more complex for structured outputs in NMT.", "We cannot judge a translation as an error even if it differs from the ground truth, as several semantically equivalent translations exist for the same source sentence.", "Thus we manually annotate each target word as OK or BAD on 200 Zh-En translations.", "Only definite mistakes are labeled as BAD, while other uncertain translations are overlooked.", "Figure 2 reports the density functions of prediction probabilities on OK and BAD translations.", "We observe severe miscalibration in NMT: overconfidence accounts for 35.8% of the cases when the model outputs BAD translations, and 24.9% of OK translations are produced with low probabilities.", "These issues make it challenging to identify model failures.", "This further drives us to establish an estimate that better describes model confidence.", "A well-calibrated confidence estimate should be able to tell when the NMT model probably fails.", "Ideally, we would like to learn a measure of confidence for each target-side translation, but this remains a thorny problem in the absence of ground truth for the confidence estimate.", "Inspired by Ask For Hints (DeVries and Taylor, 2018) on the image classification task, we define confidence as how many hints the NMT model needs to produce the correct translation.", "More hints mean lower confidence, that is, a higher possibility of failure.", "Motivation: We assume that the NMT model can ask for hints (look at ground-truth labels) during training, but each clue comes at the cost of a slight penalty.", "Intuitively, a good strategy is to independently make the predictions that the model is confident about, and to ask for clues when the model is uncertain about the decision.", "Under this assumption, we approximate the confidence level of each translation by counting the number of hints used.", "To enable the NMT model to ask for hints, we add a confidence estimation network (ConNet) in parallel with the original prediction branch, as shown in Figure 3.",
"The ConNet takes the hidden state of the decoder at the $t$-th step ($h_t$) as input and predicts a single scalar between 0 and 1: $c_t = \sigma(W_c h_t + b_c)$ (3), where $\theta_c = \{W_c, b_c\}$ are trainable parameters and $\sigma(\cdot)$ is the sigmoid function.", "If the model is confident that it can translate correctly, it should output $c_t$ close to 1.", "Conversely, the model should output $c_t$ close to 0 when it needs more hints.", "To offer the model hints during training, we adjust the softmax prediction probabilities by interpolating the ground-truth probability distribution $\mathbf{y}_t$ (a one-hot vector) into the original prediction.", "The degree of interpolation is decided by the generated confidence $c_t$: $p'_t = c_t \cdot p_t + (1 - c_t) \cdot \mathbf{y}_t$ (4).", "The translation loss is calculated using the modified prediction probabilities: $\mathcal{L}_{NMT} = -\sum_{t=1}^{T} \mathbf{y}_t \log(p'_t)$ (5).", "To prevent the model from minimizing the loss by always setting $c_t = 0$ (receiving all the ground truth), we add a log penalty to the loss function: $\mathcal{L}_{Conf} = -\sum_{t=1}^{T} \log(c_t)$ (6).", "The final loss is the sum of the translation loss and the confidence loss, weighted by the hyper-parameter $\lambda$: $\mathcal{L} = \mathcal{L}_{NMT} + \lambda \mathcal{L}_{Conf}$ (7).", "Under this setting, when $c \to 1$ (the model is quite confident), we can see that $p' \to p$ and $\mathcal{L}_{Conf} \to 0$, which is equal to a standard training procedure.", "In the case where $c \to 0$ (the model is quite unconfident), we see that $p' \to \mathbf{y}$ (the model obtains the correct labels).", "In this scenario, $\mathcal{L}_{NMT}$ approaches 0, but $\mathcal{L}_{Conf}$ becomes very large.", "Thus, the model can reduce the overall loss only when it successfully predicts which outputs are likely to be correct.", "Implementation Details: Due to the complexity of the Transformer architecture, several optimizations are required to prevent the confidence branch from degrading the performance of the translation branch.", "Do not provide hints at the initial stage.", "The early model is fragile, yet it lays the groundwork for the subsequent optimization.", "We find that affording hints in an early period leads to a significant performance drop.", "To this end, we propose to dynamically control the value of $\lambda$ (as in Equation 7) by the training step $s$: $\lambda(s) = \lambda_0 \cdot e^{-s/\beta_0}$ (8), where $\lambda_0$ and $\beta_0$ control the initial value and the decay speed of $\lambda$.", "We expect the weight of the confidence loss to be large at the beginning (so that $c \to 1$) and to allow hints during the middle and later stages.", "Do not use high-layer hidden states to predict confidence.", "We find that the highest-layer hidden state would be overburdened if used to predict translation and confidence simultaneously.", "So we suggest using low-layer hidden states for the confidence branch and leaving the translation branch unchanged (here, the decoder has 6 layers): $h'_t = \mathrm{AVE}(h_t^1 + h_t^2 + h_t^3)$ (9), where $h_t^l$ is the $l$-th layer hidden state in the decoder.", "Other combinations of low-layer hidden states are also possible, e.g., $h'_t = \mathrm{AVE}(h_t^1 + h_t^3)$.", "Do not let the model lazily learn complex examples.", "We encounter the situation where the model frequently requests hints rather than learning from difficulty.", "We follow DeVries and Taylor (2018) in giving hints with 50% probability.", "In practice, we apply Equation 4 to only half of the batch.",
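A compact sketch of the training step described above (Eqs. 3-9); the tensor shapes, the per-sentence hint coin, and the default hyper-parameter values are illustrative assumptions, not the authors' settings.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConNet(nn.Module):
    """Confidence branch: averaged low-layer decoder states -> c_t in (0,1)."""
    def __init__(self, d_model: int = 512):
        super().__init__()
        self.proj = nn.Linear(d_model, 1)

    def forward(self, low_layer_states):          # iterable of (B, T, d) tensors
        h = torch.stack(list(low_layer_states)).mean(dim=0)   # Eq. 9
        return torch.sigmoid(self.proj(h)).squeeze(-1)        # (B, T), Eq. 3

def hinted_loss(logits, targets, conf, lam, hint_prob=0.5):
    """Interpolate the ground truth into the prediction (Eq. 4) for about
    half of the batch, then combine translation and confidence losses
    (Eqs. 5-7)."""
    p = logits.softmax(dim=-1)                                # (B, T, V)
    y = F.one_hot(targets, p.size(-1)).float()
    use_hints = torch.rand(p.size(0), 1, 1) < hint_prob       # per-sentence coin
    c = conf.unsqueeze(-1)
    p_mod = torch.where(use_hints, c * p + (1 - c) * y, p)    # Eq. 4
    nll = -(y * (p_mod + 1e-9).log()).sum(-1)                 # Eq. 5, per token
    l_conf = -(conf + 1e-9).log()                             # Eq. 6
    return (nll + lam * l_conf).mean()                        # Eq. 7

def lam_schedule(step, lam0=1.0, beta0=10000.0):
    """Eq. 8: lambda decays exponentially with the training step."""
    return lam0 * math.exp(-step / beta0)

logits = torch.randn(2, 5, 100)
targets = torch.randint(0, 100, (2, 5))
conf = ConNet()(torch.randn(3, 2, 5, 512).unbind(0))          # 3 low layers
print(hinted_loss(logits, targets, conf, lam_schedule(20000)))
```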
"Confidence-based Label Smoothing: Smoothing labels is a typical way to prevent the network from miscalibration (Müller et al., 2019).", "It has been used in many state-of-the-art models by assigning a certain probability mass ($\epsilon_0$) to non-ground-truth labels (Szegedy et al., 2016).", "Here we attempt to employ our confidence estimate to improve smoothing.", "We propose a novel instance-specific confidence-based label smoothing technique, in which predictions with greater confidence receive less label smoothing, and vice versa.", "The amount of label smoothing applied to a prediction ($\epsilon_t$) scales with its confidence level relative to the batch-level average confidence $\bar{c}$, with $\epsilon_0$ being the fixed value used for vanilla label smoothing.", "This section first exhibits empirical studies on the Quality Estimation (QE) task, a primary application of confidence estimation.", "Then, we present experimental results for our confidence-based label smoothing, an extension of our confidence estimate to better smoothing in NMT.", "To evaluate the ability of our confidence estimate in mistake prediction, we experiment on extensive sentence/word-level QE tasks.", "Supervised QE requires large amounts of parallel data annotated with human evaluations, which is labor-intensive and impractical for low-resource languages.", "Here, we propose to address QE in an unsupervised way along with the training of the NMT model.", "We experiment on the WMT2020 QE shared tasks (http://www.statmt.org/wmt20/quality-estimation-task.html), including high-resource language pairs (English-German and English-Chinese) and mid-resource language pairs (Estonian-English and Romanian-English).", "These tasks provide source-language sentences, the corresponding machine translations, and the NMT models used to generate the translations.", "Each translation is annotated with a direct assessment (DA) by professional translators, ranging from 0-100, according to the perceived translation quality.", "We can evaluate the performance of QE in terms of Pearson's correlation with the DA scores.", "We compare our confidence estimate with four unsupervised QE metrics (Fomicheva et al., 2020): TP: the sentence-level translation probability normalized by the length T.", "Softmax-Ent: the average entropy of the softmax output distribution at each decoding step.", "Sent-Std: the standard deviation of the word-level log-probabilities $\log p(y_1), \ldots, \log p(y_T)$.", "D-TP: the expectation of the set of TP scores obtained by running K stochastic forward passes through the NMT model with model parameters $\hat{\theta}_k$ perturbed by Monte Carlo (MC) dropout (Gal and Ghahramani, 2016).", "We also report two supervised QE models: Predictor-Estimator (Kim et al., 2017), a weak neural approach that is usually set as the baseline system for supervised QE tasks.", "BERT-BiRNN (Kepler et al., 2019b), a strong QE model using a large-scale dataset for pre-training and quality labels for fine-tuning.", "We propose four confidence-based metrics: (1) Conf: the sentence-level confidence estimate averaged by length; (2) Sent-Std-Conf: the standard deviation of the word-level log-confidences $\log c_1, \ldots, \log c_T$; (3) D-Conf: similar to D-TP, we compute the expectation of Conf by running K forward passes through the NMT model; and (4) D-Comb: the combination of D-TP and D-Conf: $\text{D-Comb} = \frac{1}{K}\sum_{k=1}^{K}(\text{Conf}_k + \text{TP}_k)$ (10).", "Note that our confidence estimate is produced together with the translations.", "It is hard to let our model generate exactly the translations provided by WMT, even with a similar configuration.", "Thus, we train our model on the parallel sentences used to train the provided NMT models.", "Then, we employ force decoding on the given translations to obtain the existing unsupervised metrics and our estimates.", "We do not use any human judgment labels for supervision.",
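A sketch of how the dropout-based variants can be computed; `model.force_decode` is an assumed interface (returning per-token log-probabilities and confidences for a fixed translation), not a real API.

```python
import torch

@torch.no_grad()
def mc_dropout_metrics(model, src, tgt, k: int = 30):
    """Run K stochastic forward passes with dropout active, force-decoding
    the fixed translation tgt, and average the length-normalized scores:
    D-TP, D-Conf, and their combination D-Comb (Eq. 10)."""
    model.train()                                    # keep dropout on (MC dropout)
    tp_scores, conf_scores = [], []
    for _ in range(k):
        logp_t, c_t = model.force_decode(src, tgt)   # (T,), (T,) -- assumed API
        tp_scores.append(logp_t.mean().item())       # TP of this pass
        conf_scores.append(c_t.log().mean().item())  # Conf of this pass
    d_tp = sum(tp_scores) / k
    d_conf = sum(conf_scores) / k
    return d_tp, d_conf, d_tp + d_conf               # D-TP, D-Conf, D-Comb
```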
"Our confidence-based metrics substantially surpass the probability-based metrics (the first three lines in Table 1).", "Compared with the dropout-based method (D-TP), our metrics obtain comparable results on the mid-resource datasets while yielding better performance on the high-resource translation tasks.", "We note that the benefits brought by the MC dropout strategy are limited for our metrics, whereas they are significant for the probability-based methods.", "This also demonstrates the stability of our confidence estimate.", "In addition, the predictive power of MC dropout comes at the cost of computation, as performing forward passes through the NMT model is time-consuming and impractical for large-scale datasets.", "Our approach outperforms PredEst, a weak supervised method, on three tasks and further narrows the gap on Ro-En.", "Though existing unsupervised QE methods still fall behind the strong QE model (BERT-BiRNN), the exploration of unsupervised metrics remains meaningful for real-world deployment with limited annotated data.", "We also validate the effectiveness of our confidence estimate on QE tasks from a more fine-grained view.", "We randomly select 250 sentences from Zh-En NIST03 and obtain the NMT translations.", "Two graduate students are asked to annotate each target word as either OK or BAD.", "We assess the performance of failure prediction with standard metrics, which are introduced in Appendix A.", "Table 2 (BLEU; the Zh-En columns are MT03 / MT04 / MT05 / MT06 / MT08 / ALL): Transformer w/o LS: 48.77 / 48.50 / 47.45 / 46.65 / 35.93 / 45.50, En-De 26.98, De-En 34.27, Ro-En 29.71; + Standard LS: 49.14 / 48.48 / 50.53 / 47.44 / 36.23 / 45.83, En-De 27.40, De-En 34.52, Ro-En 30.03; + Confidence-based LS: 50. ...", "Experimental results are given in Table 3.", "We implement competitive failure prediction approaches, including Maximum Softmax Probability (MSP) (Hendrycks and Gimpel, 2017) and Monte Carlo Dropout (MCDropout) (Gal and Ghahramani, 2016).", "We find that our learned confidence estimate yields a better separation of OK and BAD translations than MSP.", "Compared with MCDropout, our metrics achieve competing performance with significant advantages in computational expense.", "Overall, the learned confidence estimate is a competitive indicator of translation precision compared with other unsupervised QE metrics.", "Moreover, the confidence branch added to the NMT system is a lightweight component.", "It allows each translation to come with a quality measurement without degradation of translation accuracy.", "The performance with the confidence branch is reported in Appendix B.", "4.2 Confidence-based Label Smoothing: We extend our confidence estimate to improve smoothing and experiment on different-scale translation tasks: WMT14 English-to-German (En-De), LDC Chinese-to-English (Zh-En) 3, WMT16 Romanian-to-English (Ro-En), and IWSLT14 German-to-English (De-En).", "(Footnote 3: The corpora include LDC2000T50, LDC2002T01, LDC2002E18, LDC2003E07, LDC2003E14, LDC2003T17, and LDC2004T07.)", "We use 4-gram BLEU (Papineni et al., 2002) to score the performance.", "More details about data processing and experimental settings are in Appendix C.",
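A sketch of the confidence-based smoothing loss; since the exact per-token schedule for eps_t is not reproduced in this text, the inverse scaling eps_t = eps0 * c_bar / c_t below is an assumption (low-confidence tokens get more smoothing), not the authors' formula.

```python
import torch

def confidence_based_ls_loss(logits, targets, conf, eps0: float = 0.1):
    """Cross-entropy with per-token smoothing mass eps_t derived from the
    learned confidence conf (B, T); c_bar is the batch-level average."""
    c_bar = conf.mean()
    eps_t = (eps0 * c_bar / conf.clamp_min(1e-6)).clamp(0.0, 0.9)  # assumed rule
    logp = logits.log_softmax(dim=-1)
    nll = -logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # gold-label term
    smooth = -logp.mean(dim=-1)                                # uniform term
    return ((1 - eps_t) * nll + eps_t * smooth).mean()

logits = torch.randn(4, 7, 1000)
targets = torch.randint(0, 1000, (4, 7))
conf = torch.rand(4, 7).clamp(0.05, 1.0)
print(confidence_based_ls_loss(logits, targets, conf).item())
```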
"As shown in Table 2, our confidence-based label smoothing outperforms standard label smoothing by adaptively tuning the amount of smoothing for each label.", "For the Zh-En task, our method improves performance over the Transformer w/o LS by 1.05 BLEU, which also exceeds standard label smoothing by 0.72 BLEU.", "We find that the improvements over standard label smoothing differ across the other language pairs (0.35 BLEU on En-De, 0.5 BLEU on De-En, and 0.79 BLEU on Ro-En).", "This can be attributed to the fact that the seriousness of miscalibration varies across language pairs and datasets (Wang et al., 2020).", "Experimental results with a larger search space (i.e., beam size = 30) are also given in Appendix C to support the above findings.", "Confidence estimation is particularly critical in real-world deployment, where noisy samples and out-of-distribution data are prevalent (Snoek et al., 2019).", "Given such abnormal inputs, neural network models are prone to be highly confident in misclassification (Nguyen et al., 2015).", "Thus, we need an accurate confidence estimate to detect potential failures caused by odd inputs by assigning them low confidence.", "This section explores whether our confidence estimate can accurately measure risk under those two conditions.", "We expect the model to require more hints to fit noisy labels, thereby predicting low confidence.", "To test this point, we experiment on the IWSLT14 De-En dataset containing 160k parallel sentences.", "We build several datasets with progressively more noisy samples by randomly replacing target-side words with other words from the vocabulary.", "We train on each dataset with the same configuration and picture the learned confidence estimates in Figure 4.", "The learned confidence estimate appears to make reasonable assessments.", "Table 4 (separating clean and noisy data; each cell is model probability / our confidence estimate): noise rate 20%: AUROC 93.21 / 96.73, AUPR 97.08 / 98.57, EER 13.50 / 7.00, DET 11.50 / 6.00; 40%: AUROC 94.89 / 95.73, AUPR 95.22 / 95.50, EER 11.88 / 9.50, DET 10.58 / 7.69; 60%: AUROC 93.37 / 94.92, AUPR 86.54 / 88.09, EER 14.00 / 10.08, DET 12.04 / 8.29; 80%: AUROC 91.63 / 95.44, AUPR 64.15 / 76.67, EER 16.06 / 10.13, DET 13.41 / 8.13.", "(1) It predicts low confidence on noisy samples but high confidence on clean ones.", "Specifically, the confidence estimate is much lower for examples with a higher pollution degree (darker in color).", "(2) With increasing noise in the dataset, the NMT model accordingly becomes more uncertain about its decisions.", "Large amounts of noise also raise a challenge for separating clean and noisy samples.", "We also compare our estimate with the model probability in terms of the accuracy of separating clean and noisy examples under varying pollution rates.", "We set clean data as the positive example and use the evaluation metrics listed in Appendix A.",
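The noisy-data construction and the separation scoring can be sketched as follows; word-level replacement and the use of scikit-learn metrics are assumptions about the setup.

```python
import random
from sklearn.metrics import roc_auc_score, average_precision_score

def corrupt_targets(pairs, noise_rate, vocab, seed=0):
    """For a noise_rate fraction of sentence pairs, replace every
    target-side word with a random word from the vocabulary; returns
    (src, tgt, label) with label 1 = clean (positive class), 0 = noisy."""
    rng = random.Random(seed)
    out = []
    for src, tgt in pairs:
        if rng.random() < noise_rate:
            out.append((src, [rng.choice(vocab) for _ in tgt], 0))
        else:
            out.append((src, list(tgt), 1))
    return out

def separation_scores(labels, scores):
    """Clean-vs-noisy separation: higher confidence should mean clean."""
    return roc_auc_score(labels, scores), average_precision_score(labels, scores)

data = corrupt_targets([(["a"], ["b", "c"])] * 10, 0.4, vocab=["x", "y", "z"])
labels = [lbl for _, _, lbl in data]
fake_conf = [0.9 if lbl == 1 else 0.2 for lbl in labels]   # toy scores
print(separation_scores(labels, fake_conf))                # (1.0, 1.0)
```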
"As shown in Table 4, our confidence estimate obtains better results in all cases, especially at high noise rates.", "Our metric improves the area under the precision-recall curve (AUPR) from 64.15% to 76.67% and reduces the detection error (DET) from 13.41% to 8.13% at an 80% noise rate.", "This demonstrates that our confidence estimate is more reliable for detecting potential risks induced by noisy data.", "For in-domain examples, we train an NMT model on the 2.1M-sentence LDC Zh→En news dataset and then sample 1k sentences from NIST2004 as the in-domain testbed.", "We select five out-of-domain datasets and extract 1k samples from each.", "Most of them are available for download on OPUS, as specified in Appendix D. In terms of unknown-word (UNK) rate, average input sentence length, and domain diversity, the descending order of distance from the in-domain dataset is WMT-news > Tanzil > Tico-19 > TED2013 > News-Commentary.", "Test sets closer to the in-domain dataset are intuitively harder to tell apart.", "We use the sentence-level posterior probability and the confidence estimate of the translation to separate in- and out-of-domain data.", "Evaluation metrics are given in Appendix A. Results are given in Table 5. We find that our approach performs comparably with the probability-based method on datasets with distinct domains (WMT-news and Tanzil).", "But when cross-domain knowledge is harder to detect (the last three lines in Table 5), our metric yields a better separation of in- and out-of-domain data.", "To better understand the behaviour of our confidence estimate on out-of-domain data, we visualize word clouds of the most confident/uncertain words ranked by model probability and by our measurement on a medical dataset (Tico-19) in Figure 5. The colors of words indicate their frequencies in the in-domain dataset.", "Our metric correctly separates in- and out-of-domain data from two aspects: (1) word frequency: the NMT model is certain about frequent words yet hesitates on rare words, as seen in Figure 5(b), while the colors in Figure 5(a) are relatively mixed; (2) domain relation: the most uncertain words ranked by our confidence estimate are domain-related, like patho and syndrome, while the most confident words are domain-unrelated (e.g., punctuation and prepositions).", "This phenomenon cannot be seen in Figure 5(a), showing that probabilities from softmax fall short of representing model uncertainty for domain-shifted data.", "The task of confidence estimation is crucial in real-world conditions, as it helps failure prediction (Corbière et al., 2019) and out-of-distribution detection (Hendrycks and Gimpel, 2017; Snoek et al., 2019; Lee et al., 2018).", "This section reviews recent research on confidence estimation and related applications to quality estimation for NMT.", "Only a few studies have investigated calibration in NMT.", "Müller et al. (2019) find that the NMT model is well-calibrated in training, while it is proven severely miscalibrated in inference (Wang et al., 2020), especially when predicting the end of a sentence (Kumar and Sarawagi, 2019).", "Owing to the complex structure of NMT, explorations of fixing miscalibration in NMT remain scarce.
"Wang et al. (2019) and Xiao et al. (2020) use Monte Carlo dropout to capture uncertainty in NMT, which is time-consuming and computationally expensive.", "Unlike them, we are the first to introduce learned confidence estimates into NMT.", "Our method is tailored to the Transformer architecture and NMT tasks, and is simple yet effective.", "QE aims to predict the quality of a translation produced by an MT system at test time, without access to reference translations.", "Recent supervised QE models are resource-heavy and require large amounts of annotated quality labels for training (Wang et al., 2018; Kepler et al., 2019a; Lu and Zhang, 2020), which are labor-intensive to collect and unavailable for low-resource languages.", "Exploring internal information from the NMT system to indicate translation quality is an alternative.", "Fomicheva et al. (2020) find that uncertainty quantification is competitive in predicting translation quality and is also complementary to supervised QE models (Wang et al., 2021).", "However, they rely on repeated Monte Carlo dropout (Gal and Ghahramani, 2016) to assess uncertainty, at a high computational cost.", "Our confidence estimate outperforms existing unsupervised QE metrics while remaining intuitive and easy to implement.", "In this paper, we propose to learn confidence estimates for NMT jointly with the training process.", "We demonstrate that the learned confidence can better indicate translation accuracy on extensive sentence- and word-level QE tasks and precisely measures the potential risk induced by noisy samples or out-of-domain data.", "We further extend the learned confidence estimate to improve label smoothing, outperforming the standard label smoothing technique.", "As our confidence estimate outlines how much the model knows, we plan to apply our work to designing a more suitable curriculum during training and to post-editing low-confidence translations in the future.", "This work is supported by the Natural Science Foundation of China under Grants No. 62122088, U1836221, and 62006224." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "method", "objective", "abstain", "objective", "abstain", "abstain", "result", "method", "abstain", "result", "objective", "abstain", "objective", "objective", "abstain", "objective", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "objective", "abstain", "other", "other", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "other", "other", "other", "other", "other", "abstain", "objective", "objective", "objective", "method", "other" ]
[ "High-quality phrase representations are essential to finding topics and related terms in documents (a.k.a. topic mining).", "Existing phrase representation learning methods either simply combine unigram representations in a context-free manner or rely on extensive annotations to learn context-aware knowledge.", "In this paper, we propose UCTOPIC , a novel unsupervised contrastive learning framework for context-aware phrase representations and topic mining.", "UCTOPIC is pretrained in a large scale to distinguish if the contexts of two phrase mentions have the same semantics.", "The key to pretraining is positive pair construction from our phrase-oriented assumptions.", "However, we find traditional in-batch negatives cause performance decay when finetuning on a dataset with small topic numbers.", "Hence, we propose cluster-assisted contrastive learning (CCL) which largely reduces noisy negatives by selecting negatives from clusters and further improves phrase representations for topics accordingly.", "UCTOPIC outperforms the state-of-the-art phrase representation model by 38 .", "2% NMI in average on four entity clustering tasks.", "Comprehensive evaluation on topic mining shows that UCTOPIC can extract coherent and diverse topical phrases.", "Topic modeling discovers abstract 'topics' in a collection of documents.", "A topic is typically modeled as a distribution over terms.", "High-quality phrase representations help topic models understand phrase semantics in order to find well-separated topics and extract coherent phrases.", "Some phrase representation methods (Wang et al., 2021; Yu and Dredze, 2015; Zhou et al., 2017) learn context-free representations by unigram embedding combination.", "Context-free representations tend to extract similar phrases mentions (e.g. great food and good food, see Section 4.3).", "Context-aware methods such as DensePhrase (Lee et al., The United States is a federation of 50 individual states.", ": The semantics of phrases are determined by their context.", ": Phrases that have the same mentions have the same semantics.", "2021) and LUKE (Yamada et al., 2020) need supervision from task-specific datasets or distant annotations with knowledge bases.", "Manual or distant supervision limits the ability to represent out-of-vocabulary phrases especially for domain-specific datasets.", "Recently, contrastive learning has shown effectiveness for unsupervised representation learning in visual (Chen et al., 2020) and textual (Gao et al., 2021) domains.", "In this work, we seek to advance state-of-the-art phrase representation methods and demonstrate that a contrastive objective can be extremely effective at learning phrase semantics in sentences.", "We present UCTOPIC , an Unsupervised Contrastive learning framework for phrase representations and TOPIC mining, which can produce superior phrase embeddings and have topic-specific finetuning for topic mining.", "To conduct contrastive learning for phrase representations, we first seek to produce contrastive pairs.", "Existing data augmentation methods for natural language processing (NLP) such as back translation (Xie et al., 2020), synonym replacement (Zhang et al., 2015) and text mix up (Zhang et al., 2018) are not designed for phrase-oriented noise, and thus cannot produce training pairs for phrase representation learning.", "In UCTOPIC , we propose two assumptions about phrase semantics to obtain contrastive pairs: 1. 
(1) the semantics of a phrase are determined by its context; (2) phrases that have the same mentions have the same semantics.", "As shown in Figure 1, given two sentences that contain the same phrase mention (e.g., United States), we can mask the phrase mentions, and the phrase semantics should stay the same based on assumption (1).", "Then, the phrase semantics of the two sentences are the same as each other given assumption (2).", "Therefore, we can use the two masked sentences as a positive pair in contrastive learning.", "The intuition behind the two assumptions is that we expect phrase representations from different sentences describing the same phrase to group together in the latent space.", "Masking the phrase mentions forces the model to learn representations from context, which prevents overfitting and representation collapse (Gao et al., 2021).", "Based on the two assumptions, our context-aware phrase representations can be pre-trained on a large corpus via a contrastive objective without supervision.", "For large-scale pre-training, we follow previous works (Chen et al., 2017; Henderson et al., 2017; Gao et al., 2021) and adopt in-batch negatives for training.", "However, we find that in-batch negatives undermine representation performance during finetuning (see Table 1).", "Because the number of topics is usually small in a finetuning dataset, examples in the same batch are likely to have the same topic.", "Hence, we cannot use in-batch negatives for data-specific finetuning.", "To solve this problem, we propose cluster-assisted contrastive learning (CCL), which leverages clustering results as pseudo labels and samples negatives from highly confident examples in clusters.", "Cluster-assisted negative sampling has two advantages: (1) it reduces potential positives in negative sampling compared to in-batch negatives; (2) the clusters are viewed as topics in documents; thus, cluster-assisted contrastive learning is a topic-specific finetuning process that pushes apart instances from different topics in the latent space.", "Based on the two assumptions and the cluster-assisted negative sampling introduced in this paper, we pre-train phrase representations on a large-scale dataset and then finetune on a specific dataset for topic mining in an unsupervised way.", "In our experiments, we select LUKE (Yamada et al., 2020) as our backbone phrase representation model and pre-train it on the English Wikipedia corpus (https://dumps.wikimedia.org/).", "To evaluate the quality of phrase representations, we conduct entity clustering on four datasets and find that pre-trained UCTOPIC achieves a 53.1% (NMI) improvement compared to LUKE.", "After learning data-specific features with CCL, UCTOPIC outperforms LUKE by 73.2% (NMI) on average.", "We perform topical phrase mining on three datasets, and comprehensive evaluation indicates that UCTOPIC extracts coherent and diverse topical phrases.", "Overall, our contributions are three-fold: we propose UCTOPIC, which produces superior phrase representations by unsupervised contrastive learning based on positive pairs from our phrase-oriented assumptions; to finetune on topic mining datasets, we propose a cluster-assisted negative sampling method for contrastive learning, which reduces the false negatives caused by in-batch negatives and accordingly further improves phrase representations for topics; and we conduct extensive experiments on entity type clustering and topic mining.
"Objective metrics and a user study show that UCTOPIC largely improves phrase representations and extracts more coherent and diverse topical phrases than existing topic mining methods.", "In this section, we introduce background knowledge about contrastive learning and our phrase encoder LUKE (Yamada et al., 2020).", "Contrastive learning aims to learn effective representations by pulling semantically close neighbors together and pushing apart non-neighbors in the latent space (Hadsell et al., 2006).", "Assume that we have a contrastive instance $\{x, x^+, x_1^-, \dots, x_{N-1}^-\}$ including one positive and $N-1$ negative instances, with representations $\{\mathbf{h}, \mathbf{h}^+, \mathbf{h}_1^-, \dots, \mathbf{h}_{N-1}^-\}$ from the encoder. We follow the contrastive learning framework (Sohn, 2016; Chen et al., 2020; Gao et al., 2021) and take cross-entropy as our objective function: $\ell = -\log \frac{e^{\mathrm{sim}(\mathbf{h}, \mathbf{h}^+)/\tau}}{e^{\mathrm{sim}(\mathbf{h}, \mathbf{h}^+)/\tau} + \sum_{i=1}^{N-1} e^{\mathrm{sim}(\mathbf{h}, \mathbf{h}_i^-)/\tau}}$ (1), where $\tau$ is a temperature hyperparameter and $\mathrm{sim}(\mathbf{h}_1, \mathbf{h}_2)$ is the cosine similarity $\frac{\mathbf{h}_1^\top \mathbf{h}_2}{\lVert\mathbf{h}_1\rVert \cdot \lVert\mathbf{h}_2\rVert}$.", "In this paper, our phrase encoder $E$ is the transformer-based model LUKE (Yamada et al., 2020).", "LUKE is a pre-trained language model that can directly output the representations of tokens and spans in sentences.", "Our phrase instance $x = (s, [l, r])$ includes a sentence $s$ and a character-level span $[l, r]$ ($l$ and $r$ are the left and right boundaries of a phrase).", "$E$ encodes the phrase $x$ and outputs the phrase representation $\mathbf{h} = E(x) = E(s, [l, r])$.", "Although LUKE can output span representations directly, we will show that span representations from LUKE are not able to represent phrases well (see Section 4.2).", "Different from LUKE, which is trained by predicting entities, UCTOPIC is trained by contrastive learning on phrase contexts.", "Hence, the phrase representations from UCTOPIC are context-aware and robust across domains.", "UCTOPIC is an unsupervised contrastive learning method for phrase representations and topic mining.", "Our goal is to learn a phrase encoder as well as topic representations, so that we can represent phrases effectively in general settings and find topics in documents in an unsupervised way.", "In this section, we introduce UCTOPIC from two aspects: (1) constructing positive pairs for phrases; (2) cluster-assisted contrastive learning.", "One critical problem in contrastive learning is how to construct positive pairs $(x, x^+)$.", "Previous works (Wu et al., 2020; Meng et al., 2021) apply augmentation techniques such as word deletion, reordering, and paraphrasing.", "However, these methods are not suitable for phrase representation learning.", "In this paper, we utilize the assumptions introduced in Section 1 to construct positive instances for contrastive learning.", "Consider an example to understand our positive instance generation process: in Figure 2(a), the phrase United States appears in two different sentences, 'He lived on the east coast of the United States' and 'How much does it cost to fly to the United States'.", "We expect the phrase (United States) representations from the two sentences to be similar to reflect the phrase semantics.", "To encourage the model to learn phrase semantics from context and prevent the model from comparing phrase mentions in contrastive learning, we mask the phrase mentions with the [MASK] token.", "The two masked sentences are used as positive instances.", "To decrease the inconsistency between training and evaluation caused by masking, we keep one phrase mention in a positive pair unchanged with probability $p$.
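A minimal PyTorch sketch of the objective in Eq. (1), assuming the anchor, positive, and negative phrase representations have already been produced by the encoder; tensor shapes and function names are ours.

```python
import torch
import torch.nn.functional as F

def phrase_contrastive_loss(h, h_pos, h_negs, tau=0.05):
    # h: [d] anchor; h_pos: [d] positive (same phrase, different masked context);
    # h_negs: [N-1, d] negatives; tau matches the paper's temperature of 0.05.
    pos = F.cosine_similarity(h.unsqueeze(0), h_pos.unsqueeze(0)) / tau   # [1]
    negs = F.cosine_similarity(h.unsqueeze(0), h_negs) / tau              # [N-1]
    logits = torch.cat([pos, negs]).unsqueeze(0)       # positive sits at index 0
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))
```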
"Formally, suppose we have a phrase instance $x = (s, [l, r])$ and its positive instance $x^+ = (s', [l', r'])$, where $s$ denotes the sentence and $[l, r]$ are the left and right boundaries of a phrase in $s$. We obtain the phrase representations $\mathbf{h}$ and $\mathbf{h}^+$ with the encoder $E$ and apply in-batch negatives for pre-training.", "The training objective of UCTOPIC becomes: $\ell = -\log \frac{e^{\mathrm{sim}(\mathbf{h}, \mathbf{h}^+)/\tau}}{\sum_{i=1}^{N} e^{\mathrm{sim}(\mathbf{h}, \mathbf{h}_i)/\tau}}$ (2), for a mini-batch of $N$ instances, where $\mathbf{h}_i$ is an instance in the batch.", "We find that contrastive learning with in-batch negatives on small datasets can undermine the phrase representations (see Section 4.2).", "Different from pre-training on a large corpus, in-batch negatives on small datasets usually contain instances that have semantics similar to the positives.", "For example, suppose a document collection has three topics and our batch size is 32.", "Then some instances in one batch are from the same topic, but the in-batch method views these instances as negatives of each other.", "In this case, contrastive learning receives noisy training signals, which results in decreasing performance.", "To reduce the noise in negatives while optimizing phrase representations according to the topics in documents, we propose cluster-assisted contrastive learning (CCL).", "The basic idea is to utilize prior knowledge from pre-trained representations and clustering to reduce the noise in the negatives.", "Specifically, we first find the topics in documents with a clustering algorithm applied to the pre-trained phrase representations from UCTOPIC.", "The centroids of the clusters are considered topic representations for phrases.", "After computing the cosine distance between phrase instances and centroids, we select the $t$ percent of instances that are closest to the centroids and assign pseudo labels to them.
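The selection step can be sketched as follows, assuming pre-trained phrase embeddings as a NumPy array; K-Means with ℓ2-normalized vectors as a stand-in for cosine-based clustering, and the function names, are our assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def assign_pseudo_labels(phrase_embs, n_topics, t=0.1):
    # Normalize so Euclidean K-Means approximates cosine clustering.
    embs = phrase_embs / np.linalg.norm(phrase_embs, axis=1, keepdims=True)
    km = KMeans(n_clusters=n_topics, n_init=10).fit(embs)
    cents = km.cluster_centers_ / np.linalg.norm(km.cluster_centers_, axis=1, keepdims=True)
    cos_dist = 1.0 - np.sum(embs * cents[km.labels_], axis=1)
    keep = np.zeros(len(embs), dtype=bool)
    for c in range(n_topics):
        idx = np.where(km.labels_ == c)[0]
        n_keep = max(1, int(t * len(idx)))          # top-t fraction closest to centroid
        keep[idx[np.argsort(cos_dist[idx])[:n_keep]]] = True
    return km.labels_, keep                          # pseudo labels + confident mask
```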
"Then, the label of a phrase mention $p_m$ is determined by the majority vote of the instances $\{x_1^m, \dots, x_n^m\}$ that contain $p_m$, where $n$ is the number of sentences assigned pseudo labels.", "In this way, we get some prior knowledge of phrase mentions for the following contrastive learning.", "See Figure 2(b): three phrase mentions (London, James Gunn and Apple), which belong to three different clusters, are labeled with different topic categories.", "Suppose we have a topic set $C$ in our documents. With the phrases and their pseudo labels, we construct positive pairs $(x_{c_i}, x^+_{c_i})$ for topic $c_i \in C$ by the method introduced in Section 3.1.", "To obtain contrastive instances, we randomly select phrases $p^m_{c_j}$ and instances $x^m_{c_j}$ from topic $c_j$ as negative instances $x^-_{c_j}$ in contrastive learning, where $c_j \in C$ and $c_j \neq c_i$.", "As shown in Figure 2(b), we construct positive pairs for the phrase London, and use two phrases, James Gunn and Apple, from the other two clusters to randomly select negative instances.", "With pseudo labels, our method can avoid negative instances that have semantics similar to London.", "The training objective of finetuning is: $\ell = -\log \frac{e^{\mathrm{sim}(\mathbf{h}_{c_i}, \mathbf{h}^+_{c_i})/\tau}}{e^{\mathrm{sim}(\mathbf{h}_{c_i}, \mathbf{h}^+_{c_i})/\tau} + \sum_{c_j \neq c_i} e^{\mathrm{sim}(\mathbf{h}_{c_i}, \mathbf{h}^-_{c_j})/\tau}}$ (3).", "As with the masking strategy in pre-training, we conduct masking for all training instances but keep $x^+_{c_i}$ and $x^-_{c_j}$ unchanged with probability $p$.", "To infer the topic $y$ of a phrase instance $x$, we compute the cosine similarity between the phrase representation $\mathbf{h}$ and the topic representations $\mathbf{h}_{c_i}$, $c_i \in C$.", "The nearest-neighbor topic of $x$ is used as the phrase topic.", "Formally, $y = \arg\max_{c_i \in C} \mathrm{sim}(\mathbf{h}, \mathbf{h}_{c_i})$ (4).", "In this section, we evaluate the effectiveness of UCTOPIC pre-training by contrastive learning.", "We start with entity clustering to compare the phrase representations from different methods.", "For topic modeling, we evaluate the topical phrases from three aspects and compare UCTOPIC to other topic modeling baselines.", "To generate the training corpus, we use English Wikipedia and extract text with hyperlinks as phrases.", "Phrases that have the same entity ids in Wikidata or have the same mentions are considered the same phrases (i.e., phrases with the same semantics).", "We enumerate all sentence pairs containing the same phrase as positive pairs in contrastive learning.", "After processing, the pre-training dataset has 11.6 million sentences and 108.8 million training instances.", "For pre-training, we start from a pretrained LUKE-BASE model (Yamada et al., 2020).", "We follow previous works (Gao et al., 2021; Soares et al., 2019) and use two losses concurrently: the masked language model loss and the contrastive learning loss with in-batch negatives.", "Our pretraining learning rate is 5e-5, the batch size is 100, and the model is optimized with AdamW for 1 epoch.", "The probability $p$ of keeping phrase mentions unchanged is 0.5 and the temperature $\tau$ in the contrastive loss is set to 0.05.
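A sketch of the two label-related steps above, the majority vote over a mention's pseudo-labeled instances and the nearest-topic inference of Eq. (4); cluster centroids stand in for the topic representations $\mathbf{h}_{c_i}$, and the names are illustrative.

```python
import numpy as np

def phrase_mention_label(instance_labels):
    # Majority vote over the pseudo labels of all instances of one mention.
    values, counts = np.unique(instance_labels, return_counts=True)
    return values[np.argmax(counts)]

def infer_topic(h, topic_reps):
    # Eq. (4): assign a phrase instance to the nearest topic by cosine similarity.
    # h: [d] phrase representation; topic_reps: [|C|, d] centroids.
    h = h / np.linalg.norm(h)
    t = topic_reps / np.linalg.norm(topic_reps, axis=1, keepdims=True)
    return int(np.argmax(t @ h))
```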
"(3) MIT Movie (MIT-M) (Liu et al., 2013) contains 12,218 sentences with Title and Person entities.", "(4) W-NUT 2017 (Derczynski et al., 2017) focuses on identifying unusual entities in the context of emerging discussions and contains 5,690 sentences and six kinds of entities 6 .", "Finetuning Setup .", "The learning rate for finetuning is 1e-5.", "We select t (percent of instances) from { 5 , 10 , 20 , 50 } .", "The probability p of keeping phrase mentions unchanged and temperature in contrastive loss are the same as in pre-training settings.", "We apply K-Means to get pseudo labels for all experiments.", "Because UCTOPIC is an unsupervised method, we use all data to finetune and evaluate.", "All results for finetuning are the best results during training process.", "We follow previous clustering works (Xu et al., 2017; Zhang et al., 2021) and adopt Accuracy (ACC) and Normalized Mutual Information (NMI) to evaluate different approaches.", "Compared Baseline Methods .", "To demonstrate the effectiveness of our pre-training method and finetuning with cluster-assisted contrastive learning (CCL), we compare baseline methods from two aspects: (1) Pre-trained token or phrase representations: Glove (Pennington et al., 2014).", "Pre-trained word embeddings on 6B tokens and dimension is 300 .", "We use averaging word embeddings as the representations of phrases.", "BERT (Devlin et al., 2019).", "Obtains phrase representations by averaging token representations (BERT-Ave.) or following CGEx-pan (Zhang et al., 2020) to substitute phrases with the [MASK] token, and use [MASK] representations as phrase embeddings (BERT-MASK).", "LUKE (Yamada et al., 2020).", "Use as backbone model to show the effectiveness of our contrastive learning for pre-training and finetuning.", "DensePhrase (Lee et al., 2021).", "Pre-trained phrase representation learning in a supervised way for question answering problem.", "We use a pre-trained model released from the authors to get phrase representations.", "Phrase-BERT (Wang et al., 2021).", "Context-agnostic phrase representations from pretraining.", "We use a pre-trained model from the authors and get representations by phrase mentions.", "Ours w/o CCL .", "Pre-trained phrase representations of UCTOPIC without cluster-assisted contrastive finetuning.", "(2) Fine-tuning methods based on pre-trained representations of UCTOPIC .", "Classifier .", "We use pseudo labels as supervision to train a MLP layer and obtain a classifier of phrase categories.", "In-Batch Contrastive Learning .", "Same as contrastive learning for pre-training which uses in-batch negatives.", "Autoencoder .", "Widely used in previous neural topic and aspect extraction models (He et al., 2017; Iyyer et al., 2016; Tulkens and van Cra-nenburgh, 2020).", "We follow ABAE (He et al., 2017) to implement our autoencoder model for phrases.", "Experimental Results.", "We report evaluation results of entity clustering in Table 1. 
"Overall, UCTOPIC achieves the best results on all datasets and metrics.", "Specifically, UCTOPIC improves over the state-of-the-art method (Phrase-BERT) by 38.2% NMI on average, and outperforms our backbone model (LUKE) by 73.2% NMI.", "When we compare different pre-trained representations, we find that our method (Ours w/o CCL) outperforms the other baselines on three datasets, the exception being MIT-M.", "There are two reasons: (1) all words in the MIT-M dataset are lower-cased, which is inconsistent with our pretraining dataset, and this inconsistency between training and test causes performance decay; (2) sentences from MIT-M are usually short (10.16 words on average) compared to other datasets (e.g., 17.9 words in W-NUT2017).", "Hence, UCTOPIC can obtain only limited contextual information from short sentences.", "However, the performance decay caused by these two factors can be eliminated by our CCL finetuning on the datasets, since on MIT-M UCTOPIC achieves better results (0.661 NMI) than Phrase-BERT (0.575 NMI) after CCL.", "On the other hand, compared to other finetuning methods, our CCL finetuning can further improve the pre-trained phrase representations by capturing data-specific features.", "The improvement is up to 50% NMI on the MIT-M dataset.", "Ours w/ Class. performs worse than our pre-trained UCTOPIC in most cases, which indicates that pseudo labels from clustering are noisy and cannot be used directly as supervision for representation learning.", "Ours w/ In-B. behaves similarly to Ours w/ Class., which verifies our motivation for using CCL instead of in-batch negatives.", "An autoencoder can improve pre-trained representations on three datasets, but the margins are limited and the performance even drops on W-NUT2017.", "[Table 2: Ablation study on the input of phrase instances on W-NUT 2017.
Input | UCTopic ACC | UCTopic NMI | LUKE ACC | LUKE NMI
Context+Mention | 0.44 | 0.29 | 0.39 | 0.21
Mention | 0.32 (-27%) | 0.15 (-48%) | 0.28 (-28%) | 0.10 (-52%)
Context | 0.43 (-3%) | 0.16 (-44%) | 0.27 (-31%) | 0.07 (-67%)]", "Compared to other finetuning methods, our CCL finetuning consistently improves pre-trained phrase representations across different domains.", "Context or Mentions.", "To investigate the source of UCTOPIC phrase semantics (i.e., phrase mentions or context), we conduct an ablation study on the type of input and compare UCTOPIC to LUKE.", "To eliminate the influence of repeated phrase mentions on clustering results, we use only one phrase instance (i.e., a sentence and the position of a phrase) for each phrase mention.", "As shown in Table 2, there are three types of inputs: (1) Context+Mention: the same input as the experiments in Table 1, i.e., the whole sentence containing the phrase; (2) Mention: only phrase mentions are used as inputs to the two models; (3) Context: we mask the phrase mentions in sentences so the models can only get information from the context.", "We can see that UCTOPIC gets more information from context (0.43 ACC, 0.16 NMI) than from mentions (0.32 ACC, 0.15 NMI).", "Compared to LUKE, UCTOPIC is more robust to phrase mentions: when predicting with only context, UCTOPIC drops 3% ACC and 44% NMI vs.
LUKE's 31% ACC and 67% NMI.", "In this section, we apply UCTOPIC to topical phrase mining and conduct a human evaluation to show that our model outperforms previous topic model baselines.", "Experiment Setup.", "To find topical phrases in documents, we first extract noun phrases with spaCy (https://spacy.io/) noun chunks and remove single pronoun words.", "Before CCL finetuning, we obtain the number of topics for each dataset by computing the Silhouette Coefficient (Rousseeuw, 1987).", "Specifically, we randomly sample 10K phrases from the dataset and apply K-Means clustering to the pre-trained UCTOPIC phrase representations with different numbers of clusters.", "We compute Silhouette Coefficient scores for different topic numbers; the number with the largest score is used as the topic number for the dataset.", "Then, we conduct CCL on the dataset with the same settings as described in Section 4.2.", "Finally, after obtaining the topic distribution $z_x \in \mathbb{R}^{|C|}$ for a phrase instance $x$ in a sentence, we get context-agnostic phrase topics by using the averaged topic distribution $z_{p_m} = \frac{1}{n}\sum_{1 \le i \le n} z_{x_i^m}$, where the phrase instances $\{x_i^m\}$ in different sentences share the same phrase mention $p_m$.", "The topic of a phrase mention is the one with the highest probability in $z_{p_m}$.", "Dataset.", "We conduct topical phrase mining on three datasets from the news, review, and computer science domains.", "Gest: we collect restaurant reviews from Google Local (https://www.google.com/maps) and use 100K reviews containing 143,969 sentences for topical phrase mining.", "KP20k (Meng et al., 2017) is a collection of titles and abstracts from computer science papers; 500K sentences are used in our experiments.", "KPTimes (Gallina et al., 2019) includes news articles from the New York Times from 2006 to 2017 and 10K news articles from the Japan Times.", "[Table 3: The numbers of topics in the three datasets. Gest: 22; KP20k: 10; KPTimes: 16.]", "The number of topics determined by the Silhouette Coefficient for each dataset is shown in Table 3.
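A sketch of the topic-number search described above, using scikit-learn's KMeans and silhouette_score on a 10K-phrase sample; the candidate range and function name are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def choose_num_topics(phrase_embs, candidates=range(5, 31), sample=10000, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(phrase_embs), size=min(sample, len(phrase_embs)), replace=False)
    X = phrase_embs[idx]
    best_k, best_score = None, -1.0
    for k in candidates:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
        score = silhouette_score(X, labels)       # larger = better-separated clusters
        if score > best_score:
            best_k, best_score = k, score
    return best_k
```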
"Compared Baseline Methods.", "Phrase-LDA (Mimno, 2015): an LDA model that incorporates phrases by simply converting them into unigrams (e.g., city view to city_view).", "TopMine (El-Kishky et al., 2014): a scalable pipeline that partitions a document into phrases, then uses the phrases as constraints to ensure all words are placed under the same topic.", "PNTM (Wang et al., 2021): a topic model with Phrase-BERT that uses an autoencoder to reconstruct a document representation; this model is viewed as the state-of-the-art topic model.", "We do not include topic models such as LDA (Blei et al., 2003), PD-LDA (Lindsey et al., 2012), TNG (Wang et al., 2007), and KERT (Danilevsky et al., 2014) as baselines, because these models are already compared against in TopMine and PNTM.", "For Phrase-LDA and PNTM, we use the same phrase list produced by UCTOPIC; TopMine uses phrases produced by itself.", "We evaluate the models from three aspects: (1) topical separation; (2) phrase coherence; (3) phrase informativeness and diversity.", "To evaluate topical separation, we perform the phrase intrusion task following previous work (El-Kishky et al., 2014; Chang et al., 2009).", "The phrase intrusion task involves a set of questions asking humans to discover the 'intruder' phrase among other phrases.", "In our experiments, each question has 6 phrases; 5 of them are randomly sampled from the top 50 phrases of one topic and the remaining phrase is randomly chosen from another topic (top 50 phrases).", "Annotators are asked to select the intruder phrase.", "We sample 50 questions for each method and each dataset (600 questions in total) and shuffle all questions.", "Because these questions are sampled independently, we ask 4 annotators to answer them; each annotator answers 150 questions on average.", "The results of this task evaluate how well the phrases are separated by topic.", "The evaluation results are shown in Figure 3. UCTOPIC outperforms the other baselines on all three datasets, which means our model can find well-separated topics in documents.", "To evaluate phrase coherence within one topic, we follow ABAE (He et al., 2017) and ask annotators to evaluate whether the top 50 phrases of one topic are coherent (i.e., most phrases represent the same topic).", "3 annotators evaluate the four models on the Gest and KP20k datasets.", "The numbers of coherent topics are shown in Table 4. We can see that UCTOPIC, PNTM, and TopMine recognize similar numbers of coherent topics, but the numbers for Phrase-LDA are lower than those of the other three models.", "For a coherent topic, each of the top phrases is labeled as correct if the phrase reflects the related topic.", "Same as ABAE, we adopt precision@n to evaluate the results.", "Figure 4 shows the results; we can see that UCTOPIC substantially outperforms the other models.", "UCTOPIC maintains high precision for large n, where the precision of the other models decreases.", "Finally, to evaluate phrase informativeness and diversity, we use tf-idf and word diversity (word-div.) to evaluate the top topical phrases.
"Basically, informative phrases cannot be very common phrases in a corpus (e.g., good food in Gest), and we use tf-idf to evaluate the importance of a phrase.", "To eliminate the influence of phrase length, we use the averaged word tf-idf in a phrase as the phrase tf-idf.", "Specifically, $\text{tf-idf}(p, d) = \frac{1}{m}\sum_{1 \le i \le m} \text{tf-idf}(w_i^p)$, where $d$ denotes the document and $p$ is the phrase.", "In our experiments, a document is a sentence in a review.", "In addition, we hope that our phrases are diverse within a topic instead of expressing the same meaning (e.g., good food and great food).", "To evaluate the diversity of the top phrases, we calculate the ratio of distinct words among all words.", "Formally, given a list of phrases $[p_1, p_2, \dots, p_n]$, we tokenize the phrases into a word list $w = [w_1^{p_1}, w_2^{p_1}, \dots, w_m^{p_n}]$; $w'$ is the set of unique words in $w$. The word diversity is computed as $\frac{|w'|}{|w|}$.", "We only evaluate the coherent topics labeled in the phrase coherence study; since the number of coherent topics for Phrase-LDA is smaller than the others, we evaluate the other three models.", "We compute tf-idf and word-div. on the top 10 phrases and use the averaged value over topics as the final scores.", "[Table 5: tf-idf and word diversity (word-div.) of the top topical phrases on Gest and KP20k.]", "Results are shown in Table 5. PNTM and UCTOPIC achieve similar tf-idf scores, because the two methods use the same phrase lists extracted with spaCy.", "UCTOPIC extracts the most diverse phrases within a topic, because our phrase representations are more context-aware.", "In contrast, since PNTM obtains representations dependent on phrase mentions, the phrases from PNTM contain the same words and hence are less diverse.", "[Table 6: Top topical phrases on Gest (Drinks, Dishes) and KP20k (Programming); the minimum phrase frequency is 3.
Drinks (UCTOPIC): lager, whisky, vodka, whiskey, rum, own beer, ale, craft cocktail, booze, tap beer
Drinks (PNTM): drinks, bar drink, just drink, alcohol, liquor, booze, drink order, ok drink, alcoholic beverage, beverage
Dishes (UCTOPIC): cauliflower fried rice, chicken tortilla soup, chicken burrito, fried calamari, roast beef sandwich, grill chicken sandwich, buffalo chicken sandwich, pull pork sandwich, chicken biscuit, tortilla soup
Dishes (PNTM): great burger, great elk burger, great hamburger, good burger, good hamburger, awesome steak, burger joint, woody 's bbq, excellent burger, beef burger
Dishes (TopMine): mac cheese, ice cream, potato salad, french toast, chicken sandwich, cream cheese, fried chicken, fried rice, french fries, bread pudding
Programming (UCTOPIC): markup language, scripting language, language construct, java library, programming structure, xml syntax, module language, programming framework, object-oriented language, python module
Programming (TopMine): software development, software engineering, machine learning, object oriented, open source, design process, design implementation, programming language, source code, support vector machine]", "Case Study.", "We compare the top phrases from UCTOPIC, PNTM, and TopMine in Table 6.", "From the examples, we can see that the phrases are consistent with our user study and diversity evaluation.", "Although the phrases from PNTM are coherent, their diversity is lower than the others (e.g., drinks, bar drink, just drink from Gest), because context-agnostic representations make similar phrase mentions group together.", "The phrases from TopMine are diverse but not coherent in some cases (e.g., machine learning and support vector machine in the programming topic).", "In contrast, UCTOPIC can extract coherent and diverse topical phrases from documents.", "Many attempts have been made to extract topical phrases via LDA (Blei et al., 2003).", "Wallach (2006) incorporated a bigram language model into LDA via a hierarchical Dirichlet generative probabilistic model to share the topic across the words within a bigram.", "TNG (Wang et al., 2007) applied additional latent variables and word-specific multinomials to model bigrams and combined bigrams to form n-gram phrases.", "PD-LDA (Lindsey et al., 2012) used a hierarchical Pitman-Yor process to share the same topic among all words in a given n-gram.", "Danilevsky et al. (2014) ranked the resulting phrases based on four heuristic metrics.", "TopMine (El-Kishky et al., 2014) proposed to restrict all constituent terms within a phrase to share the same latent topic and to assign a phrase to the topic of its constituent words.", "Compared to previous topic mining methods, UCTOPIC builds on the success of pre-trained language models and unsupervised contrastive learning on a large-scale dataset.", "Therefore, UCTOPIC provides high-quality pre-trained phrase representations and state-of-the-art finetuning for topic mining.", "Early works in phrase representation build upon a composition function that combines component word embeddings into a simple phrase embedding.", "Yu and Dredze (2015) implemented this function via rule-based composition over word vectors.", "Zhou et al. (2017) applied a pairwise GRU model and datasets such as PPDB (Pavlick et al., 2015) to learn phrase representations.", "Phrase-BERT (Wang et al., 2021) composed token embeddings from BERT and pretrained on positive instances produced by a GPT-2-based diverse paraphrasing model (Krishna et al., 2020).
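As a small illustration of the two metrics defined above, here is a sketch of phrase tf-idf (averaged word tf-idf, with a hypothetical precomputed lookup) and word diversity $|w'|/|w|$:

```python
def word_diversity(phrases):
    # |w'| / |w|: ratio of distinct words among all words of the top phrases.
    words = [w for p in phrases for w in p.split()]
    return len(set(words)) / len(words)

def phrase_tfidf(phrase, tfidf_lookup):
    # Average word tf-idf as the phrase tf-idf; tfidf_lookup maps a word to its
    # (precomputed) tf-idf value and is a placeholder for a real index.
    words = phrase.split()
    return sum(tfidf_lookup.get(w, 0.0) for w in words) / len(words)

print(word_diversity(["good food", "great food", "fried rice"]))  # 5 distinct / 6 total
```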
"Lee et al. (2021) learned phrase representations with supervision from reading comprehension tasks and applied the representations to open-domain QA.", "Other works learned phrase embeddings for specific tasks such as semantic parsing (Socher et al., 2011) and machine translation (Bing et al., 2015).", "In this paper, we present an unsupervised contrastive learning method for pre-training general-purpose phrase representations and for finetuning topic-specific phrase representations.", "In conclusion, we propose UCTOPIC, a contrastive learning framework that can effectively learn phrase representations without supervision.", "To finetune on topic mining datasets, we propose cluster-assisted contrastive learning, which reduces noise by selecting negatives from clusters.", "During finetuning, our phrase representations are optimized for the topics in the document and hence are further improved.", "We conduct comprehensive experiments on entity clustering and topical phrase mining.", "Results show that UCTOPIC largely improves phrase representations.", "Objective metrics and a user study indicate that UCTOPIC can extract coherent and diverse topical phrases." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "objective", "abstain", "result", "abstain", "method", "abstain", "abstain", "objective", "method", "result", "abstain", "method", "objective", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "method", "objective", "abstain", "other", "other", "abstain", "other", "abstain", "other", "abstain", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "objective", "result", "method", "abstain", "abstain" ]
[ "Grammatical Error Correction (GEC) aims to correct writing errors and help language learners improve their writing skills.", "However, existing GEC models tend to produce spurious corrections or fail to detect lots of errors.", "The quality estimation model is necessary to ensure learners get accurate GEC results and avoid misleading from poorly corrected sentences.", "Well-trained GEC models can generate several high-quality hypotheses through decoding, such as beam search, which provide valuable GEC evidence and can be used to evaluate GEC quality.", "However, existing models neglect the possible GEC evidence from different hypotheses.", "This paper presents the Neural Verification Network (VERNet) for GEC quality estimation with multiple hypotheses.", "VERNet establishes interactions among hypotheses with a reasoning graph and conducts two kinds of attention mechanisms to propagate GEC evidence to verify the quality of generated hypotheses.", "Our experiments on four GEC datasets show that VERNet achieves state-of-the-art grammatical error detection performance, achieves the best quality estimation results, and significantly improves GEC performance by reranking hypotheses.", "All data and source codes are available at https://github.com/ thunlp/VERNet .", "Grammatical Error Correction (GEC) systems primarily aim to serve second-language learners for proofreading.", "These systems are expected to detect grammatical errors, provide precise corrections, and guide learners to improve their language ability.", "With the rapid increase of second-language learners, GEC has drawn growing attention from numerous researchers of the NLP community.", "Existing GEC systems usually inherit the seq2seq architecture (Sutskever et al., 2014) to correct grammatical errors or improve sentence fluency.", "These systems employ beam search decoding to generate correction hypotheses and rerank hypotheses with quality estimation models from K best decoding (Kiyono et al., 2019; Kaneko et al., 2020) or model ensemble (Chollampatt and Ng, 2018a) to produce more appropriate and accurate grammatical error corrections.", "Such models thrive from edit distance and language models (Chollam-patt and Ng, 2018a; Chollampatt et al., 2019; Yannakoudakis et al., 2017; Kaneko et al., 2019, 2020).", "Chollampatt and Ng (2018b) further consider the GEC accuracy in quality estimation by directly predicting the official evaluation metric, F 0 .", "5 score.", "The K -best hypotheses from beam search usually derive from model uncertainty (Ott et al., 2018).", "These uncertainties of multi-hypotheses come from model confidence and potential ambiguity of", "lin-(a) CoNLL2014 (ann. 1).", "guistic variation (Fomicheva et al., 2020), which can be used to improve machine translation performance (Wang et al., 2019b).", "Fomicheva et al. (2020) further leverage multi-hypotheses to make convinced machine translation evaluation, which is more correlated with human judgments.", "Their work further demonstrates that multi-hypotheses from well-trained neural models have the ability to provide more hints to estimate generation quality.", "For GEC, the hypotheses from the beam search decoding of well-trained GEC models can provide some valuable GEC evidence.", "We illustrate the reasons as follows.", "Beam search can provide better GEC results.", "The GEC performance of the top-ranked hypothesis and the best one has a large gap in beam search.", "For two existing GEC systems, Zhao et al. (2019) and Kiyono et al. 
"For two existing GEC systems, Zhao et al. (2019) and Kiyono et al. (2019), the F0.5 scores are 58.99 and 62.03 on the CoNLL2014 dataset.", "However, the F0.5 scores of the best GEC results in their beams reach 73.56 and 76.82.", "Beam search candidates are more grammatical: as shown in Figure 1, the hypotheses from well-trained GEC models with beam search usually win the favor of language models, even for hypotheses ranked toward the rear.", "This illustrates that these hypotheses are usually more grammatical than the source sentences.", "Beam search candidates can provide valuable GEC evidence: as shown in Figure 2, hypotheses at different beam ranks have almost the same Recall score, which demonstrates that all hypotheses in the beam can provide some valuable GEC evidence.", "Existing quality estimation models for GEC (Chollampatt and Ng, 2018b) regard hypotheses independently and neglect the potential GEC evidence from different hypotheses.", "To fully use the valuable GEC evidence from GEC hypotheses, we propose the Neural Verification Network (VERNet) to estimate GEC quality with modeled interactions over multi-hypotheses.", "Given a source sentence and K hypothesis sentences from the beam search decoding of a basic GEC model, VERNet establishes hypothesis interactions by regarding ⟨source, hypothesis⟩ pairs as nodes and constructing a fully-connected reasoning graph to propagate GEC evidence among multi-hypotheses.", "Then VERNet employs two kinds of attention mechanisms on the reasoning graph, node interaction attention and node selection attention, to summarize and aggregate the necessary GEC evidence from other hypotheses to estimate token quality.", "Our experiments show that VERNet can pick up necessary GEC evidence from the multi-hypotheses provided by GEC models and help verify the quality of GEC hypotheses.", "VERNet helps GEC models generate more accurate GEC results and benefits most grammatical error types.", "The GEC task is designed for automatic proofreading.", "Large-scale annotated corpora (Mizumoto et al., 2011; Dahlmeier et al., 2013; Bryant et al., 2019) bring an opportunity for building fully data-driven GEC systems.", "Existing neural models regard GEC as a natural language generation (NLG) task and usually use the sequence-to-sequence architecture (Sutskever et al., 2014) to generate correction hypotheses with beam search decoding (Yuan and Briscoe, 2016; Chollampatt and Ng, 2018a).", "Transformer-based architectures (Vaswani et al., 2017) show their effectiveness in NLG tasks and have also been employed to achieve convincing correction results (Grundkiewicz et al., 2019; Kiyono et al., 2019).", "The copying mechanism has also been introduced into GEC models (Zhao et al., 2019) to better align tokens from the source sentence to the hypothesis sentence.", "To further accelerate the generation process, some work proposes non-autoregressive GEC models that leverage a single encoder to detect and correct grammatical errors in parallel (Awasthi et al., 2019; Malmi et al., 2019; Omelianchuk et al., 2020).", "Two directions have been explored to further improve GEC systems.", "The first treats GEC as a low-resource language generation problem and focuses on data augmentation for a grammar-sensitive and language-proficient GEC system (Junczys-Dowmunt et al., 2018; Kiyono et al., 2019).", "Various weak-supervision corpora have been leveraged, such as Wikipedia edit histories (Lichtarge et al., 2019), GitHub edit histories (Hagiwara and Mita, 2020) and confusing word sets (Grundkiewicz et al., 2019).
"Besides, lots of work generates grammatical errors through generation models or round-trip translation (Ge et al., 2018; Wang et al., 2019a; Xie et al., 2018).", "Kiyono et al. (2019) further consider different data augmentation strategies to conduct better GEC pretraining.", "Reranking GEC hypotheses from K-best decoding or GEC model ensembles (Hoang et al., 2016; Chollampatt and Ng, 2018b) with quality estimation models provides another promising direction for achieving better GEC performance.", "Some methods evaluate whether hypotheses satisfy linguistic and grammatical rules; for this purpose, they employ language models (Chollampatt and Ng, 2018a; Chollampatt et al., 2019) or grammatical error detection (GED) models to estimate hypothesis quality.", "GED models (Rei, 2017; Rei and Søgaard, 2019) estimate hypothesis quality at both the sentence level (Kaneko et al., 2019) and the token level (Yannakoudakis et al., 2017).", "Chollampatt and Ng (2018b) further estimate GEC quality by considering correction accuracy: they establish source-hypothesis interactions with an encoder-decoder architecture and learn to directly predict the official evaluation score F0.5.", "The pre-trained language model BERT (Devlin et al., 2019) has proven its effectiveness in producing contextual token representations, achieving better quality estimation (Kaneko et al., 2019; Chollampatt et al., 2019) and improving GEC performance by fusing BERT representations (Kaneko et al., 2020).", "However, existing quality estimation models regard each hypothesis independently and neglect the interactions among multi-hypotheses, which can also benefit quality estimation (Fomicheva et al., 2020).", "This section describes the Neural Verification Network (VERNet), which estimates GEC quality with multi-hypotheses, as shown in Figure 3.", "Given a source sentence s and its corresponding hypotheses $C = \{c_1, \dots, c_k, \dots, c_K\}$ generated by a GEC model, we first regard each source-hypothesis pair $\langle s, c_k \rangle$ as a node and fully connect all nodes to establish multi-hypothesis interactions.", "Then VERNet leverages BERT to get the representation of each token in the $\langle s, c_k \rangle$ pairs (Sec. 3.1) and conducts two kinds of attention mechanisms to propagate and aggregate GEC evidence from other hypotheses to verify token quality (Sec. 3.2).", "Finally, VERNet estimates hypothesis quality by aggregating the token-level quality estimation scores (Sec. 3.3).", "VERNet is trained end-to-end with supervision from golden labels (Sec. 3.4).", "Pre-trained language models, e.g., BERT (Devlin et al., 2019), show their advantages in producing contextual token representations for various NLP tasks.", "Hence, given a source sentence $s$ with $m$ tokens and the $k$-th hypothesis $c_k$ with $n$ tokens, we use BERT to encode the source-hypothesis pair $\langle s, c_k \rangle$ and get its representation $H^k$: $H^k = \mathrm{BERT}(\text{[CLS]}\, s\, \text{[SEP]}\, c_k\, \text{[SEP]})$.", "VERNet conducts two kinds of attention mechanisms, node interaction attention and node selection attention, to verify token quality with the verification representation $V^k$ of the $k$-th node, which gathers the supporting evidence for estimating token quality from multi-hypotheses.", "The node interaction attention first summarizes useful GEC evidence from the $l$-th node into the fine-grained representation $V^{l \to k}$ (Sec. 3.2.1).", "Then the node selection attention further aggregates the fine-grained representations $V^{l \to k}$ with scores $\alpha^l$ according to each node's confidence (Sec. 3.2.2).
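Before the attention layers, each node is just a BERT encoding of the source-hypothesis pair (Sec. 3.1). A sketch with the Hugging Face transformers API, which the paper's implementation builds on; the specific checkpoint and the example pair are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed checkpoint
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode_pair(source, hypothesis):
    # H^k = BERT([CLS] s [SEP] c_k [SEP]): one contextual vector per token.
    inputs = tokenizer(source, hypothesis, return_tensors="pt",
                       truncation=True, max_length=120)
    with torch.no_grad():
        H = encoder(**inputs).last_hidden_state   # [1, seq_len, hidden]
    return H.squeeze(0)

H_k = encode_pair("She go to school .", "She goes to school .")
print(H_k.shape)
```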
3.2.2).", "Finally, we can calculate the verification representation V k to verify the token's quality of each node.", "The node interaction attention l k attentively reads tokens in the l -th node and picks up supporting evidence towards the k -th node to build fine-grained node representations V l k .", "For the p -th token in the k -th node, w kp , we first calculate the node interaction attention weight l k q according to the relevance between w kp and the q -th token in the l -th node, w lq : l k q = softmax q (( H kp ) T W H lq ) , (2) where W is a parameter.", "H kp and H lq are the representations of w kp and w lq .", "Then all token representations of l -th node are aggregated: V l k p = m + n +2 (cid:88) q =1 ( l k q H lq ) .", "The node selection attention measures node importance and is used to aggregate supporting evidence from the fine-grained node representation V l k of the l -th node.", "We leverage attention-over-attention mechanism (Cui et al., 2017) to conduct source h ls and hypotheses h lh representations to calculate the l -th node selection attention score l .", "Then we get the node verification representation V kp with the node selection attention l .", "To calculate the node selection attention l , we establish an interaction matrix M l between the source and hypothesis sentences of the l -th node.", "Each element M lij in M l is calculated with the relevance between i -th source token and j -th hypothesis token (include [SEP] tokens): M lij = ( H li ) T W H lm +1+ j , (4) where W is a parameter.", "Then we calculate attention scores lsi and lhj along the source dimension and hypothesis dimension, respectively: lsi = 1 n + 1 n +1 (cid:88) j =1 softmax i ( M lij ) , (5) lhj = 1 m + 1 m +1 (cid:88) i =1 softmax j ( M lij ) .", "(6) Then the representations of source sentence and hypothesis sentence are calculated: h ls = m +1 (cid:88) i =1 lsi H li , h lh = n +1 (cid:88) j =1 lhj H lm +1+ j .", "(7) Finally, the node selection attention l of l -th node is calculated for the evidence aggregation: l = softmax l ( Linear (( h ls h lh ); h ls ; h lh )) , (8) where is the element-wise multiplication operator and ; is the concatenate operator.", "where V = { V 1 , . . . , V kp , . . . 
"For the $p$-th token $w_p^k$ in the $k$-th node, the probability $P(y \mid w_p^k)$ of the quality label $y$ is calculated from the verification representation $V_p^k$: $P(y \mid w_p^k) = \mathrm{softmax}_y(\mathrm{Linear}((H_p^k \circ V_p^k); H_p^k; V_p^k))$ (10), where $\circ$ is the element-wise multiplication operator and $;$ is the concatenation operator.", "We average the token-level quality estimates $P(y=1 \mid w_p^k)$ over the hypothesis tokens as the hypothesis quality estimation score $f(s, c_k)$ for the pair $\langle s, c_k \rangle$: $f(s, c_k) = \frac{1}{n+1}\sum_{p=m+2}^{m+n+2} P(y=1 \mid w_p^k)$ (11).", "We conduct joint training with token-level supervision.", "Source labels and hypothesis labels are used, denoting the grammatical quality of the source sentences and the GEC accuracy of the hypotheses, respectively.", "The cross-entropy loss for the $p$-th token $w_p^k$ in the $k$-th node is calculated as $\mathcal{L}(w_p^k) = \mathrm{CrossEntropy}(y^*, P(y \mid w_p^k))$ (12), using the ground-truth token label $y^*$.", "Then the training loss of VERNet is calculated as $\mathcal{L} = \frac{1}{K} \cdot \frac{1}{m+n+2} \sum_{k=1}^{K} \sum_{p=1}^{m+n+2} \mathcal{L}(w_p^k)$ (13).", "Datasets.", "We use FCE (Yannakoudakis et al., 2011), BEA19 (Bryant et al., 2019) and NUCLE (Dahlmeier et al., 2013) to construct the training and development sets.", "Four testing scenarios, FCE, BEA19 (Restricted), CoNLL-2014 (Ng et al., 2014) and JFLEG (Napoles et al., 2017), are used to evaluate model performance.", "Detailed data statistics are presented in Table 1.", "We do not incorporate additional training corpora, for fair comparison.", "Basic GEC Model.", "To generate correction hypotheses, we take one of the state-of-the-art autoregressive GEC systems (Kiyono et al., 2019) as our basic GEC model and keep the same settings.", "The beam size of our baseline model is set to 5 (Kiyono et al., 2019), and all beam search hypotheses are kept in our experiments.", "We generate quality estimation labels for tokens in both source and hypothesis sentences with ERRANT (Bryant et al., 2017; Felice et al., 2016); these labels indicate grammatical correctness and GEC accuracy, respectively.", "As shown in Table 2, ERRANT annotates edit operations (delete, insert, and replace) towards the ground-truth corrections.", "In terms of such annotations, each token is labeled as correct (1) or incorrect (0).", "Evaluation Metrics.", "We introduce the evaluation metrics for three tasks: token-level quality estimation, sentence-level quality estimation, and GEC.", "To evaluate model performance on token-level quality estimation, we employ the same evaluation metrics as previous GED models (Rei, 2017; Rei and Søgaard, 2019; Yannakoudakis et al., 2017), including Precision, Recall, and F0.5.", "F0.5 is our primary evaluation metric.", "For the evaluation of sentence-level quality estimation, we employ the same evaluation metrics as the previous quality estimation model (Chollampatt and Ng, 2018b), covering two scenarios: (1) GEC evaluation metrics for the top-reranked hypothesis, and (2) the Pearson Correlation Coefficient (PCC) between reranking scores and golden scores (F0.5) for all hypotheses.
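Looking back at the scoring and training objectives (Eqs. (10)-(13)), here is a minimal sketch of the token-level quality head and the sentence score f(s, c_k); the exact feature combination follows the concatenation described after Eq. (10) and should be read as an assumption rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QualityHead(nn.Module):
    def __init__(self, hidden):
        super().__init__()
        self.out = nn.Linear(3 * hidden, 2)   # labels: incorrect (0) / correct (1)

    def forward(self, H, V, hyp_mask):
        # H, V: [seq, hidden]; hyp_mask: [seq] marks the n+1 hypothesis positions.
        logits = self.out(torch.cat([H * V, H, V], dim=-1))   # Eq. (10) features
        p_ok = F.softmax(logits, dim=-1)[:, 1]
        score = (p_ok * hyp_mask).sum() / hyp_mask.sum()       # Eq. (11)
        return logits, score
```

Training then reduces to F.cross_entropy(logits, token_labels), averaged over nodes and positions, matching Eqs. (12)-(13).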
) for all hypotheses.", "To evaluate GEC performance, we adopt GLEU (Napoles et al., 2015) on the JFLEG dataset.", "The official tool of the BEA19 shared task, ERRANT (Bryant et al., 2019), is used to calculate Precision, Recall, and F0.5 scores for the other datasets.", "For the CoNLL-2014 dataset, the M2 evaluation (Dahlmeier and Ng, 2012) is also adopted as our main evaluation.", "Baselines.", "BERT-fuse (GED) (Kaneko et al., 2020) is compared in our experiments; it trains BERT with the GED task and fuses the BERT representations into the Transformer.", "For quality estimation, we consider two groups of baseline models in our experiments; more details of these models can be found in Appendix A.1.", "(1) BERT-based language models.", "We employ three BERT-based language models to estimate the quality of hypotheses.", "BERT-LM (Chollampatt et al., 2019) measures hypothesis quality with the perplexity of the language model.", "BERT-GQE (Kaneko et al., 2019) is trained with annotated GEC data and estimates whether the hypothesis has grammatical errors.", "We also implement BERT-GED (SRC), which predicts token-level grammaticality indicator labels and is inspired by GED models (Yannakoudakis et al., 2017).", "BERT shows significant improvement over LSTM-based models on the GED task (Appendix A.2).", "Hence the LSTM-based models are omitted from our experiments.", "(2) GEC accuracy estimation models.", "These models further consider source-hypothesis interactions to evaluate GEC accuracy.", "We take NQE (Chollampatt and Ng, 2018b), a strong baseline, in our experiments.", "NQE employs an encoder-decoder (predictor) architecture to encode source-hypothesis pairs and predicts the F0.5 score with an estimator architecture.", "All of their proposed architectures, NQE (CC), NQE (RC), NQE (CR), and NQE (RR), are compared.", "For NQE (XY), X indicates the predictor architecture and Y indicates the estimator architecture.", "X and Y can be recurrent (R) or convolutional (C) neural networks.", "In addition, we also employ BERT to encode source-hypothesis pairs and then predict the F0.5 score, implementing the BERT-QE model.", "We also introduce two baselines, BERT-GED (HYP) and BERT-GED (JOINT).", "They leverage BERT to encode source-hypothesis pairs and are supervised with token-level quality estimation labels.", "BERT-GED (HYP) is trained with supervision from hypotheses, and BERT-GED (JOINT) is supervised with labels from both source and hypothesis sentences.", "Implementation Details.", "In all experiments, we use the base versions of BERT (Devlin et al., 2019) and ELECTRA (Clark et al., 2020).", "BERT is a widely used pretrained language model trained with the masked language modeling task.", "ELECTRA is trained with the replaced token detection task, which aims to predict whether a token is original or was replaced by a BERT-based generator during pretraining.", "ELECTRA is a discriminator-based pretrained language model, and its pretraining task closely resembles the GED task.", "We regard BERT as our main model for text encoding and leverage ELECTRA to evaluate the generalization ability of our model.", "Both BERT and ELECTRA inherit Hugging Face's PyTorch implementation (Wolf et al., 2020).", "Adam (Kingma and Ba, 2015) is utilized for parameter optimization.", "We set the maximum sentence length to 120 for source and hypothesis sentences, the learning rate to 5e-5, the batch size to 8, and the number of gradient accumulation steps to 4 during training.", "For hypothesis reranking, we leverage the learning-to-rank
method, Coordinate Ascent (CA) (Metzler and Croft, 2007), to aggregate the ranking features and the basic GEC score into the final ranking score.", "We assign the hypotheses with the highest F0.5 score as positive instances and the others as negative ones.", "The Coordinate Ascent method is implemented with RankLib.", "We conduct experiments to study the performance of VERNet from three aspects: token-level quality estimation, sentence-level quality estimation, and VERNet's effectiveness in GEC models.", "We then present a case study to qualitatively analyze the effectiveness of the two proposed types of attention in VERNet.", "We first evaluate VERNet's effectiveness on token-level quality estimation.", "BERT-GED (SRC) is the previous state-of-the-art GED model (Kaneko and Komachi, 2019).", "Two additional variants of BERT-GED, HYP and JOINT, are used as baselines by considering the first-ranked GEC hypothesis in beam search decoding.", "As shown in Table 3, two scenarios, source and hypothesis, are used to evaluate model performance.", "The source scenario evaluates the ability to estimate grammaticality, as in GED models (Rei and Søgaard, 2019).", "The hypothesis scenario tests quality estimation ability on GEC accuracy.", "For the source scenario, BERT-GED (JOINT) outperforms BERT-GED (SRC), illustrating that the GEC result can help estimate the grammatical quality of source sentences.", "For the hypothesis scenario, BERT-GED (JOINT) shows better performance than BERT-GED (HYP), benefiting from the supervision from source sentences.", "For both scenarios, BERT-VERNet shows further improvement over BERT-GED (JOINT).", "Such improvements demonstrate that the varied GEC evidence from multiple hypotheses benefits token-level quality estimation.", "Moreover, the detection-style pretrained model ELECTRA (Clark et al., 2020) is also used as our sentence encoder.", "VERNet improves substantially on all scenarios and datasets, which illustrates the strong ability of ELECTRA in token-level quality estimation and the generalization ability of VERNet.", "In this part, we evaluate VERNet's performance on sentence-level quality estimation by reranking hypotheses from beam search decoding.", "Baselines can be divided into two groups: language-model-based and GEC-accuracy-based quality estimation models.", "The former focus on grammaticality and fluency, including BERT-LM, BERT-GQE and BERT-GED (SRC).", "The others focus on estimating GEC accuracy, including NQE, BERT-QE, and BERT-GED (HYP)/(JOINT).", "As shown in Table 4, we find that language-model-based quality estimation prefers higher recall at lower precision, which leads to more redundant corrections.", "Considering only grammaticality is insufficient, since such unnecessary correction suggestions may mislead users.", "By contrast, GEC-accuracy-based quality estimation models achieve much better Precision and F0.5, and provide more precise feedback for users.", "Furthermore, BERT-GED (HYP) outperforms BERT-QE, showing that token-level supervision provides finer-granularity signals that help the model better distinguish subtle differences among hypotheses.", "VERNet outperforms all baselines, which supports our claim that multiple hypotheses from beam search provide valuable GEC evidence and enable more effective quality estimation for generated GEC hypotheses.", "This part explores the effectiveness of VERNet in improving GEC models.", "We apply VERNet by
aggregating scores from the basic GEC model and VERNet for hypothesis reranking.", "As shown in Table 5, two baseline models are compared in our experiments, Basic GEC (Kiyono et al., 2019) and BERT-fuse (GED) (Kaneko et al., 2020).", "Compared to BERT-fuse (GED), BERT-VERNet achieves comparable performance on CoNLL-2014 and a larger improvement on BEA19.", "This demonstrates that reranking hypotheses with VERNet provides an effective way to improve basic GEC model performance without changing the Transformer architecture.", "The R2L models incorporate four right-to-left Transformer models to improve GEC performance.", "However, these R2L models are not publicly available.", "ELECTRA-VERNet incorporates only one model and achieves comparable performance on BEA19 and JFLEG.", "Figure 4 presents VERNet's performance on different grammatical error types.", "We plot the F0.5 scores of both the basic GEC model and VERNet on BEA19.", "VERNet achieves improvement on most types and performs significantly better on word morphology and word usage errors, such as Noun Inflection (NOUN:INFL) and Pronoun (PRON).", "Such results illustrate that VERNet is able to leverage clues learned from multiple hypotheses to verify GEC quality.", "However, we also find that VERNet degrades GEC performance on a few error types, e.g., Contraction (CONTR).", "Annotation biases may cause this decrease on CONTR errors.", "For example, both n't and not are grammatically acceptable, but annotators with different correction standards usually come up with different corrections.", "We select one case from CoNLL-2014 and visualize the node interaction and node selection attention weights to study what VERNet learns from the multiple hypotheses of beam search, as shown in Figure 5.", "Given a source sentence, Do one who suffered from this disease keep it a secret of infrom their relatives ?, and its five hypotheses from the Basic GEC Model, we plot the node interaction attention weights towards the word suffers in the hypothesis of node 2, which is assigned a higher score by BERT-VERNet.", "Figure 5 (visualization of attention weights) shows how the word usage suffers is verified.", "The node interaction attention accurately picks up the associated tokens Does from nodes 1, 3, and 4, and suffers from node 5.", "Does and suffers indicate the present tense and provide sufficient evidence to verify the quality of suffers in node 2.", "For the node selection attention, the hypothesis of node 2 receives more attention than the other nodes, as it is more appropriate than the other hypotheses.", "This demonstrates that the node selection attention is effective at selecting high-quality corrections through source-hypothesis interactions.", "The attention patterns are intuitive and effective, which further demonstrates VERNet's ability to model the interactions among multiple hypotheses for better quality estimation.", "This paper presents VERNet for GEC quality estimation with multiple hypotheses.", "VERNet models the interactions of multiple hypotheses by building a reasoning graph, and then extracts clues with two kinds of attention: node selection attention and node interaction attention.", "They summarize and aggregate GEC evidence from multiple hypotheses to verify the quality of tokens.", "Experiments on four datasets show that VERNet achieves state-of-the-art GED and quality estimation performance, and improves one published state-of-the-art GEC system.", "In the future, we will explore the impact of different
kinds of hypotheses used in VERNet.", "We thank the reviewers and Shuo Wang for their valuable comments and advice.", "This research is mainly supported by the Science & Technology Innovation 2030 Major Project New Generation AI (Grant no. 2020AAA0106500), and supported in part by a project from the Shanghai-Tsinghua International Innovation Center and the funds of the Beijing Advanced Innovation Center for Language Resources under Grant TYZ19005." ]
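The two VERNet attention mechanisms described above (Eqs. 2-8) can be made concrete in code. The following is a minimal PyTorch sketch, not the authors' implementation: the module layout, tensor shapes, and the use of a single shared bilinear parameter $W$ for both Eq. 2 and Eq. 4 are our assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NodeAttention(nn.Module):
    """Sketch of VERNet's node interaction and node selection attention."""
    def __init__(self, hidden_size: int):
        super().__init__()
        # Bilinear relevance parameter; assumed shared between Eq. 2 and Eq. 4.
        self.W = nn.Parameter(torch.randn(hidden_size, hidden_size) * 0.02)
        self.linear = nn.Linear(3 * hidden_size, 1)

    def node_interaction(self, H_k: torch.Tensor, H_l: torch.Tensor) -> torch.Tensor:
        # H_k, H_l: (m+n+2, d) token representations of the k-th and l-th nodes.
        scores = H_k @ self.W @ H_l.T        # token-pair relevance
        alpha = F.softmax(scores, dim=-1)    # Eq. 2: softmax over tokens q of node l
        return alpha @ H_l                   # Eq. 3: fine-grained representations V^{l->k}

    def node_selection_logit(self, H_src: torch.Tensor, H_hyp: torch.Tensor) -> torch.Tensor:
        # H_src: (m+1, d) source tokens; H_hyp: (n+1, d) hypothesis tokens of node l.
        M = H_src @ self.W @ H_hyp.T                # Eq. 4: interaction matrix M^l
        beta_src = F.softmax(M, dim=0).mean(dim=1)  # Eq. 5: source-side attention
        beta_hyp = F.softmax(M, dim=1).mean(dim=0)  # Eq. 6: hypothesis-side attention
        h_src = beta_src @ H_src                    # Eq. 7: source representation h^l_s
        h_hyp = beta_hyp @ H_hyp                    # Eq. 7: hypothesis representation h^l_h
        feat = torch.cat([h_src * h_hyp, h_src, h_hyp], dim=-1)
        return self.linear(feat)  # softmax over all nodes' logits gives gamma^l (Eq. 8)
```

Stacking the per-node logits and applying a softmax across nodes yields the node selection attention of Eq. 8; the fine-grained representations of all nodes are then aggregated with those weights to form the verification representations.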
[ "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "objective", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "other", "other" ]
[ "We propose a method to learn contextualized and generalized sentence representations using contrastive self-supervised learning.", "In the proposed method, a model is given a text consisting of multiple sentences.", "One sentence is randomly selected as a target sentence.", "The model is trained to maximize the similarity between the representation of the target sentence with its context and that of the masked target sentence with the same context.", "Simultaneously, the model minimize the similarity between the latter representation and the representation of a random sentence with the same context.", "We apply our method to discourse relation analysis in English and Japanese and show that it outperforms strong baseline methods based on BERT, XLNet, and RoBERTa.", "Understanding the meaning of a sentence is one of the main interests of natural language processing.", "In recent years, distributed representations are considered to be promising to capture the meaning of a sentence flexibly (Conneau et al., 2017; Arora et al., 2017; Kiros et al., 2015).", "One typical way to obtain distributed sentence representations is to learn a task that is somehow related to sentence meaning.", "For example, sentence representations trained to solve natural language inference (Bowman et al., 2015; Williams et al., 2018) are known to be helpful for many language understanding tasks such as sentiment analysis and semantic textual similarity (Conneau et al., 2017; Wieting and Gimpel, 2018; Cer et al., 2018; Reimers and Gurevych, 2019).", "However, there is an arbitrariness in the choice of tasks used for training.", "Furthermore, there is a size limitation on manually annotated data, which makes it hard to learn a wide range of language expressions.", "A solution to these problems is self-supervised learning, which has been used with great success (Mikolov et al., 2013; Peters et al., 2018; Devlin et al., 2019).", "For example, inspired by skip-grams (Mikolov et al., 2013), Kiros et al. (2015) proposed to train a sequence-to-sequence model to generate sentences before and after a sentence, and use the encoder to compute sentence representations.", "Inspired by masked language modeling in BERT, Zhang et al. (2019) and Huang et al. 
(2020) presented methods to learn contextualized sentence representations through the task of restoring a masked sentence from its context.", "In self-supervised sentence representation learning, sentence generation is typically used as the objective.", "Such an objective aims to learn a sentence representation specific enough to restore the sentence, including minor details.", "On the other hand, when we would like to handle the meaning of a larger block such as paragraphs and documents (often called context analysis) and consider sentences as a basic unit, a more abstract and generalized sentence representation would be helpful.", "We propose a method to learn contextualized and generalized sentence representations by contrastive self-supervised learning (van den Oord et al., 2019; Chen et al., 2020).", "In the proposed method, a model is given a text consisting of multiple sentences and computes their contextualized sentence representations.", "During training, one sentence is randomly selected as a target sentence.", "The model is trained to maximize the similarity between the representation of the target sentence with its context, to which we refer as $s_{pos}$, and the representation of the masked target sentence with the same context, to which we refer as $s_{anc}$.", "Simultaneously, the model is trained to minimize the similarity between the latter representation $s_{anc}$ and the representation of a random sentence with the same context as the target sentence, to which we refer as $s_{neg}$.", "From the viewpoint of optimizing $s_{anc}$, this can be seen as a task to capture a generalized meaning that contextually valid sentences commonly have, utilizing $s_{pos}$ and $s_{neg}$ as clues.", "From the viewpoint of optimizing $s_{pos}$, this can be seen as a task to generalize the meaning of a sentence to the level of $s_{anc}$.", "We show the effectiveness of the proposed method using discourse relation analysis as an example task of context analysis.", "Our experiments on English and Japanese datasets show that our method outperforms strong baseline methods based on BERT (Devlin et al., 2019), XLNet (Yang et al., 2019), and RoBERTa (Liu et al., 2019).", "Figure 1 illustrates the overview of our method.", "The encoder takes an input text consisting of $T$ ($> 1$) sentences and computes their contextualized sentence representations.", "The encoder is trained by contrastive self-supervised learning.", "The encoder is a Transformer (Vaswani et al., 2017) with the same architecture as BERT (Devlin et al., 2019).", "Following Liu and Lapata (2019), we insert the CLS and SEP tokens at the beginning and the end of each sentence, respectively.", "The representation of the CLS token is used as the sentence representation of the sentence that follows it.", "We propose a contrastive objective to learn contextualized sentence representations, aiming to capture sentences' generalized meaning.", "We first randomly select one sentence from the input text as a target sentence.", "In Figure 1, the $k$-th sentence ($1 \le k \le T$) is selected as the target sentence.", "We refer to the representation of the target sentence as $s_{pos}$.", "We then create another input text by masking the target sentence with the SENT-MASK token.", "We refer to the representation of the masked sentence as $s_{anc}$.", "We finally create yet another input text by replacing the target sentence with a random sentence.", "We refer to the representation of the replaced random sentence as $s_{neg}$.", "Our contrastive objective is to maximize the similarity between
$s_{pos}$ and $s_{anc}$ while minimizing the similarity between $s_{neg}$ and $s_{anc}$.", "We use the dot product as the similarity measure.", "When using $N$ random sentences per input text, the contrastive loss $\mathcal{L}$ is calculated as follows: $\mathcal{L} = -\log \frac{\exp(\langle s_{pos}, s_{anc} \rangle)}{\sum_{s \in S} \exp(\langle s, s_{anc} \rangle)}$, (1) where $\langle \cdot, \cdot \rangle$ is the dot product and $S = \{s_{pos}, s^1_{neg}, \dots, s^N_{neg}\}$.", "To optimize $s_{anc}$, the model needs to capture a generalized meaning that contextually valid sentences commonly have, using $s_{pos}$ and $s_{neg}$ as clues.", "On the other hand, to optimize $s_{pos}$, the model needs to generalize the meaning of a sentence to the level of $s_{anc}$.", "For comparison, we train the encoder through the task of generating a masked sentence from its context.", "We first mask a sentence in the input text with the SENT-MASK token.", "Given the text, the encoder computes the representation of the masked sentence.", "Then, given the representation, a decoder generates the masked sentence in an autoregressive manner.", "The decoder's architecture is almost the same as the encoder's, but it has an additional layer on top to predict a probability distribution over words.", "We use teacher forcing and compute the generative loss by summing the cross-entropy at each generation step.", "The encoder and decoder are trained by jointly optimizing the generative loss and the standard masked language modeling loss.", "We use an English Wikipedia dump and BookCorpus (Zhu et al., 2015) to create input texts.", "1 Because the original BookCorpus is no longer available, we used a replica created by a publicly available crawler ( https://github.com/soskek/bookcorpus ).", "We first split the texts into sentences using spaCy (Honnibal et al., 2020).", "We then extract as many consecutive sentences as possible such that the length does not exceed the maximum input length of 128.", "When a sentence is so long that an input text including it cannot be created while meeting the length constraint, we give up using that sentence.", "The number of sentences in an input text, $T$, was 4.91 on average.", "After creating input texts, we assign random sentences to each of them.", "Random sentences are extracted from the same document.", "We assigned three random sentences per input text, i.e., $N = 3$.", "We initialize the encoder's parameters using the weights of RoBERTa-BASE (Liu et al., 2019).", "The other parameters are initialized randomly.", "We train the model for 10,000 steps with a batch size of 512.", "We use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 2e-5, $\beta_1 = 0.9$, $\beta_2 = 0.999$, linear warmup of the learning rate over the first 1,000 steps, and linear decay of the learning rate.", "We use a Japanese Wikipedia dump to create input texts.", "We split the texts into clauses using KNP, a widely used Japanese syntactic parser (Kawahara and Kurohashi, 2006).", "We create input texts and assign random sentences to them in the same way as in Section 2.4.1.", "The number of sentences (clauses) in an input text, $T$, was 6.42 on average.", "We initialize the encoder's parameters with BERT-BASE pretrained on a Japanese Wikipedia dump.", "The other details are the same as in Section 2.4.1.", "We show the effectiveness of the proposed method using discourse relation analysis as a concrete example of context analysis.", "Discourse relation analysis is the task of predicting the logical relation between two arguments.", "An argument roughly corresponds to a sentence or a
clause.", "We conduct experiments on English and Japanese datasets.", "PDTB 3.0 is a corpus of English newspaper with discourse relation labels (Prasad et al., 2018).", "We focus on implicit discourse relation analysis, where no explicit discourse marker exists.", "Following Kim et al. (2020), we use the Level-2 labels with more than 100 examples and use 12-fold cross-validation.", "KWDLC is a Japanese corpus consisting of leading three sentences of web documents with discourse relation labels (Kawahara et al., 2014; Kishi-moto et al., 2018).", "As KWDLC does not discriminate between implicit discourse relations and explicit discourse relations, we target both.", "KWDLC has seven types of discourse relations, including NORELATION .", "The evaluation protocol is 5-fold cross-validation.", "Following Kim et al. (2020), each fold is split at the document level rather than the individual example level.", "When a model uses context, the model is given the paragraph that contains arguments of interest.", "In this setting, first, the paragraph is split into sentences.", "Arguments are treated as a single sentence, and their context is split in the way described in Section 2.4.", "Then, an encoder computes the representation of each sentence in the same manner as 2 Available at https://alaginrc.nict.go.jp/ nict-bert/index.html .", "in Section 2.1.", "Given the concatenation of the argu-ments' representations, a relation classifier predicts the discourse relation.", "As a relation classifier, we employ a multi-layer perceptron with one hidden layer and ReLU activation.", "When a model does not use context, the model is given arguments of interest only.", "In this setting, we use the sentence pair classification method proposed by Devlin et al. (2019).", "Our proposed method is introduced to a context-using model by initializing its encoder's parameters using our sentence encoder.", "In experiments, we report a difference in performance depending on models used for initialization.", "Input texts are truncated to the maximum input length of 512, which is long enough to hold almost all inputs.", "We train models for up to 20 epochs.", "At the end of each epoch, we compute the performance for the development data and adopt the model with the best performance.", "If the performance does not improve for five epochs, we stop the training.", "We use the Adam optimizer with a learning rate of 2e-5, 1 = 0 .", "9 , 2 = 0 .", "999 .", "We update all the parameters in models, i.e., pretrained sentence encoders are fine-tuned to solve discourse relation analysis.", "Table 1 shows the result for PDTB 3.0.", "The evaluation metric is accuracy.", "The highest performance was achieved by the proposed method.", "To our knowledge, this is the state-of-the-art performance among models with the same parameter size as BERTBASE .", "The model that optimized the generative objective was inferior not only to the proposed method but also to vanilla RoBERTa with context.", "Table 2 shows the result for KWDLC.", "The evaluation metrics are accuracy and micro-averaged precision, recall, and F1 3 .", "The highest performance was again achieved by the proposed method.", "The decrease in performance by optimizing the generative objective is consistent with the experimental results on PDTB 3.0.", "We show an example of discourse relation analysis in KWDLC.", "(1) (cid:104) Arg1 (cid:105) (cid:104) Arg2", "(cid:105) (cid:104) Arg1 I want to go to a government-managed park in Niigata Prefecture for an overnight visit, (cid:105) 
⟨Arg2 I came up with that.⟩", "Label: NORELATION.", "Arguments are enclosed in ⟨ and ⟩.", "The models except ours erroneously predicted the discourse relation of PURPOSE between Arg1 and Arg2.", "This is probably because the Japanese postpositional particle used here can be a discourse marker of PURPOSE.", "For example, if Arg2 were I started packing, the prediction would be correct.", "However, in this case, the postpositional particle is used to construct a sentential complement.", "That is, Arg1 is the object of Arg2.", "It is not possible to distinguish between the two usages from the surface form.", "Our model correctly predicted the discourse relation of NORELATION, which implies that our method understood that Arg1 is a sentential complement.", "(2) ⟨Arg1 I was able to launch the website that I had planned for a while,⟩ ⟨Arg2 I'm happy.⟩", "Label: CAUSE/REASON.", "While most models predicted the discourse relation of NORELATION between Arg1 and Arg2, the proposed model correctly recognized the discourse relation of CAUSE/REASON.", "3 As examples with the discourse relation of NORELATION account for more than 80% of the dataset, precision, recall, and F1 are calculated without NORELATION examples to make the performance differences intelligible.", "We speculate that the models other than ours failed to understand Arg1 at the level of a happy event occurred.", "Sentence Retrieval.", "To investigate what is learned by our contrastive objective, we performed sentence retrieval based on the similarity between sentence representations.", "For targets, we randomly sampled 500,000 sentences with context from the input texts used for training.", "For the query, we used a sentence with context from a Wikipedia article.", "Computing the sentence representations for the targets and the query, we searched for the closest sentences based on their cosine similarity.", "Table 3 shows an example.", "In addition to the top-ranked sentences, we also picked out some other highly ranked sentences.", "The top two sentences were very similar to the query sentence regarding topic, meaning, and context.", "While the sentences of lower rank had different topics from the query sentence, they all described a positive aspect of an entity and had a similar context, in that an entity is introduced in their preceding sentences.", "We confirmed that almost the same results were obtained in Japanese.", "We leave quantitative evaluation of sentence retrieval for future work.", "We proposed a method to learn contextualized and generalized sentence representations using contrastive self-supervised training.", "Experiments showed that the proposed method improves the performance of discourse relation analysis in both English and Japanese.", "We leave an in-depth analysis of the level of abstraction learned by the proposed method for future work." ]
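The contrastive objective in Eq. (1) above is a standard softmax over dot-product similarities with the positive as the correct candidate. Below is a minimal sketch assuming the encoder has already produced $s_{anc}$, $s_{pos}$, and the $N$ negative representations; the names and shapes are illustrative, not the authors' code.

```python
import torch

def contrastive_loss(s_anc: torch.Tensor, s_pos: torch.Tensor,
                     s_negs: torch.Tensor) -> torch.Tensor:
    # s_anc, s_pos: (d,) representations of the masked target and the target;
    # s_negs: (N, d) representations of the N random replacement sentences.
    candidates = torch.cat([s_pos.unsqueeze(0), s_negs], dim=0)  # the set S in Eq. (1)
    sims = candidates @ s_anc           # dot-product similarities <s, s_anc>
    # -log softmax with index 0 (the positive) as the correct candidate.
    return -torch.log_softmax(sims, dim=0)[0]
```

Because the negatives share the denominator, maximizing the positive term and minimizing the negatives' similarities happen jointly through a single cross-entropy-style loss.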
[ "objective", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective" ]
[ "Question-answer driven Semantic Role Labeling (QA-SRL) was proposed as an attractive open and natural flavour of SRL, potentially attainable from laymen.", "Recently, a large-scale crowdsourced QA-SRL corpus and a trained parser were released.", "Trying to replicate the QA-SRL annotation for new texts, we found that the resulting annotations were lacking in quality, particularly in coverage, making them insufficient for further research and evaluation.", "In this paper, we present an improved crowdsourcing protocol for complex semantic annotation, involving worker selection and training, and a data consolidation phase.", "Applying this protocol to QA-SRL yielded high-quality annotation with drastically higher coverage, producing a new gold evaluation dataset.", "We believe that our annotation protocol and gold standard will facilitate future replicable research of natural semantic annotations.", "Semantic Role Labeling (SRL) provides explicit annotation of predicate-argument relations.", "Common SRL schemes, particularly PropBank (Palmer et al., 2005) and FrameNet (Baker et al., 1998), rely on predefined role inventories and extensive predicate lexicons.", "Consequently, SRL annotation of new texts requires substantial efforts involving expert annotation, and possibly lexicon extension, limiting scalability.", "Aiming to address these limitations, Question-Answer driven Semantic Role Labeling (QA-SRL) (He et al., 2015) labels each predicate-argument relationship with a question-answer pair, where natural language questions represent semantic roles, and answers correspond to arguments (see Table 1).", "This approach follows the colloquial perception of semantic roles as answering questions about the predicate (Who did What to Whom, When, Where and How, with, e.g., Who corresponding to the agent role).", "QA-SRL carries two attractive promises.", "First, using a question-answer format makes the annotation task intuitive and easily attainable by laymen, as it does not depend on linguistic resources (e.g. role lexicons), thus facilitating greater annotation scalability.", "Second, by relying on intuitive human comprehension, these annotations elicit a richer argument set, including valuable implicit semantic arguments not manifested in syntactic structure (highlighted in Table 1).", "The importance of implicit arguments has been recognized in the literature (Cheng and Erk, 2018; Do et al., 2017; Gerber and Chai, 2012), yet they are mostly overlooked by common SRL formalisms and tools.", "Overall, QA-SRL largely subsumes predicate-argument information captured by traditional SRL schemes, which were shown beneficial for complex downstream tasks, such as dialog modeling (Chen et al., 2013), machine comprehension (Wang et al., 2015) and cross-document coreference (Barhom et al., 2019).", "At the same time, it contains richer information, and is easier to understand and col-lect.", "Similarly to SRL, one can utilize QA-SRL both as a source of semantic supervision, in order to achieve better implicit neural NLU models, as done recently by He et al. (2020), as well as an explicit semantic structure for downstream use, e.g. 
for producing Open Information Extraction propositions (Stanovsky and Dagan, 2016).", "1 Indeed, making direct use of QA-SRL role questions might seem more challenging than with categorical semantic roles, as in traditional SRL.", "In practice, however, when a model embeds QA-SRL questions in context, we would expect similar embeddings for semantically similar questions.", "These embeddings may be leveraged downstream in the same way as embeddings of traditional categorical semantic roles.", "Table 1 example sentence: Around 47 people could be arrested, including the councillor.", "Previous attempts to annotate QA-SRL initially involved trained annotators (He et al., 2015) but later resorted to crowdsourcing (Fitzgerald et al., 2018) for scalability.", "Naturally, employing crowd workers is challenging when annotating fairly demanding structures like SRL.", "As Fitzgerald et al. (2018) acknowledge, the main shortcoming of their large-scale dataset is limited recall, which we estimate to be in the lower 70s (see Section 4).", "Unfortunately, such low recall in gold standard datasets hinders proper research and evaluation, undermining the current viability of the QA-SRL paradigm.", "Aiming to enable future QA-SRL research, we present a generic controlled crowdsourcing annotation protocol and apply it to QA-SRL.", "Our process addresses worker quality by performing short yet efficient annotator screening and training.", "To boost coverage, we employ two independent workers per task, while an additional worker resolves inconsistencies, similar to conventional expert annotation.", "These steps combined yield 25% more roles than Fitzgerald et al. (2018), without sacrificing precision and at a comparable cost per verb.", "This gain is especially notable for implicit arguments, which we show in a comparison to PropBank (Palmer et al., 2005).", "Overall, we show that our annotation protocol and dataset are of high quality and coverage, enabling subsequent QA-SRL research.", "To foster such research, including easy production of additional QA-SRL datasets, we release our annotation protocol, software and guidelines along with a high-quality dataset for QA-SRL evaluation (dev and test).", "2 https://github.com/plroit/qasrl-gs", "We also re-evaluate the existing parser (Fitzgerald et al., 2018) against our test set, setting the baseline for future developments.", "Finally, we propose that our systematic and replicable controlled crowdsourcing protocol could also be effective for other complex annotation tasks.", "3 A previous preprint version of this paper can be found at https://arxiv.org/abs/1911.03243 .", "Specifications In QA-SRL, a role question adheres to a 7-slot template, with slots corresponding to a WH-word, the verb, auxiliaries, argument placeholders (SUBJ, OBJ), and prepositions, where some slots are optional (He et al., 2015), as exemplified in Table 2.", "Such a question captures its corresponding semantic role with a natural, easily understood expression.", "All answers to the question are then considered as the set of arguments associated with that role, capturing both traditional explicit arguments and implicit ones.", "Corpora The original 2015 QA-SRL dataset (He et al., 2015) was annotated by hired non-expert workers after completing a short training procedure.", "They annotated 7.8K verbs, reporting an average of 2.4 QA pairs per verb.", "Even though multiple annotators were shown to produce greater coverage, their released dataset was produced by a single annotator per verb.", "In subsequent work, Fitzgerald et
al. (2018) employed untrained crowd workers to construct a large-scale corpus (henceforth 2018) and used it to train a parser.", "In their protocol, a single worker (generator) annotated a set of questions along with their answers.", "Two additional workers (validators) validated each question and, in the valid case, independently annotated their own answers.", "In total, 133K verbs were annotated, with 2.0 QA pairs per verb on average.", "In a subset of the corpus (10%) reserved for parser evaluation, verbs were densely validated by 5 workers (termed the Dense set).", "Yet, adding validators accounts only for precision errors in question annotation, while role coverage relies solely upon the output of the single generator.", "For this reason, both the 2015 and 2018 datasets struggle with coverage.", "4 Fitzgerald et al. (2018) also produced an expanded version of their dataset, incorporating questions that were automatically generated by their parser and then validated by crowd workers.", "While this may achieve higher recall, using model-generated data biases the evaluation with respect to existing models and is not suitable for evaluation datasets.", "For that reason, in our work we consider only the non-expanded version of the Dense set.", "Also, while traditional SRL annotations contain a single authoritative and non-redundant annotation (i.e., a single role and span for each argument), the 2018 dataset provides the raw annotations from all annotators.", "These include many redundant overlapping argument spans, without settling on consolidation procedures to provide a single gold reference, which complicates model evaluation.", "These limitations of the current QA-SRL datasets impede their utility for future research and evaluation.", "Next, we describe our method for creating a viable high-quality QA-SRL dataset.", "Screening and Training We first release a preliminary crowd-wide annotation round, and then contact workers who exhibit reasonable performance.", "They are asked to review our short guidelines, which highlight a few subtle aspects, and then annotate two qualification rounds of 15 predicates each.", "5 Publicly available in our repository.", "Each round is followed by extensive feedback via email, pointing at errors and missed arguments, identified by automatic comparison to expert annotation.", "Total worker effort for the training phase is about 2 hours and is fully compensated, while requiring about half an hour of in-house trainer time per participating worker.", "We trained 30 participants, eventually selecting 11 well-performing ones.", "Annotation We reuse and extend the annotation machinery of Fitzgerald et al. over Amazon's Mechanical Turk.", "First, two workers independently generate questions about a verb and highlight answer spans in the sentence.", "Then, a third worker reviews and consolidates their annotations based on targeted guidelines, producing the gold standard data.", "At this step, the worker validates questions; merges, splits, or modifies answers for the same role; and removes redundant questions.", "Table 3 depicts examples from the consolidation task.", "We monitor the annotation process by sampling (1%) and reviewing.", "6 Notice that while the validator from Fitzgerald et al.
(2018) viewed only the questions of a single generator, our consolidator views two full QA sets, promoting higher coverage.", "Data & Cost We annotated a sample of the Dense evaluation set, comprising 1,000 sentences from each of the Wikinews and Wikipedia domains, equally split into dev and test.", "Annotators are paid 5¢ per predicate for QA generation, with an additional bonus for every question beyond the first two.", "The consolidator is rewarded 5¢ per verb and 3¢ per question.", "Per predicate, on average, our cost is 54.2¢, yielding 2.9 roles, compared to the reported 2.3 valid roles at approximately 51¢ per predicate for the Dense annotation protocol.", "Evaluation in QA-SRL involves, for each verb, aligning its predicted argument spans to a reference set of arguments, and evaluating question equivalence, i.e., whether predicted and gold questions for aligned spans correspond to the same semantic role.", "Since detecting question equivalence is still an open challenge, we propose both unlabeled and labeled evaluation metrics.", "The described procedure is used to evaluate both the crowd workers' annotations (Section 4) and the QA-SRL parser (Section 5).", "Unlabeled Argument Detection (UA) Inspired by the method presented in Fitzgerald et al. (2018), argument spans are matched using a token-based matching criterion of intersection over union (IOU) ≥ 0.5.", "To credit each argument only once, we employ maximal bipartite matching between the two sets of arguments, drawing an edge for each pair that passes the above-mentioned criterion.", "7 The previous approach aligned arguments to roles; we measure argument detection, whereas Fitzgerald et al. (2018) measure role detection.", "The resulting maximal matching determines the true-positive set, while remaining non-aligned arguments become false positives or false negatives.", "Labeled Argument Detection (LA) All aligned arguments from the previous step are inspected for label equivalence, similar to the joint evaluation reported in Fitzgerald et al. (2018).", "There may be many correct questions for a role.", "For example, What was given to someone?", "and What has been given by someone?", "both refer to the same semantic role but diverge in grammatical tense and argument placeholders.", "Aiming to avoid judging non-equivalent roles as equivalent, we propose STRICT-MATCH to be an equivalence on the following template slots: WH, SUBJ, OBJ, as well as on negation, voice, and modality, as extracted from the question.", "8 Presence of factuality-changing modal verbs such as should, might and can.", "Final reported numbers on labeled argument detection rates are based on bipartite-aligned arguments passing STRICT-MATCH.", "As this matching criterion significantly underestimates question equivalence, we later manually assess the actual rate of correct role equivalences.", "Evaluating Redundant Annotations We extend our metric for evaluating manual or automatic redundant annotations, exhibited in the Dense dataset (Section 2) as well as in the output of the Fitzgerald et al. (2018) parser (Section 5).", "To that end, we ignore redundant true positives and collapse false-positive errors (see Appendix for details).", "Inter-Annotator Agreement (IAA) To estimate dataset consistency across different annotations, we measure F1 using our UA metric.", "10 individual worker-vs-worker experiments yield 79.8 F1 agreement over 150 predicates, indicating high consistency across our annotators, in line with agreement rates in other structured semantic annotations, e.g.
Abend and Rappoport (2013).", "Overall consistency of the dataset is assessed by measuring agreement between different consolidated annotations, obtained by disjoint triplets of workers, which achieves an F1 of 84.1, averaged over 4 experiments of 35 predicates each.", "Notably, consolidation boosts agreement, indicating its necessity.", "For LA agreement, the averaged F1 was 67.8; however, it is likely that the drop from UA is mainly due to falsely rejecting semantically equivalent questions under the STRICT-MATCH criterion, given that we found equal LA and UA scores in a manual evaluation of our dataset (see Table 4 below).", "Dataset Assessment and Comparison We assess our gold standard, as well as the recent Dense set, against an integrated expert set of 100 predicates.", "To construct the expert set, we first merged the annotations from the Dense set with our workers' annotations.", "Then, three of the authors blindly (i.e., without knowing the origin of each QA pair) selected, corrected and added annotations, resulting in a high-coverage unbiased expert set.", "We further manually corrected the evaluation decisions, accounting for some automatic evaluation mistakes introduced by the span-matching and question-equivalence criteria.", "As seen in Table 4, our gold set yields comparable precision with drastically higher recall, in line with our 25% higher yield.", "Examining disagreements between our gold set and Dense, we observe that our workers successfully produced more roles, both implicit and explicit.", "To a lesser extent, they split more arguments into independent answers, as emphasized by our guidelines, an issue that was left under-specified in previous annotation guidelines.", "Agreement with PropBank Data It is illuminating to observe the agreement between QA-SRL and PropBank (CoNLL-2009) annotations (Hajič et al., 2009).", "In Table 5, we replicate the experiments in He et al.
(2015, Section 3.4) for both our gold set and theirs, over a sample of 200 sentences from the Wall Street Journal (evaluation is automatic and the metric is similar to our UA).", "We report macro-averaged (over predicates) precision and recall for all roles, including core roles and adjuncts, while considering the PropBank data as the reference set.", "10 Core roles are A0-A5 in PropBank (recall) and QAs having what and who WH-words in QA-SRL (precision).", "Our recall of PropBank roles is notably high, reconfirming the coverage obtained by our annotation protocol.", "The measured precision with respect to PropBank is low for adjuncts, but this is due to the fact that QA-SRL captures many correct implicit arguments, which fall outside PropBank's scope (where arguments are directly syntactically linked to the predicate).", "9 The UA and LA measures ended up equal for our dataset after manual inspection, since we found that all correctly classified unlabeled arguments were annotated with a correct question role label.", "To examine this, we analyzed 100 arguments in our dataset not found in PropBank (false positives).", "We found that only 32 were due to wrong or incomplete QA annotations, while most others were valid implicit arguments, stressing QA-SRL's advantage in capturing those inherently.", "Extrapolating from this analysis estimates our true precision (on all roles) to be about 91%, consistent with the 88% precision in Table 4, while yielding about 15% more valid arguments than PropBank (mostly implicit).", "Compared with 2015, our QA-SRL gold set yielded 1,593 QA pairs (of which 604 are adjuncts), while theirs yielded 1,315 QAs (336 adjuncts).", "Overall, the comparison to PropBank reinforces the quality of our gold dataset and shows its better coverage relative to the 2015 dataset.", "We evaluate the parser from Fitzgerald et al.
(2018) on our dataset, providing a baseline for future work.", "As we previously mentioned, unlike typical SRL systems, the parser outputs overlapping arguments, often with redundant roles (Table 7).", "Hence, we employ our metric variant for evaluating redundant annotations.", "Results are reported in Table 6, demonstrating reasonable performance along with substantial room for improvement, especially with respect to coverage.", "As expected, the parser's recall against our gold set is substantially lower than the 84.2 recall reported in Fitzgerald et al. (2018) against Dense, due to the limited recall of Dense relative to our gold set.", "Table 6: Automatic parser evaluation against our test set, complemented by automatic and manual evaluations on the Wikinews part of the dev set (manual evaluation is over 50 sampled predicates). Test (automatic): UA P 87.1 / R 50.2 / F1 63.7; LA P 67.8 / R 39.1 / F1 49.6. Dev Wikinews (automatic): UA P 86.6 / R 58.8 / F1 70.1; LA P 65.0 / R 44.2 / F1 52.6. Dev Wikinews (manual): UA P 87.8 / R 66.5 / F1 75.5; LA P 83.9 / R 64.3 / F1 72.8.", "Error Analysis Through manual evaluation of 50 sampled predicates, we detect correctly predicted arguments and questions that were rejected by the IOU and STRICT-MATCH criteria.", "Based on this inspection, out of the 154 gold roles (128 explicit and 26 implicit), the parser misses 23%, covering 82% of the explicit roles but only half of the implicit ones.", "Applying our proposed controlled crowdsourcing protocol to QA-SRL successfully attains truly scalable high-quality annotation by laymen, facilitating future research of this paradigm.", "Exploiting the open nature of the QA-SRL schema, our non-expert annotators produce rich argument sets with many valuable implicit arguments.", "Indeed, thanks to effective and practical training over the crowdsourcing platform, our workers' annotation quality, and particularly its coverage, is on par with expert annotation.", "We release our data, software and protocol, enabling easy future dataset production and evaluation for QA-SRL, as well as possible extensions of the QA-based semantic annotation paradigm.", "Finally, we suggest that our simple yet rigorous controlled crowdsourcing protocol would be effective for other challenging annotation tasks, which often prove to be a hurdle for research projects.", "This work was supported in part by an Intel Labs grant, the Israel Science Foundation grant 1951/17 and the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1)." ]
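The UA metric described above combines a token IOU ≥ 0.5 criterion with maximal bipartite matching. The sketch below illustrates one way to compute it; the (start, end) token-span format, the function names, and the use of networkx are our assumptions, not the authors' released evaluation code.

```python
import networkx as nx

def token_iou(a, b):
    # a, b: (start, end) token offsets of two argument spans, end exclusive.
    s1, s2 = set(range(*a)), set(range(*b))
    return len(s1 & s2) / len(s1 | s2)

def ua_counts(predicted, gold, threshold=0.5):
    # Build a bipartite graph with an edge for every span pair passing the IOU test.
    g = nx.Graph()
    g.add_nodes_from(("p", i) for i in range(len(predicted)))
    g.add_nodes_from(("g", j) for j in range(len(gold)))
    for i, p in enumerate(predicted):
        for j, q in enumerate(gold):
            if token_iou(p, q) >= threshold:
                g.add_edge(("p", i), ("g", j))
    # Maximal matching credits each argument at most once.
    matching = nx.max_weight_matching(g, maxcardinality=True)
    tp = len(matching)
    fp = len(predicted) - tp   # unmatched predictions
    fn = len(gold) - tp        # unmatched gold arguments
    return tp, fp, fn
```

Precision and recall then follow as tp / (tp + fp) and tp / (tp + fn), aggregated over verbs.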
[ "abstain", "abstain", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "objective", "method", "method", "abstain", "result", "result", "abstain", "method", "objective", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "objective", "method", "abstain", "abstain", "method", "other" ]
[ "Historical linguists have identified regularities in the process of historic sound change.", "The comparative method utilizes those regularities to reconstruct proto-words based on observed forms in daughter languages.", "Can this process be efficiently automated?", "We address the task of proto-word reconstruction, in which the model is exposed to cognates in contemporary daughter languages, and has to predict the proto word in the ancestor language.", "We provide a novel dataset for this task, encompassing over 8,000 comparative entries, and show that neural sequence models outperform conventional methods applied to this task so far.", "Error analysis reveals a variability in the ability of neural model to capture different phonological changes, correlating with the complexity of the changes.", "Analysis of learned embeddings reveals the models learn phonologically meaningful generalizations, corresponding to well-attested phonological shifts documented by historical linguistics.", "Historical linguists seek to identify and explain the various ways in which languages change through time.", "Research in historical linguistics has revealed that groups of languages (language families) can often be traced into a common, ancestral language, a proto-language.", "Large-scale lexical comparison of words across different languages enables linguists to identify cognates : words sharing a common proto-word.", "Comparing cognates makes it possible to identify rules of phonetic historic change, and by back-tracing those rules one can identify the form of the proto-word, which is often not documented.", "That methodology is called the comparative method (Anttila, 1989), and is the main tool used to reconstruct the lexicon and phonology of extinct languages.", "Inferring the form of proto-words from existing cogEqual contribution nates in daughter languages is possible since historical sound changes within a language family are not random.", "Rather, the phonological change is characterized by regularities that are the result of constraints imposed by the human articulatory and cognitive faculties (Millar, 2013).", "For example, we can find such regular changecommonly called systematic correspondenceby looking at the evolution of the first phoneme of Latin's word for sky: 1 Figure 1: the evolution of Latin word for sky is several Romance languages.", "The Spanish word's first sound is [T] , while the Italian word begins with [tS] , the French word with [s] , Romansh with [ts] and Sardinian with [k] .", "This pattern is systematic, and will be found throughout the languages.", "Working this way, historical linguists reconstruct words in the protolanguage from existing cognates in the daughter languages, and determine how words in the protolanguage may have sounded.", "To what extent can a machine-learning model learn to reconstruct proto-words from examples in this way?", "And what generalizations of phonetic change will it learn?", "We focus on the task of proto-word reconstruction: the model is trained on sets of cognates and their known proto-word, and is then tasked with predicting the proto-word for an unseen set of cognates.", "Our study concentrate on the romance language family 2 and the model is trained to reconstruct the Latin origin.", "We show 1 The words as transcribed with International Phonetic Alphabet (IPA) characters.", "2 All the languages that derived from Latin.", "that a recurrent neural-network model can learn to perform this task well (outperforming previous at-tempts).", "3 
that a recurrent neural-network model can learn to perform this task well, outperforming previous attempts.", "We note that the role of the ML model is easier than that of the historical linguist, as it is trained on sets of words that it took the historical linguistics discipline a considerable effort to acquire.", "More interesting than the raw performance numbers are the learned generalizations and error patterns.", "The Romance languages are widely studied (Ernst (2003); Ledgeway and Maiden (2016); Holtus et al. (1989), among others), and their phonological evolution from Latin is well mapped.", "The existence of this comprehensive knowledge allows exploring to what extent neural models internalize and capture the documented rules of language change, and where they deviate from them.", "We provide an extensive error analysis, relating error patterns to knowledge in historical linguistics.", "This is often not possible in common NLP tasks, such as parsing or semantic inference, in which the rules governing linguistic phenomena, or even the suitable framework to describe them, are still in dispute among linguists.", "Contributions Inspection of existing datasets of cognates in Romance languages has revealed inherent problems.", "We thus have collected a new comprehensive dataset for performing the reconstruction task (Section 4).", "Besides the dataset, our main contribution is the extensive analysis of what is being captured by the models, on both the orthographic and phonetic versions of the dataset (Section 6).", "We find that the error patterns are not random, and that they correlate with the relative opacity of the historic change.", "These patterns were divided into different categories, each one motivated by a sound phonological explanation.", "Moreover, in order to further evaluate the learning of rules of phonetic change, we evaluated models on a synthetic dataset (Section 6.3), showing that the model is able to correctly capture several phonological change rules.", "Finally, we analyze the learned inner representations of the model, and show that it learns phonologically meaningful properties of phonemes (Section 6.4) and attributes different importance to different daughter languages (Section 6.5).", "The related task of cognate detection has been extensively studied.", "In this task, a set of cognates should be extracted from word lists in different languages.", "Most effort in machine learning approaches to this task has been focused on distance-based methods, which quantify the distance (according to some metric), or the similarity, between a given candidate pair of cognates.", "The similarity can be either static (e.g., Levenshtein distance) or learned.", "Once the metric is established, classification can be performed either based on a hard decision (words below a certain threshold are considered cognates) or by learning a classifier over the distance measures and other features (Kondrak, 2001; Mann and Yarowsky, 2001; Inkpen et al., 2005; Ciobanu and Dinu, 2014a; List et al., 2016); Mulloni and Pekar (2006) have evaluated an alternative approach, in which explicit rules of transformation are derived based on edit operations.", "See Rama et al.
(2018) for a recent evaluation of the performance of several cognate detection algorithms.", "Several studies have gone beyond the stage of cognate extraction, and used the resulting lists of cognates to reconstruct the lexicon of proto-languages.", "Most studies in this direction borrowed techniques from computational phylogeny, drawing a parallel between the hypothesized branching of (latent) proto-words into their (observed) current forms and the gradual change of genes during evolution.", "Bouchard-Cote et al. (2007) applied such a model to the development of the Romance languages, based on a dataset composed of aligned translations.", "Bouchard-Cote et al. (2009, 2013) used an extensive dataset of Austronesian languages and their reconstructed proto-languages, and built a parameterized graphical model which models the probability of a phonetic change between a word and its ancestral form; the probability is branch-dependent, allowing for the learning of different trends of change across lineages.", "While achieving impressive performance, even without requiring a cognate list as input, their model is based on a given phylogeny tree that accurately represents the development of the languages in question.", "Wu and Yarowsky (2018) have automatically constructed cognate datasets for several languages, including Romance languages, and used a character-level NMT system to complete missing entries (not necessarily the proto-form).", "Several works studied the induction of multilingual dictionaries from partial data in related languages.", "Wu et al. (2020) reconstruct cognates in Austronesian languages (where the proto-language is not attested).", "Lewis et al. (2020) employ a mixture-of-experts approach for lexical translation induction, combining neural and probabilistic methods, and Nishimura et al. 
(2020) translate from a multi-source input that contains partial translations into different languages, concatenated.", "Finally, Ciobanu and Dinu (2018) have applied a CRF model with alignment to a dataset of Romance cognates, created from automatic alignment of translations (Ciobanu and Dinu, 2014b).", "The researchers also applied RNNs to the same dataset, but reported negative results.", "Our proto-word reconstruction task is as follows: the training set is composed of pairs (x_i, y_i), where each x_i = c_i^{ℓ_1}, ..., c_i^{ℓ_n} is a set of cognate words, each tagged with a language ℓ_j, and y_i is the proto-word (Latin word) of that set.", "We consider an orthographic task, where the cognates and proto-words are spelled out as written.", "As the orthography is often arbitrary and more conservative than spoken language, we also consider a phonetic task, in which the cognates and proto-words are represented as their phonetic transcriptions into IPA.", "An example of a training instance (x, y) for the orthographic task is: x = lapte_RM, lait_FR, latte_IT, leche_SP, leite_PT; y = lactem; and for the phonetic task it is: x = lapte_RM, lE_FR, latte_IT, letSe_SP, l5jt1_PT; y = laktEm. A cognate in one of the languages may be missing, in which case we represent it by a dash.", "Here, we are missing the Italian and Romanian cognates: x = -_RM, tKavaj_FR, -_IT, tRabaxo_SP, tR5BaLu_PT; y = trIpalEm. At test time, we are given a set of cognates and are asked to predict their proto-word.", "The different experiments described in the paper were performed on a large dataset of our creation, which contained cognates and their proto-words in both orthographic and phonetic (IPA) forms.", "The dataset's departure point is Ciobanu and Dinu (2014b), which consists of 3,218 complete cognate sets in six different languages: French, Italian, Spanish, Portuguese, Romanian and Latin.", "We thank Ciobanu and Dinu for sharing their data with us.", "We augmented the dataset's items with a freely available resource, Wiktionary, whose data were manually checked against Diez and Donkin (1864) to ensure their etymological relatedness with the Latin source.", "The entries were transcribed into IPA using the transcription module of the eSpeak library (https://github.com/espeak-ng/espeak-ng), which offers transcriptions for all languages in our dataset, including Latin.", "The final dataset contains 8,799 cognate sets (not all of them complete), which were randomly split into train, evaluation and test sets: 7,038 cognate sets (80%) were used for training, 703 (8%) for evaluation and 1,055 (12%) for testing.", "Overall, the dataset contains 41,563 distinct words for a total of 83,126 words counting both the orthographic and the phonetic datasets.", "Vowel lengths were found to be difficult to recover (see Table 1), hence we created the following variations of the dataset: with and without vowel length (for both the orthographic and phonetic datasets), and without a tense-lax contrast (for the phonetic dataset); see Section 6 for further discussion.", "A detailed description of the dataset collection process is available in Appendix A.1.", "We make our additions to the dataset of Ciobanu and Dinu (2014b) publicly available (https://github.com/shauli-ravfogel/Latin-Reconstruction-NAACL).", "Our proto-word reconstruction setup follows an encoder-decoder with attention architecture, similar to contemporary neural machine translation (NMT) systems (Bahdanau et al., 2015; Cho et al., 2014).", "We use a standard character-based encoder-decoder architecture with attention (Bahdanau et al., 2015).", "Both 
encoder and decoder are GRU networks with 150 cells.", "The encoder reads the forms of the words in the daughter languages, and outputs a contextualized representation of each character.", "At each decoding step, the decoder attends to the encoder's representations via dot-product attention.", "The output of the attention is then fed into an MLP with 200 hidden units, which outputs the next Latin character to generate.", "Input representation: Each character (a letter in the orthographic case, and a phoneme in the phonetic case) is represented by an embedding vector", "of size 100.", "While all Romance languages are orthographically similar, the same letters represent different sounds, and thus convey different kinds of information for the task of Latin reconstruction.", "A possible approach would encode each language's characters using a unique embedding table.", "We instead share the character embedding table across all languages (including Latin), but concatenate to each character vector also a language-embedding vector.", "The final representation of a character c in language ℓ is then WE[c] + UE[ℓ], where E is a shared embedding matrix, c is a character id, ℓ is a language id, and W and U are linear projection layers.", "Our main quantitative metric for evaluation is the edit distance between the reconstructed word and the gold Latin word.", "We use the standard edit distance with equal weight of 1 for deletion, insertion and substitution.", "We report test set average edit distance and average normalized edit distance (divided by word length), as well as the percentage of instances with at most k edit operations between the reconstruction and the gold, for k = 0 to 4.", "Table 1 summarizes our main quantitative results.", "The settings 'Orthographic, added vowel lengths' and 'IPA, added vowel lengths' refer to variations of the datasets that include explicit marking of vowel length in Latin words, marked by <:> after long vowels.", "The model's performance on the orthographic dataset demonstrates a substantial improvement over previously reported results.", "Our method achieved an average edit distance of 0.65, an average normalized edit distance of 0.064, and a 64.1% complete reconstruction rate (edit distance of 0).", "These numbers compare favorably with the edit distance of 1.07, normalized edit distance of 0.13 and 50% complete reconstruction reported by Ciobanu and Dinu (2018).", "We note, however, that as our method is different both in the training corpus and in the type of model we employ, it is not clear whether this improvement should be attributed to the quality of the data, to the model, or to both of them.", "The performance on the phonetic dataset was lower than on the orthographic one: on the phonetic dataset the average edit distance was 1.022, and the average normalized edit distance 0.1, with a 50.0% complete reconstruction rate.", "This disparity can be explained at least partially by a peculiarity of the phonetic dataset: it implicitly encodes vowel length, which was neutralized in the orthographic dataset.", "The reason for this difference is that length contrast in Latin co-occurred with quality differences: short vowels tended to be more open than their long counterparts, a contrast also called tense-lax (Allen and Allen, 1989).", "This contrast is not present in Latin orthography, but it appears in its phonetic 
transcription.", "This results in a noticeable gap between the results of the orthographic dataset with vowel lengths and without vowel lenghts (0.064 average normalized edit distance vs. 0.119), while the differences between the phonetic IPA dataset with vowel lenghts and without vowels lengths are much smaller.", "When the contrast tense-lax is manually neutralized 8 , the performances achieved are similar to the ones on the orthographic dataset (as it is possible to see from the performances on IPA, no contrast, whose Latin entries do not contain a tense-lax 7 When we train a smaller version of our model (75-dimensional GRU) on the original dataset of Ciobanu and Dinu (2014b) we achieve average edit distance of 0.881, average normalized edit distance of 0.103, and complete reconstruction rate of 59.1%.", "Training a similar model on their dataset after cleaning resulted in average edit distance of 0.612, average normalized edit distance of 0.062 and complete reconstruction rate of 68.8%.", "8 We achieved that by respectively changing the characters < U > , < O > , < I > , < E > to < u > , < o > , < i > , < e > in the Latin words Error type Orthographic Phonetic High-mid 18% 8% Deletion 14% 6% Consonant 13% 15% Cluster 12% 3% Morphology 11% 10% Vowel 7% 8% Length 26% Orthography 5% Other 20% 24% Table 2: Error type distribution based on 650 orthographic and 650 phonological errors.", "The following subsections focus on the model performances on the orthographic and phonetic datasets without explicit vowel length marking.", "A thorough analysis of both datasets reveals that the model's errors are not arbitrary, but rather tend to correspond to one of a few well-defined linguistic phenomena characterizing the evolution of Latin to its daughter languages.", "From an analysis of about 1300 errors, equally divided between the orthographic and the phonetic datasets, we find that 80% of the errors of the model on the orthographic dataset, and 75% on the phonetic one can be grouped into one of the following groups: high-mid vowel alternations, segment deletion, segment changes, cluster changes, morphological changes and other vowel changes.", "Additionally, one error category is unique to the phonetic dataset, tense-lax errors, and one is unique to the orthographic dataset, orthography errors.", "Table 2 summarizes the results, and Figure 2 visualizes the vowels error patterns on the phonetic dataset.", "We briefly discuss each of these groups.", "9 High-mid alternation.", "The largest number of errors on the orthographic dataset, 18%, can be attributed to confusion between high and mid-high vowels (correspondingly < i > , < u > and < e > , < o > ), as shown by the reconstruction < pescarium > instead of the Latin < piscarium > (alternation between < e > and < i > ).", "That error is much rarer in the phonetic dataset, accounting only for 8% of all the errors.", "The reason of this error can be attributed to the origin of the mid-vowels in the daughter languages: while Latin 9 The orthographic characters will be displayed between two angle brackets, while phonetic characters between two square brackets.", "long vowels [i:] , [e:] , [o:] and [u:] , always evolved into [i] , [e] , [o] and [u] in the daughter languages (with minor changes related to syllable structure), Latin short vowels [I] , [E] , [O] and [U] are not deterministically mapped into corresponding vowels in the daughter languages: Latin [I] and [E] both usually became Romance [e] (with alternations related to syllable structure, as 
diphthongization to [je]), while Latin [O] and [U] have different reflexes in the daughter languages, as [u], [o], [O], [] or as diphthongs.", "Because of this complex evolution, which merges different Latin phonemes into the same one in the daughter languages, the model is unable to unequivocally predict the Latin vowel.", "Nonetheless, it seems that the tense-lax contrast present in the phonetic dataset eases the task of distinguishing the different phonemes, and enables the network to reconstruct their origin more often.", "Segment deletion.", "Examples of these errors are the reconstruction of <aspargum> instead of Latin <asparagum>, and the reconstruction of [abIlItatEm] instead of Latin [habIlItatEm].", "During the evolution from Latin to the Romance languages, unstressed syllables tended to be dropped.", "This phenomenon was not systematic, and occurred in different ways among and within the languages.", "Such a process could affect either whole syllables (consonant + vowel) or only the vowel, creating new consonant clusters.", "Because of the erratic nature of this process, it seems that the network struggles with the exact reconstruction of segments eliminated in the daughter languages.", "A special kind of deletion is that of the consonant [h].", "This consonant did not survive in any Romance language (although it may be represented orthographically), and hence the network often does not reconstruct it.", "Segment changes.", "This category encompasses errors in the reconstruction of consonants, such as voicing changes (reconstructing <faculdadem> vs. Latin <facultatem>), assimilation ([wessarE] vs. [weksarE]) and gemination ([agrEgatIonEm] vs. [aggrEgatIonEm]).", "All these errors reflect processes that took place in all of the daughter languages, which obscure the original form of the proto-word.", "Cluster changes.", "These are changes that occur with two contiguous consonants.", "Consider, for example, the reconstruction of [rEatIonEm] instead of Latin [rEaktIonEm], and of <sennorem> instead of Latin <seniorem>.", "The former is an instance of cluster simplification, while the latter is an instance of cluster palatalization.", "In many of the daughter languages clusters of two different sounds underwent simplification, either by the dropping of one of the sounds or the assimilation of one of them.", "Palatalization is the process by which certain sounds come to be pronounced closer to the palate, usually because of an adjacent front vowel.", "This change occurred in all Romance languages, even though its orthographic representation may vary among them.", "Morphological changes.", "Latin had a highly developed morphology, with several classes of special conjugations and irregular forms.", "The network struggles to correctly reconstruct irregular forms, as these forms were mostly lost in the daughter languages.", "An instance of such irregular verbs is <praeferre>, reconstructed as <praeferire> by the network.", "Moreover, other special morphological classes, such as Latin neuters, tend to be reconstructed as more common forms.", "Another interesting class of errors is change of morphological category: some nouns have suffixes reminiscent of those of verbs, and hence are wrongly reconstructed as such.", "A separate case is that of Greek words: Latin contained several Greek loanwords that conserved their original morphology, different from the Latin one.", "Since these peculiarities were, for the most part, not retained in the daughter languages, the 
network reconstructs them with normal Latin suffixes.", "For example, the Greek [syntaksIs] was reconstructed as [syntaksEm], with the normal Latin suffix.", "Other vowel changes.", "Latin contained several diphthongs, among them the diphthongs [aI] and [OI].", "These sounds did not survive in any of the daughter languages (although in some rare cases they may be represented in the orthography), and both changed into [e] in the different Romance languages.", "This led to reconstruction errors such as reconstructing <egrum> instead of Latin <aegrum>.", "Some changes also occurred with the vowel [a], which was reconstructed as a different vowel.", "Greek orthography.", "Some Latin words of Greek origin retained orthographic conventions alien to Latin, such as the use of <y>, <ph>, <th>, <rh>, etc.", "These conventions were only partially retained in the daughter languages, which creates some inconsistencies in their reconstruction by the network.", "Tense-lax alternation.", "Figure 2: Phonological mistakes resulting from alternations between vowels, on the phonetic dataset.", "The numbers signify the number of errors, excluding singleton errors.", "This is the largest category found in the network's errors on the phonetic dataset, accounting for up to 26% of all errors.", "As noted previously, the tense-lax contrast reflects vowel length in Latin, which is not entirely predictable based on the daughter languages.", "The network tends to confuse the lax and the tense vowels.", "Figure 2 shows clearly that the network's errors are internally consistent and not random: all the vowel errors fit neatly in one of the aforementioned categories, while other possible errors do not occur.", "Orthographic vs. Phonetic. Importantly, the phonetic and orthographic tasks differ in their error distributions: while the network's performance on the orthographic task displays many syllable changes (changes that alter the structure of the syllable, mostly changes in consonant clusters and deletion of segments), on the phonetic task the model tends to retain syllable structure, but performs more segment-related errors (i.e., changing a specific vowel or consonant for another one).", "The IPA results contained more idiosyncratic errors that could not be categorized into one of the main categories.", "Such errors tended to occur when the network had only one or two cognates from the daughter languages.", "Even though the orthographic results also exhibited poorer reconstructions in these cases, it seems that the IPA results were even more affected by such singular words, leading to more erratic reconstructions.", "This section will focus on the phonetic dataset.", "A closer inspection of the errors made by the model, and of those that do not occur in the data, can shed light on the processes of phonological change learnt by the model.", "We will first focus on the vowels.", "The Latin vowel [a] is quite resilient to changes, and most of the daughter languages retain it without change (only in French and Romanian do some phonological changes occur, in certain phonological environments).", "Indeed, the network makes almost no mistakes in recovering it, apart from some isolated cases that derive from insufficient cognates in the daughter languages.", "The network also makes virtually no errors regarding the reconstruction of vowel backness; here too, the few error cases are caused by the paucity of cognates and by assimilation processes in the daughter languages that make the Latin source 
opaque (metaphony processes).", "All in all, the network correctly learns the phonological changes that occurred in Latin vowels, and the main errors are a result of changes that cannot be fully reverted from the daughter languages.", "The model learnt the mapping of consonants between Latin and its daughter languages well, and vowel reconstruction errors are considerably more prevalent.", "Focusing on one type of error, palatalization, shows that the network failed to reconstruct the original consonant in opaque contexts, that is, when phonological cues crucial for the right reconstruction were lacking.", "Specifically, the network confused the consonants [t] and [k] in the Latin reconstruction, since they palatalize to the same segments in Spanish and French.", "Without the other daughter languages, it is impossible to correctly reconstruct the original sound in Latin.", "Finally, the network correctly generalized the occurrence of nasals in Latin clusters.", "Latin nasals tended to assimilate to the place of articulation of the adjacent consonant, deriving clusters such as [Nk], [mp] and [nt].", "When the network deleted a consonant in a cluster containing a nasal, or changed a consonant adjacent to a nasal, the nasal consonant always changed to match the place of articulation of the following consonant.", "Hence, by deleting [k] in the cluster [Nkt], the network reconstructs [nt].", "Similarly, by changing [p] to [t] in the cluster [mp], the nasal consonant changes accordingly: [nt].", "To what extent did the model learn known rules of phonetic change?", "The evolution of the Romance languages is well studied, and linguists have documented the set of phonological transformations that occurred between Latin and its daughter languages.", "We collected 33 of these phonological change rules, and used them to create a synthetic test set, containing syllable examples, each focusing on a different phonological change.", "An example of a row in this dataset, corresponding to the rule of change of word-initial Latin [j], is: x = Za_RM, Za_FR, dZa_IT, xa_SP, Za_PT; y = ja. Since the model was trained on complete words, isolated syllables tended to be unnatural for the network, and the output often contained additional consonants (usually morphological endings).", "When evaluating the model output we focus on the specific phonemes involved in the phonological change, and we ignore additional phonological material.", "Results: The complete list of synthetic examples and predictions is available in Table 3.", "The network correctly predicted 22 out of the 33 phonological rules (66.67% of the changes).", "The results are compatible with the results of the main reconstruction experiment: in both experiments, the network correctly reconstructed phonemes retained with little or no change in all languages (e.g., 
[a] in different phonological environments).", "Another class of phonemes correctly reconstructed in both cases is those which changed in a predictable way in each one of the daughter languages.", "Thus, [w] was correctly reconstructed since it predictably changed to [v] in all the daughter languages (apart from Spanish, which merged it with [b]).", "Phonemes that tended to change differently, but consistently, were also faithfully recovered: even though Latin [k] tended to change differently depending on the daughter language ([s] in French and Portuguese, [T] in Spanish and [tS] in Italian and Romanian), it was reconstructed correctly because of the consistency of the change in each daughter language.", "The phonemes wrongly reconstructed tended to be those whose phonological change was opaque.", "The opaqueness of their change can be ascribed to the fact that they were neutralized in the daughter languages, making it impossible to recover them without additional information.", "Relevant to this case are mostly vowels and diphthongs, such as Latin [e] and [I], which both became [e] in all the different daughter languages (with variants influenced by the phonological environment).", "Does training on proto-word reconstructions implicitly encourage the model to acquire phonologically meaningful representations?", "We visualize the representations learned by the network on the phonetic task by performing hierarchical clustering on the character embedding vectors, using the sklearn (Pedregosa et al., 2011) implementation of the Ward variance minimization algorithm (Ward Jr, 1963).", "Here we will briefly discuss the learned French phoneme representations (Figure 3).", "For all other languages, see Appendix 5.", "As can be seen, the primary division that the network performs is between vowels and consonants, displayed on two different branches of the tree.", "On a lower level other phonologically motivated groupings are found: the network tends to place under the same node pairs of voiced and unvoiced consonants (as [S] and [Z], [d] and [t]), allophones ([] and []) or phonemes of the same category (as the glides [j] and [w]).", "To conclude, the results demonstrate the learning of a phonologically meaningful taxonomy of phonemes, without explicit supervision.", "Since different languages can diverge to a varying extent from their proto-language, we hypothesize that the 5 daughter languages we use in this work would be of different importance for the model.", "To test this hypothesis, we inspect the learned attention weights.", "We focus on the most attended input character at each time step (the character having the largest attention weight) and count the number of times each of the 5 input languages is the most attended language, as a function of the location in the output and of the identity of the Latin character produced in that time step.", "We normalize the count with respect to time step, letter frequency and language frequency in the corpus.", "Results: The results for the phonetic and orthographic tasks are presented in Figure 4.", "In both cases, Italian is the most attended language.", "There are some differences between the settings, however.", "For the orthographic task, the network focuses noticeably more on French than in the phonetic task.", "This tendency can be attributed to the very conservative orthography of French, which masks the phonological innovations that occurred in the language.", "Indeed, the network focuses exclusively on French for the reconstruction of the 
characters <h> and <y>, which are consistently represented only in French orthography, disappearing from the written form of the other Romance languages.", "The comparison to the attention on the phonetic dataset shows that the network tends to actually ignore French, favoring other sources instead.", "Similarly, in the orthographic dataset, French is favored in the initial positions, a tendency that disappears in the phonetic dataset.", "Finally, an interesting trend in the phonetic dataset is a tendency to attend to Romanian at the initial positions and to Portuguese at later ones.", "In this work, we introduced a new dataset for the task of proto-word reconstruction in the Romance language family, and used it to evaluate the ability of neural networks to capture the regularities of historic language change.", "We have shown that neural methods outperform previously suggested models for this task.", "Analysis of the linguistic generalizations the model acquires during training demonstrated that the mistakes are related to the complexity of the phonetic change.", "A controlled experiment on a set of rules for phonetic alternations between Latin and its daughter languages demonstrated that the model internalizes some of the systematic processes that Latin underwent during the evolution of the Romance languages.", "Visualizing the learned phoneme-embedding vectors revealed a hierarchical division of phonemes that reflects phonological realities, and inspection of attention patterns demonstrated that the model attributes different importance to different languages, in a position-dependent manner.", "While the task examined in this paper is commonly called proto-word reconstruction, in practice the task the model faces is considerably less challenging than the work of historical linguists, as the model is trained in a supervised setting.", "A future line of work we suggest is applying neural models to the end task of proto-word reconstruction, without relying on cognate lists, in a way that would more naturally model the historical linguistic methodology.", "We thank Arya McCarthy for pointing out relevant references.", "This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT)." ]
[ "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "result", "abstain", "result", "result", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "method", "other", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "method", "method", "other", "other" ]
[ "Transformer has achieved great success in the NLP field by composing various advanced models like BERT and GPT.", "However, Transformer and its existing variants may not be optimal in capturing token distances because the position or distance embeddings used by these methods usually cannot keep the precise information of real distances, which may not be beneficial for modeling the orders and relations of contexts.", "In this paper, we propose DA-Transformer, which is a distance-aware Transformer that can exploit the real distance.", "We propose to incorporate the real distances between tokens to re-scale the raw self-attention weights, which are computed by the relevance between attention query and key.", "Concretely, in different self-attention heads the relative distance between each pair of tokens is weighted by different learnable parameters, which control the different preferences on longor short-term information of these heads.", "Since the raw weighted real distances may not be optimal for adjusting self-attention weights, we propose a learnable sigmoid function to map them into re-scaled coefficients that have proper ranges.", "We first clip the raw self-attention weights via the ReLU function to keep non-negativity and introduce sparsity, and then multiply them with the rescaled coefficients to encode real distance information into self-attention.", "Extensive experiments on five benchmark datasets show that DA-Transformer can effectively improve the performance of many tasks and outperform the vanilla Transformer and its several variants.", "Transformer (Vaswani et al., 2017) has achieved huge success in the NLP field in recent years (Kobayashi et al., 2020).", "It serves as the basic architecture of various state-of-the-art models like BERT (Devlin et al., 2019) and GPT (Rad-ford et al., 2019), and boosts the performance of many tasks like text generation (Koncel-Kedziorski et al., 2019), machine translation (Vaswani et al., 2017), and reading comprehension (Xu et al., 2019).", "Thus, the improvement on the Transformer architecture would be beneficial for many NLP-related fields (Wu et al., 2020a).", "A core component of Transformer is multi-head self-attention, which is responsible for modeling the relations between contexts (Yang et al., 2019; Guo et al., 2019).", "However, self-attention is position-agnostic since it does not distinguish the orders of inputs.", "Thus, in the vanilla Transformer, position encoding is applied to the input to help Transformer capture position information.", "However, in contrast to recurrent and convolutional neural networks, it is difficult for vanilla Transformers to be aware of the token distances (Shaw et al., 2018), which are usually important cues for context modeling.", "Thus, several works explored to incorporate token distance information into Transformer.", "For example, Shaw et al. (2018) proposed to combine the embeddings of relative positions with attention key and value in the self-attention network.", "They restricted the maximum relative distance to only keep the precise relative position information within a certain distance.", "Yan et al. 
(2019) proposed a variant of the self-attention network for named entity recognition, which incorporates sinusoidal embeddings of relative position to compute attention weights in a direction- and distance-aware way.", "However, the distance or relative position embeddings used by these methods usually cannot keep the precise information of the real distance, which may not be beneficial for the Transformer to capture word orders and context relations.", "In this paper, we propose a distance-aware Transformer (DA-Transformer), which can explicitly exploit real token distance information to enhance context modeling by leveraging the relative distances between different tokens to re-scale the raw attention weights before softmax normalization.", "More specifically, since global and local context modeling usually have different distance preferences, we propose to learn a different parameter in different attention heads to weight the token distances, which controls the preferences of attention heads for long or short distances.", "In addition, since the weighted distances may not have been restricted to a proper range, we propose a learnable sigmoid function to map the weighted distances into re-scaled coefficients.", "They are further multiplied with the raw attention weights that are clipped by the ReLU function for keeping non-negativity and introducing sparsity.", "We conduct extensive experiments on five benchmark datasets for different tasks, and the results demonstrate that our approach can effectively enhance the performance of Transformer and outperform its several variants with distance modeling.", "The main contributions of this paper include: We propose a distance-aware Transformer that uses the real token distances to keep precise distance information in adjusting attention weights for accurate context modeling.", "We propose to use different parameters to weight real distances in different attention heads to control their diverse preferences for short-term or long-term information.", "We propose a learnable sigmoid function to map the weighted distances into re-scaled coefficients with proper ranges for better adjusting the attention weights.", "We conduct extensive experiments on five benchmark datasets and the results validate the effectiveness of our proposed method.", "To make this paper self-contained, we first briefly introduce the architecture of Transformer, which was initially introduced for the machine translation task (Vaswani et al., 2017).", "It has become an important basic neural architecture of various state-of-the-art NLP models like BERT (Devlin et al., 2019) and GPT (Radford et al., 2019).", "The core component of Transformer is multi-head self-attention.", "It has h attention heads, where the parameters in each head are independent.", "For the i-th attention head, it takes a matrix H as the input.", "It first uses three independent parameter matrices W_Q^(i), W_K^(i), and W_V^(i) to respectively transform the input matrix H into the input query Q^(i), key K^(i) and value V^(i), which is formulated as follows: Q^(i), K^(i), V^(i) = H W_Q^(i), H W_K^(i), H W_V^(i) (Eq. 1).", "The attention output is then Attention(Q^(i), K^(i), V^(i)) = softmax(Q^(i) (K^(i))^T / sqrt(d)) V^(i) (Eq. 2), where d is the dimension of the vectors in the query and key.", "The outputs of the h attention heads are concatenated together and the final output is a linear projection of the concatenated representations, which is formulated as follows: Multihead(Q, K, V) = Concat(head_1, ..., head_h) W_O, where head_i = Attention(Q^(i), K^(i), V^(i)) (Eq. 3), where 
W_O is an output projection matrix.", "In the standard Transformer, a position-wise feed-forward neural network is further applied to the output of the multi-head self-attention network.", "Its function is formulated as follows: FFN(x) = max(0, x W_1 + b_1) W_2 + b_2 (Eq. 4), where W_1, W_2, b_1, b_2 are kernel and bias parameters.", "Transformer also employs layer normalization (Ba et al., 2016) and residual connection (He et al., 2016) techniques after the multi-head self-attention and feed-forward neural networks, which are also kept in our method.", "Since the self-attention network does not distinguish the order and position of input tokens, Transformer adds the sinusoidal embeddings of positions to the input embeddings to capture position information.", "However, position embeddings may not be optimal for distance modeling in Transformer because distances cannot be precisely recovered from the dot-product between two position embeddings.", "Instead of directly using the sinusoidal position embedding (Vaswani et al., 2017) or the absolute position embedding (Devlin et al., 2019), several variants of the Transformer explore using relative positions to better model the distance between contexts (Shaw et al., 2018; Wang et al., 2019; Dai et al., 2019; Yan et al., 2019).", "For example, Shaw et al. (2018) proposed to add the embeddings of relative positions to the attention key and value to capture the relative distance between two tokens.", "They only kept the precise distance within a certain range by using a threshold to clip the maximum distance, to help generalize to long sequences.", "Dai et al. (2019) proposed Transformer-XL, which uses another form of relative positional encodings that integrate content-dependent positional scores and a global positional score into the attention weights.", "Yan et al. 
(2019) proposed direction-aware sinusoidal relative position embeddings and used them in a similar way to Transformer-XL.", "In addition, they proposed to use unscaled attention to better fit the NER task.", "However, relative position embeddings may not be optimal for modeling distance information because they usually cannot keep the precise information of real token distances.", "Different from these methods, we propose to directly re-scale the attention weights based on the mapped relative distances instead of using sinusoidal position embeddings, which can explicitly encode real distance information to achieve more accurate distance modeling.", "In this section, we introduce our proposed distance-aware Transformer (DA-Transformer) approach, which can effectively exploit real token distance information to enhance context modeling.", "It uses a learnable parameter to weight the real distances between tokens in each attention head, and uses a learnable sigmoid function to map the weighted distances into re-scaled coefficients with proper ranges, which are further used to adjust the raw attention weights before softmax normalization.", "The details of DA-Transformer are introduced in the following sections.", "Similar to the standard Transformer, the input of our model is also a matrix that contains the representation of each token, which is denoted as H = [h_1, h_2, ..., h_N], where N is the length of the sequence.", "We denote the real relative distance between the i-th and j-th positions as R_{i,j}, which is computed by R_{i,j} = |i - j|.", "We can then obtain the relative distance matrix R ∈ R^{N×N} that describes the relative distance between each pair of positions.", "Figure 1: The curves of our learnable sigmoid function under different v_i. In each attention head, we use a learnable parameter w_i to weight the relative distance", "by R^(i) = w_i R, which will be further used to adjust the self-attention weights.", "In our method, we stipulate that a more positive R^(i) will amplify the attention weights more strongly, while a more negative R^(i) will diminish them more intensively.", "Thus, a positive w_i means that this attention head prefers to capture long-distance information, while a negative w_i means that it focuses more on local contexts.", "By learning different values of w_i, different attention heads may have different preferences on capturing either short-term or long-term contextual information with different intensity.", "Since the raw weighted distances may not be in the proper range for adjusting the attention weights, we need to map them into re-scaled coefficients via a function R̂^(i) = f(R^(i)) that is suitable for adjusting the self-attention weights.", "However, it is not a trivial task to design the function f(·) because it needs to satisfy the following requirements: (1) f(0) = 1.", "We stipulate that zero distances do not influence the self-attention weights.", "(2) The value of f(R^(i)) should be zero when R^(i) approaches negative infinity.", "This requirement is to guarantee that if an attention head prefers to capture local information (w_i < 0), long-distance information should be suppressed.", "(3) The value of f(R^(i)) should be limited when R^(i) approaches positive infinity.", "This requirement is to ensure that the model is able to process long sequences without over-emphasizing distant contexts.", "(4) The scale of f(·) needs to be tunable.", "This aims to help the model better adjust the intensity of distance information.", "(5) The function f(·) 
needs to be monotone.", "Although the raw negative attention weights may be raised to 0 by f(·), the model can still suppress these attention weights after softmax by increasing the scale of other attention weights.", "To satisfy the five requirements above, we propose a learnable sigmoid function to map the weighted relative distances R^(i), which is formulated as follows: f(R^(i); v_i) = (1 + exp(v_i)) / (1 + exp(v_i - R^(i))) (Eq. 5), where v_i is a learnable parameter in this head that controls the upper bound and ascending steepness of this function.", "The curves of our learnable sigmoid function under several different values of v_i are plotted in Fig. 1. We can see that the proposed function satisfies all the requirements above.", "In addition, from this figure we find that if v_i is larger, the upper bound of the curve is higher, which means that distance information is more intensive.", "When v_i = 0, it is in fact identical to the standard sigmoid function except for a scaling factor of 2. By mapping the weighted distances R^(i) via the function f(·), we can obtain the final re-scaled coefficients R̂^(i) in a learnable way.", "Several illustrative examples of the re-scaled coefficients under w_i = ±1 and v_i = ±1 are respectively shown in Figs.", "2(a)-2(d).", "We can see that if w_i is positive, long-distance contexts are preferred while short-term contexts are suppressed.", "The situation is reversed if w_i turns negative.", "In addition, the coefficients in Fig.", "2(c) have larger dynamic ranges than the coefficients in Fig.", "2(a), indicating that long-distance information is more dominant in Fig.", "2(c).", "Moreover, the coefficients in Fig.", "2(d) are sharper than those in Fig.", "2(b), which indicates that the model tends to capture shorter distances.", "Then, we use the re-scaled coefficients to adjust the raw attention weights that are computed by the dot-product between the query and key, i.e., Q^(i) (K^(i))^T / sqrt(d).", "Different from existing methods that add the query-key dot-product with position or distance representations, in our approach we propose to multiply the re-scaled coefficients with the query-key dot-product.", "This is because for the tokens whose relations are very weak, if their re-scaled coefficients are large, their final attention weights will be over-amplified if we simply add the re-scaled coefficients to their raw attention weights.", "This is not optimal for modeling contextual information because the attention weights of irrelevant contexts cannot be fully suppressed.", "However, there are also some problems if we directly multiply the", "re-scaled coefficients R̂^(i) and the raw attention weights Q^(i) (K^(i))^T / sqrt(d).", "This is because the sign of the attention weights Q^(i) (K^(i))^T / sqrt(d) is indefinite and the multiplied results cannot accurately reflect the influence of distance information.", "Thus, we propose to add a ReLU (Glorot et al., 2011) activation function to the raw attention weights to keep non-negativity.", "In this way, the final output O^(i) of an attention head can be formulated as follows: O^(i) = softmax(ReLU(Q^(i) (K^(i))^T) ⊙ R̂^(i) / sqrt(d)) V^(i) (Eq. 6), where ⊙ represents element-wise product.", "The ReLU function can also introduce sparsity to the self-attention because only the positive attention weights can be amplified by the re-scaled coefficients, which makes the attention weights in our method sharper.", "We concatenate the output 
from the h independent attention heads, and project it into a unified output.", "In addition, we keep the same layer normalization and residual connection strategy as the standard Transformer.", "Compared with the standard Transformer, the major additional time cost is brought by computing the re-scaled coefficients R̂^(i) and using them to adjust the attention weights.", "The theoretical time complexity of the two operations in each head is O(N^2), which is much smaller than the time complexity of computing the attention weights, i.e., O(N^2 d).", "In addition, both Eq.", "(5) and Eq.", "(6) in our approach can be computed in a vectorized manner.", "Thus, the additional time consumption of our method is very light.", "Besides, the increase in parameters is also minimal, because we only introduce 2h additional parameters, which are usually negligible compared with the projection matrices like W_Q^(i).", "Thus, our approach inherits the efficiency of the Transformer architecture.", "Our experiments are conducted on five benchmark datasets for different tasks.", "Four of them are benchmark NLP datasets.", "The first one is AG's News (denoted as AG), which is a news topic classification dataset.", "The second one is Amazon Electronics (He and McAuley, 2016) (denoted as Amazon), which is a dataset for review rating prediction.", "The third one is the Stanford Sentiment Treebank (Socher et al., 2013) (denoted as SST).", "We use the binary classification version of this dataset.", "The fourth one is the Stanford Natural Language Inference (Bowman et al., 2015) (SNLI) dataset, which is a widely used natural language inference dataset.", "The detailed statistics of these datasets are summarized in Table 1. In addition, we also conduct experiments on a benchmark news recommendation dataset named MIND (Wu et al., 2020c), aiming to validate the effectiveness of our approach in both text and user modeling.", "It contains the news impression logs of 1 million users from Microsoft News from October 12 to November 22, 2019.", "The training set contains the logs in the first five weeks, except those on the last day, which are used for validation.", "The remaining logs are used for testing.", "The key statistics of this dataset are summarized in Table 2.
", "In our experiments, we use the 300-dimensional GloVe (Pennington et al., 2014) embeddings for word embedding initialization.", "The number of attention heads is 16, and the output dimension of each attention head is 16.", "We use one Transformer layer in all experiments.", "On the AG, SST and SNLI datasets, we directly apply Transformer-based methods to the sentences.", "On the Amazon dataset, since reviews are usually long documents, we use Transformers in a hierarchical way, by first learning sentence representations from words via a word-level Transformer and then learning document representations from sentences via a sentence-level Transformer.", "On the MIND dataset, following Wu et al. (2019, 2020b), we also use a hierarchical model architecture that first learns representations of historical clicked news and candidate news from their titles with a word-level Transformer, then learns user representations from the representations of clicked news with a news-level Transformer, and finally matches user and candidate news representations to compute click scores.", "We use the same model training strategy with negative sampling techniques as NRMS (Wu et al., 2019).", "On all datasets we use Adam (Kingma and Ba, 2015) as the optimization algorithm, and the learning rate is 1e-3.", "On the AG, Amazon, SST and SNLI datasets, accuracy and macro-F score are used as the performance metrics.", "On the MIND dataset, following Wu et al. (2019), we use the average AUC, MRR, nDCG@5 and nDCG@10 scores of all sessions as the metrics.", "Each experiment is repeated 5 times independently and the average results with standard deviations are reported.", "We compare our proposed DA-Transformer method with several baseline methods, including: (1) Transformer (Vaswani et al., 2017), the vanilla Transformer architecture, where sinusoidal positional embeddings are used.", "(2) Transformer-RPR (Shaw et al., 2018), a variant of Transformer with relative position representations.", "(3) Transformer-XL (Dai et al., 2019), a variant of Transformer that consists of a segment-level recurrence mechanism and a sinusoidal relative position encoding scheme.", "(4) Adapted Transformer (Yan et al., 2019), a variant 
From the results, we have several observations.", "First, compared with the vanilla Transformer, the compared methods that consider distance information consistently achieve better performance.", "It shows that distance information is very important in context modeling.", "Second, among the methods with distance information, the performance of Transformer-RPR is lower than the others.", "This may be because Transformer-RPR does not keep the precise long-distance information.", "Third, by comparing Transformer-XL and Adapted Transformer , we find that the performance of Adapted Transformer is better on the SST dataset, while Transformer-XL is better on other datasets.", "This is probably because Adapted Transformer is more suitable for modeling local contexts and the sentences in the SST dataset are usually short, while Transformer-XL may be more appropriate for modeling long sequences.", "Fourth, our method consistently achieves better performance on the five datasets, and its improvement over the second best method is statistically significant (t-test p<0.05).", "This is because our method can explicitly encode real distance information rather than using positional encoding, making the modeling of distance more accurate.", "We further compare the performance of different methods in a rating regression task on the Amazon dataset.", "The results are shown in Fig. 3. From Fig. 3 we observe similar patterns with the results in classification tasks, which validate the generality of our DA-Transformer in different genres of tasks.", "Next, we study the influence of using different mapping functions f ( ) for computing the re-scaled coefficients.", "We compare the performance of our method w.r.t. several different f ( ) , including: (1) RMSE MAE 0.5 0.7 0.9 1.1 1.3 1.5 Transformer Transformer-RPR Transformer-XL Adapted Transformer DA-Transformer Figure 3: Performance comparison of rating regression on Amazon .", "f ( x ) = min ( x, T ) (clip), using a threshold T to clip the weighted distance; (2) f ( x ) = k i x + b i (lin-ear), using a linear transformation to the weighted distance; (3) f ( x ) = exp( x ) (exponent), using an exponent function to map the weighted distance; (4) f ( x ) = 1 1+exp( x ) (sigmoid), using the sigmoid function to activate the weighted distance; and (5) f ( x ; v i ) = 1+exp( v i ) 1+exp( v i x ) , our learnable sigmoid function.", "Due to space limitation, we only present the results on the AG , Amazon and MIND datasets in Fig. 4. 
From these results, we find that clip is not optimal for mapping the weighted distance.", "This is because it cannot keep the precise distance information beyond a certain range.", "In addition, simply using the linear transformation is also insufficient.", "This may be because our attention adjustment method requires f(·) to be positive, which a linear transformation cannot guarantee.", "Besides, we find that the sigmoid function and our proposed function are better than the exponential function.", "This may be because long sequences will lead to the problem of exponent explosion, which is harmful to context modeling.", "Moreover, our proposed learnable sigmoid function is better than the standard sigmoid function.", "It shows that adjusting the activation function in a learnable way can better map the raw distances into re-scaled coefficients.", "Then, we explore the influence of different methods for adjusting the raw attention weights.", "We consider four different kinds of methods, including: (1) adding the re-scaled coefficients to the attention weights normalized by softmax (late add); (2) multiplying the re-scaled coefficients with the attention weights normalized by softmax (late multiply); (3) adding the re-scaled coefficients to the raw attention weights before normalization (early add), which is widely used in existing methods like Transformer-XL; (4) multiplying the re-scaled coefficients with the raw attention weights activated by ReLU, which is the method used in our approach (early multiply).", "The results on the AG, Amazon and MIND datasets are shown in Fig. 5. According to these results, we find that early adjustment is better than late adjustment.", "This may be because the late adjustment methods will change the total amount of attention, which may not be optimal.", "In addition, we find that multiplying is better than adding for both early and late adjustment.", "This may be because adding large re-scaled coefficients may over-amplify some attention weights.", "For example, if a raw attention weight is relatively small, it is not suitable to add large re-scaled coefficients to it because the corresponding contexts may not have close relations.", "In contrast, multiplying the re-scaled coefficients will not over-amplify the low attention weights.", "Moreover, in our early multiply method we further propose to use the ReLU function to introduce sparsity and make the Transformer more focused.", "Thus, our method is better than the existing early add method in adjusting the attention weights.", "Finally, we interpret our proposed method by visualizing its key parameters and the attention weights.", "We first visualize the parameters w_i and v_i in our method, which control the preferences of attention heads on long-term or short-term information and the shape of the learnable sigmoid function, respectively.", "The visualization results on the AG and MIND datasets are respectively shown in Figs.", "6 and 7.", "From Fig. 
6, we find it is very interesting that half of the parameters w_i are positive and the rest of them are negative.", "It indicates that half of the attention heads mainly aim to capture local contexts, while the rest are responsible for modeling long-distance contexts.", "It may be because both short-term and long-term contexts are useful for understanding news topics.", "In addition, we find that most attention heads have negative v_i while the rest are positive.", "It shows that on the AG dataset the intensity of attention adjustment is mild in most attention heads.", "From Fig.", "7(a), we find long-term information is somewhat more important than local information in modeling news texts. (We show the average results of 5 runs.) Only one attention", "head has a strong negative w_i, while the values of w_i in all the rest of the heads are positive.", "It means that only one attention head tends to capture short-term user interests while all the other heads prefer to capture long-term user interests.", "This is intuitive because users usually tend not to intensively click very similar news, and their long-term interests may have a more decisive influence on their news clicks.", "In addition, we find it interesting that on MIND all values of v_i are positive.", "It may indicate that distance information has a strong impact on the attention weights.", "These visualization results show that DA-Transformer can flexibly adjust its preference for short-term or long-term information and the intensity of attention adjustment by learning different values of w_i and v_i according to the task characteristics.", "We then visualize the attention weights produced by the vanilla Transformer and the distance-aware attention weights in our DA-Transformer method.", "The attention weights of a sentence in the AG dataset computed by four different attention heads are respectively shown in Figs.", "8(a) and", "8(b).", "From Fig.", "8(a), we find it is difficult to interpret the self-attention weights because they are too soft.", "We do not observe significant correlations between the sequence length and the signs of w_i.", "This may indicate that the values of w_i depend more on the task characteristics than on text lengths.", "(b) DA-Transformer.", "The first two heatmaps are produced by heads with w_i < 0 and the others are produced by heads with w_i > 0.", "Figure 8: The self-attention weights learned by the vanilla Transformer and our proposed DA-Transformer method.", "In addition, it is difficult for us to understand the differences between the information captured by different attention heads.", "Different from the vanilla Transformer, from Fig.", "8(b) we find that the attention weights obtained by our method are sparser, indicating that the attention mechanism in our method is more focused.", "In addition, it is easier for us to interpret the results by observing the attention heatmap.", "For example, the first two heatmaps in Fig.", "8(b) are produced by the two attention heads with preferences for short-term contexts.", "We can see that they mainly capture the relations among local contexts, such as the relations between 'biotech' and 'sector'.", "In contrast, in the latter two heatmaps, obtained by the two attention heads that prefer long-term contexts, we can observe that the model tends to capture the relations between a word (e.g., 'biotech') and the global contexts.", "These results show that different attention heads in our method are responsible for capturing different kinds of information, and their 
"Thus, our method can be better interpreted than vanilla Transformers.", "In this paper, we propose a distance-aware Transformer, which can leverage the real distance between contexts to adjust the self-attention weights for better context modeling.", "We propose to first use different learnable parameters in different attention heads to weight the real relative distance between tokens.", "Then, we propose a learnable sigmoid function to map the weighted distances into re-scaled coefficients with proper ranges.", "The coefficients are further multiplied with the raw attention weights that are activated by the ReLU function, to keep non-negativity and produce sharper attention.", "Extensive experiments on five benchmark datasets show that our approach can effectively improve the performance of the Transformer by introducing real distance information to facilitate context modeling.", "This work was supported by the National Natural Science Foundation of China under Grant numbers U1936208 and U1936216." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "objective", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "result", "abstain", "objective", "objective", "objective", "abstain", "result", "other" ]
[ "Existing dialogue corpora and models are typically designed under two disjoint motives: while task-oriented systems focus on achieving functional goals (e.g., booking hotels), open-domain chatbots aim at making socially engaging conversations.", "In this work, we propose to integrate both types of systems by Adding Chit-Chat to ENhance Task-ORiented dialogues ( ACCENTOR ), with the goal of making virtual assistant conversations more engaging and interactive.", "Specifically, we propose a Human AI collaborative data collection approach for generating diverse chitchat responses to augment task-oriented dialogues with minimal annotation effort.", "We then present our new chit-chat-based annotations to 23 .", "8 K dialogues from two popular task-oriented datasets (Schema-Guided Dialogue and MultiWOZ 2.1) and demonstrate their advantage over the originals via human evaluation.", "Lastly, we propose three new models for adding chit-chat to task-oriented dialogues, explicitly trained to predict user goals and to generate contextually relevant chit-chat responses.", "Automatic and human evaluations show that, compared with the state-of-the-art task-oriented baseline, our models can code-switch between task and chit-chat to be more engaging, interesting, knowledgeable, and humanlike, while maintaining competitive task performance.", "With modeling innovations, increasing computing power, and a growing number of datasets, recent years have witnessed significant improvements in the performance of both task-oriented dialogue systems and chit-chat systems (Adiwardana et al., 2020; Roller et al., 2020; Hosseini-Asl et al., 2020; Peng et al., 2020a).", "Most research on dialogue Work done as a research intern at Facebook.", "systems focuses on a particular type of dialogue system.", "Work on task-oriented dialogue systems typically aims to track user goals with higher accuracy to better achieve functional goals (Rastogi et al., 2020) with the sacrifice of not paying explicit attention to user experience, such as making the conversation more engaging, while the latter is usually the target of research on chit-chat systems (Li et al., 2019).", "In this work, we step forward and propose to integrate both types of systems by A dding C hitC hat to EN hance T askOR iented dialogues ( ACCENTOR ), aiming to have a virtual assistant capable not only of performing various complex tasks such as checking the weather, booking hotels, and finding restaurants, but also incorporating casual and contextually relevant chit-chat.", "We hypothesize that the added chit-chat can make the assistant appear more social, personable, and engaging, without being misleading or inappropriate, compared with existing task-oriented dialogue systems.", "To show the feasibility of ACCENTOR and gather supervisory data for follow-up research, we propose a Human AI collaborative data construction approach that can effectively add suitable chit-chat to the beginning or end of system responses in existing task-oriented dialogue datasets.", "Specifi-cally, we first generate chit-chat candidates for augmentation using off-the-shelf pre-trained language models and open-domain chatbots (Section 2.1).", "Next, we automatically filter out candidates that are unlikely to be of good quality using a filter model (Section 2.2).", "Finally, human annotators label each of the remaining candidates as good or bad, with justifications (Section 2.3).", "We augment the Schema-Guided Dialogue (SGD) (Rastogi et al., 2020) and MultiWOZ 2.1 (Eric et al., 2020) 
"(See Figure 1 or Appendix A.4 for examples.)", "We employ ACUTE-Eval (Li et al., 2019) to compare the augmented versions with the originals along four axes: engagingness, interestingness, knowledge, and humanness.", "We find that the augmented dialogues are consistently preferred by human judges across the four axes for both datasets (Section 4.1).", "In addition, we propose and evaluate three models for adding chit-chat to task-oriented dialogues, including an end-to-end model and two code-switcher models built upon off-the-shelf task-oriented and chit-chat systems (Section 3).", "Compared with the baseline model trained with the original unaugmented data, our models trained with the augmented version can generate significantly higher-rated responses in terms of human preference while maintaining competitive task performance in goal tracking accuracy and action decision F1 (Section 4.2).", "Our main contributions are: we propose (1) a data augmentation approach for generating diverse chit-chat supervisory data for task-oriented dialogues, leveraging pre-trained generative models and a custom filter model to minimize human annotation effort; (2) new versions of the popular task-oriented datasets SGD and MultiWOZ 2.1, with newly added chit-chat annotations to 23.8K dialogues; and (3) three integrated chit-chat and task-oriented neural dialogue models for the above, substantially outperforming the state-of-the-art approach in terms of human evaluation of engagingness, interestingness, knowledge, and humanness.", "To our knowledge, we are the first to propose an annotated dataset and models that study explicit code-switching between full-stack task-oriented dialogues and free-form chit-chat responses.", "In this section, we describe an approach to gather supervisory data for adding contextually relevant chit-chat to task-oriented dialogues.", "Our approach needs minimal annotation effort to augment suitable and diverse chit-chat add-ons that are not available in existing task-oriented datasets (Section 5.1).", "We primarily report results based on dialogues from the SGD dataset in this study, because it is the largest task-oriented dialogue dataset and is generally cleaner compared with most other task-oriented dialogue datasets.", "However, our approach is flexible and thus not limited to dialogues from a particular task-oriented dataset (Section 4.1).", "Figure 2 shows the overview of our approach.", "Given a task-oriented dialogue D = {u_1, s_1, u_2, s_2, ..., u_n, s_n}, where u_{1...n} and s_{1...n} represent user turns and system turns, respectively, we generate chit-chat candidates for augmenting s_i in two ways: (i) pass u_1, s_1, ..., u_i, s_i to an off-the-shelf pre-trained model (a language model or a chit-chat chatbot) and let the model add tokens to the end of s_i; (ii) pass u_1, s_1, ..., u_i to a pre-trained model and let the model generate a turn.",
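As an illustration of the two generation modes using Hugging Face transformers, the sketch below assumes GPT-2 only and hypothetical decoding parameters; the paper's actual model sizes and configurations are in its Appendix A.1.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def append_candidate(history_through_s_i: str, max_new_tokens: int = 20) -> str:
    """Mode (i): let the model add tokens to the end of system turn s_i."""
    ids = tok(history_through_s_i, return_tensors="pt").input_ids
    out = lm.generate(ids, do_sample=True, top_p=0.9,
                      max_new_tokens=max_new_tokens,
                      pad_token_id=tok.eos_token_id)
    # Decode only the newly generated continuation.
    return tok.decode(out[0, ids.size(1):], skip_special_tokens=True)

def prepend_candidate(history_through_u_i: str, max_new_tokens: int = 30) -> str:
    """Mode (ii): let the model generate a whole new turn to prepend to s_i."""
    ids = tok(history_through_u_i, return_tensors="pt").input_ids
    out = lm.generate(ids, do_sample=True, top_p=0.9,
                      max_new_tokens=max_new_tokens,
                      pad_token_id=tok.eos_token_id)
    return tok.decode(out[0, ids.size(1):], skip_special_tokens=True)
```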
"We regard the output of (i) and (ii) as a chit-chat candidate to be appended and prepended to s_i, respectively.", "If a chit-chat candidate consists of multiple sentences, we also regard each individual sentence as a chit-chat candidate.", "We run differently sized GPT-2 (Radford et al., 2019) and BlenderBot (Roller et al., 2020) models with various decoding parameters as the pre-trained model and generate an average of 175.5 candidates for each of the dialogues from the SGD dataset.", "See Appendix A.1 for configuration details.", "We examine the quality of the model-generated candidates from Section 2.1 by performing a pilot annotation ourselves on a small proportion of the candidates.", "[Table 1: Who is the virtual assistant? This digital assistant is more than just a bot that spits out facts. It has access to a wide range of information, which it can express not only as factual commentary but also as opinions and preferences. However, it is not a person and should not pretend to have real experiences or be capable of physical actions. It should be personable and person-like, without appearing counterfeit.]", "The annotation results show that only about 1/10 of the candidates are suitable.", "Therefore, instead of directly sending the candidates to crowd workers for annotation, we first build a filter model that automatically removes candidates that are unlikely to be of good quality, to reduce the potential annotation workload.", "The filter is a hybrid model that consists of a RoBERTa-based binary classifier (Liu et al., 2019) and a rule-based ranker.", "The classifier takes as input an augmented dialogue, in which we explicitly surround the added chit-chat candidate with a pair of special tokens to help the model locate the candidate.", "We train the classifier with 1.7K candidates that are labeled as good/bad from the pilot annotation.", "The rule-based ranker ranks each candidate based on (i) the posterior probability output by the binary classifier, (ii) whether the candidate matches a list of bad patterns (e.g., containing a URL), (iii) the frequency of appearances of the candidate among all generated candidates, (iv) the similarity to the other candidates for the dialogue, and (v) the similarity to the system response being augmented.", "While (i) and (ii) directly help evaluate the quality of the candidate, (iii), (iv), and (v) additionally help create more variety (e.g., penalizing high-frequency candidates such as You're welcome).", "We keep the top ten candidates for each of the dialogues.", "We present more details in Appendix A.2.",
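As an illustration of how a hybrid ranker could combine signals (i) through (v), here is a sketch; the weights, bad-pattern list, and helper callables are hypothetical, not the authors' implementation.

```python
import re
from collections import Counter

BAD_PATTERNS = [re.compile(r"https?://"), re.compile(r"\bwww\.")]  # assumed list

def rank_candidates(cands, classifier_prob, similarity, response, top_k=10):
    """Score candidates with signals (i)-(v); higher is better.

    cands: list of candidate strings for one dialogue.
    classifier_prob: callable str -> posterior P(good) from the classifier.
    similarity: callable (str, str) -> similarity in [0, 1].
    response: the system response being augmented.
    """
    freq = Counter(cands)
    scored = []
    for c in set(cands):
        score = classifier_prob(c)                    # (i) classifier posterior
        if any(p.search(c) for p in BAD_PATTERNS):    # (ii) bad-pattern match
            score -= 1.0
        score -= 0.1 * freq[c]                        # (iii) penalize frequent candidates
        others = [o for o in set(cands) if o != c]
        if others:                                    # (iv) redundancy with other candidates
            score -= 0.1 * max(similarity(c, o) for o in others)
        score -= 0.1 * similarity(c, response)        # (v) redundancy with the response
        scored.append((score, c))
    return [c for _, c in sorted(scored, reverse=True)[:top_k]]
```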
routine.", "Misleading: The candidate provides additional information that is false or cannot be verified immediately.", "For example, the underlined candidate in the two-turn dialogue U : I want to book a hotel room in San Diego with a check in on Thursday. A : There are over 10 hotels in San Diego. I would stay at Arlo NoMad if I were you. should be marked as misleading because Arlo NoMad is newly introduced information, which the annotator would have to look up to verify that a hotel by this name exists in San Diego, even though the information may be true.", "Annotators can choose one, both, or neither of the following justifications for a good candidate: Social: The candidate keeps the conversation flowing smoothly by appropriately switching to relevant topics, asking casual follow up questions, or engaging in social pleasantries.", "The design of this subcategory is inspired by the line of research that studies different social and discourse strategies in chit-chat dialogue systems (Yu et al., 2016).", "Useful: The candidate enhances the conversation by appropriately offering opinions, commentaries, or pertinent and truthful information.", "Truthfulness should be established by conversational context or real world knowledge.", "To reduce annotation workload, if annotators have to use external resources (e.g., Wikipedia, search engines, maps) to verify information, they are instructed to label the candidate as misleading instead.", "The design of this subcategory is inspired by the line of work on knowledge-grounded dialogue systems that study contextual knowledge injections (Dinan et al., 2019).", "We instruct annotators to evaluate each candidate independently as if it were the only augmentation for its associated dialogue.", "We discuss the additional dimension of complexity introduced by having multiple augmentations jointly in Section 4.1.", "Annotation time per dialogue is 243 s.", "The Fleiss' Kappa among crowd workers is 0 .", "52 .", "We view the agreement score as reasonable since whether an added chit-chat candidate leads to improved quality of a conversation can be highly subjective in many scenarios.", "We denote our augmented version of the SGD dataset as ACCENTOR-SGD and summarize the statistics in Table 2.", "We observe that the four provided justification categories provide adequate coverage of the justifications for most Rewriter Arranger Dialogue Context Chit-Chat Bot Task Bot Chit-Chat Response Candidate Task-Oriented Response Candidate RoBERTa [ ; ] [ ; ] [ ; ] GPT-2 actions; system response Input Trainable , [ ; ; ; ] Output Fixed Figure 3: A diagram for the proposed code-switching models.", "annotations.", "41 .", "4% of the candidates are good, showing the effectiveness of candidate filtering.", "An analysis based on linguistic features suggests that bad candidates are more personal and negative than good candidates.", "Specifically, 40 .", "0% of bad candidates involve first-person pronouns, while the ratio is 26 .", "5% for good candidates.", "81 .", "7% of good candidates have positive sentiment, measured by VADER, a lexicon and rule-based sentiment analysis tool (Hutto and Gilbert, 2014), while the ratio is 73 .", "0% for bad candidates.", "Examples of the resulting dataset are presented in Appendix A.4.", "Since oracle information (i.e., oracle belief states and oracle action decisions) is not available in practical use and the SGD dataset does not have the associated database (i.e., a table of possible entities) released, we focus on exploring the 
"Since oracle information (i.e., oracle belief states and oracle action decisions) is not available in practical use and the SGD dataset does not have an associated database (i.e., a table of possible entities) released, we focus on exploring the end-to-end setting, in which we generate delexicalized task-oriented responses without using oracle information or database search results, following Hosseini-Asl et al. (2020).", "Given the dialogue history (i.e., previous turns) as context, the goal of the model for each system turn is to accurately generate belief states (i.e., a list of (domain, slot, value) triplets), action decisions (i.e., a list of (domain, action_type, slot) triplets), and a corresponding system response that is functionally accurate and socially engaging.", "We re-implement SimpleTOD (Hosseini-Asl et al., 2020) as our main baseline model, which is a state-of-the-art model in the end-to-end setting we explore.", "In addition, we propose an extension of SimpleTOD that incorporates chit-chat acts, as well as two new models (Arranger and Rewriter; Figure 3) that code-switch between chit-chat and task-oriented responses more explicitly.", "SimpleTOD.", "It is a causal language model that models the joint probability over the concatenation of the dialogue history H_t, belief states B_t, action decisions A_t, and a task-oriented response T_t for each turn t.", "During inference, the model takes as input H_t and generates B_t, A_t, and T_t.", "We refer readers to Hosseini-Asl et al. (2020) for more details.", "SimpleTOD+.", "We extend SimpleTOD by introducing a special new dialogue action, chit-chat, and good chit-chat candidates into the construction of input sequences during training.", "Specifically, let C+_t denote the set of good candidates for system turn t.", "If C+_t is empty, we construct the same training sequence as SimpleTOD.", "Otherwise, for each C_t ∈ C+_t that is labeled as a candidate to be prepended (resp. appended) to the turn, we use the concatenation of H_t, B_t, [chit-chat], A_t, C_t, and T_t (resp. H_t, B_t, A_t, [chit-chat], T_t, and C_t) as a training sequence.",
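For concreteness, one way such SimpleTOD+ training sequences could be serialized is sketched below; the delimiter tokens are hypothetical stand-ins, not the exact special tokens used by SimpleTOD.

```python
def build_sequence(history, beliefs, actions, response, chitchat=None, prepend=False):
    """Serialize one training example for a causal LM, SimpleTOD+-style.

    history/beliefs/actions/response: already-linearized strings.
    chitchat: an optional good candidate C_t; prepend selects its position.
    """
    parts = ["<|context|>", history, "<|belief|>", beliefs]
    if chitchat is None:
        # Plain SimpleTOD sequence: H_t, B_t, A_t, T_t.
        parts += ["<|action|>", actions, "<|response|>", response]
    elif prepend:
        # H_t, B_t, [chit-chat], A_t, C_t, T_t
        parts += ["<|chitchat|>", "<|action|>", actions,
                  "<|response|>", chitchat, response]
    else:
        # H_t, B_t, A_t, [chit-chat], T_t, C_t
        parts += ["<|action|>", actions, "<|chitchat|>",
                  "<|response|>", response, chitchat]
    return " ".join(parts)
```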
"Arranger.", "This model arranges the output of an off-the-shelf task-oriented dialogue model and an off-the-shelf chit-chat model without intervening in the task.", "It outputs the belief states and action decisions generated by the task-oriented model without modification.", "To generate a response for each system turn t, this model takes as input (i) the dialogue history H_t, (ii) a chit-chat response C_t generated by the chit-chat model based on H_t, and (iii) a task-oriented response T_t generated by the task-oriented dialogue model based on H_t.", "The model chooses one of the following as the response: C_t followed by T_t, T_t followed by C_t, or T_t only.", "Specifically, the model encodes the concatenation of H_t and each of these three responses with a RoBERTa encoder (Liu et al., 2019) and passes the resulting representations through a linear-plus-softmax layer to make the choice.", "To train the model, we form training instances by regarding each chit-chat candidate for turn t from the training set of ACCENTOR-SGD as C_t and the ground-truth task-oriented response as T_t, and setting the target choice based on the label (i.e., good/bad) and position (i.e., beginning/end of the response) of the candidate.", "Rewriter.", "This model rewrites the output of an off-the-shelf task-oriented dialogue model and an off-the-shelf chit-chat model.", "It directly outputs the task-oriented model's belief states without modification and generates action decisions and a system response with a causal language model.", "The causal language model differs from SimpleTOD+ in that it has two additional components, T_t and C_t, added between H_t and B_t in each training sequence, where we form T_t and C_t in the same way as we do for Arranger.", "During the inference stage, it takes as input H_t, T_t output by the task-oriented dialogue model, C_t output by the chit-chat model, and B_t output by the task-oriented dialogue model, and generates action decisions and a system response.", "Note that since 25.4% of the annotated system turns in the training set of ACCENTOR-SGD have both good and bad chit-chat candidates, C+_t can be non-empty even when C_t is a bad candidate, which enables the model to potentially generate a suitable chit-chat augmented response even if the output of the off-the-shelf chit-chat model is not good.", "Unless specified otherwise, for causal language models, we use the 12-layer GPT-2 (117M parameters) as the pre-trained language model (Radford et al., 2019) and fine-tune for ten epochs.", "We set the batch size to 36 and the learning rate to 1 × 10^-3.", "We employ the SimpleTOD baseline as the off-the-shelf task-oriented dialogue model for Arranger and Rewriter.", "We fine-tune a 90M-parameter model (Shuster et al., 2020) on each of the good chit-chat candidates, with the associated dialogue history as the context, from the training set of ACCENTOR-SGD, following the hyperparameters employed by Roller et al. (2020), and employ the resulting model as the off-the-shelf chit-chat model in Arranger and Rewriter.",
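A minimal sketch of the Arranger's three-way choice, assuming Hugging Face RoBERTa with a single linear scoring head; the input formatting, [CLS] pooling, and shared scoring head are assumptions of this sketch.

```python
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizer

class Arranger(nn.Module):
    """Score three candidate arrangements of chit-chat and task response."""

    def __init__(self):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        self.scorer = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, history, chitchat, task_response, tok):
        options = [f"{chitchat} {task_response}",   # C_t followed by T_t
                   f"{task_response} {chitchat}",   # T_t followed by C_t
                   task_response]                   # T_t only
        batch = tok([f"{history} </s> {o}" for o in options],
                    return_tensors="pt", padding=True, truncation=True)
        cls = self.encoder(**batch).last_hidden_state[:, 0]   # (3, hidden)
        logits = self.scorer(cls).squeeze(-1)                 # (3,)
        probs = torch.softmax(logits, dim=0)
        return options[int(probs.argmax())], probs

tok = RobertaTokenizer.from_pretrained("roberta-base")
```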
"We use RoBERTa-BASE (Liu et al., 2019) as the pre-trained language model for Arranger and fine-tune for three epochs with a learning rate of 2 × 10^-5 and a batch size of 24.", "ACCENTOR-SGD.", "We first evaluate ACCENTOR at the dataset level, aiming to answer two questions: Q1: Are task-oriented dialogues augmented with good chit-chat preferred by human judges over the unaugmented ones? Q2: Does the answer to Q1 depend on how frequently we augment system responses with chit-chat?", "[Table 3 reports Joint GA, Avg GA, Act-Slot F1, BLEU-4_orig, and BLEU-4_aug for each model, on all vs. seen services.]", "To answer these questions, we randomly sample 100 dialogues from ACCENTOR-SGD, each having at least 8 turns and enough candidates labeled as good for augmenting over 40% of system responses, so that we can compare the same task-oriented dialogue with different chit-chat injection frequencies that fall into each of the following four intervals: (0.1, 0.2], (0.2, 0.3], (0.3, 0.4], and (0.4, 1].", "In particular, for the last interval, we augment all system responses that have chit-chat candidates labeled as good, while for the first three intervals, we only augment a randomly selected fraction to fit the interval.", "We employ ACUTE-Eval (Li et al., 2019) for evaluation, whereby we ask human evaluators to make pairwise comparisons of complete dialogues over four axes: engagingness, interestingness, knowledge, and humanness.", "We provide the wording of the questions in Appendix A.3.", "As shown in Figure 4, the chit-chat augmented dialogues from ACCENTOR-SGD are preferred by human judges over the originals on all ACUTE-Eval metrics, regardless of the injection frequency (all p-values < 0.05).", "Among the different injection frequency ranges, (0.2, 0.3] is the best.", "We offer three hypotheses to explain this finding: (i) (0.2, 0.3] best balances being engaging and not too talkative.", "(ii) There are inevitable annotation errors, as well as scenarios where whether a candidate is good or bad is subjective.", "A higher injection frequency means a higher chance of being affected by these factors.", "(iii) Since candidates are labeled independently, inter-candidate incompatibility may arise (e.g., expressing contradictory preferences), especially when we have a high injection frequency.", "Table 4 shows a real example (a user asking for movies directed by Jonathan Levine) to support our hypotheses.", "Specifically, candidate 3 is labeled as good but is in fact not a suitable (or at least a questionable) candidate, supporting hypothesis (ii).", "While candidates 2 and 4 are good when we evaluate them separately, they may be less preferred if we assess them jointly, because they convey the same meaning: Long Shot is a good comedy.", "Having them together may appear incompatible (i.e., repetitive) or sound verbose to the user, supporting hypotheses (i) and (iii).", "ACCENTOR-MultiWOZ.", "We augment about 1K randomly sampled dialogues from another task-oriented dataset, MultiWOZ 2.1 (Eric et al., 2020), following the same steps as described in Section 2.", "Crowd workers label 30.0% of the candidates as good, which is lower compared with ACCENTOR-SGD (41.4%; Table 2).",
"We attribute the difference to (i) a performance downgrade of the filter model, since we do not re-train the model for MultiWOZ 2.1, and (ii) a higher chance of a chit-chat augmented response being too verbose to be good, since the average number of tokens per system turn in MultiWOZ 2.1 is larger than that of SGD (17.3 vs. 13.1).", "Nevertheless, the augmented version (denoted as ACCENTOR-MultiWOZ) is significantly more preferred than the original, as shown in Figure 6, where we randomly sample 100 dialogues from ACCENTOR-MultiWOZ, augment all of their system responses that have chit-chat candidates labeled as good, and compare these augmented dialogues with the corresponding original dialogues.", "For automatic evaluation, we report goal accuracies for evaluating belief states, act-slot F1 for evaluating action decisions, and two BLEU-4 scores (BLEU-4_orig, BLEU-4_aug) for evaluating system responses, where we use original (resp. augmented) system responses as references for BLEU-4_orig (resp. BLEU-4_aug).", "Table 3 summarizes the evaluation results.", "Since the test set of SGD contains unseen services (i.e., services not seen during training) designed to evaluate the model's generalizability, we report results on all services (All) and seen services only (Seen), following Rastogi et al. (2020).", "Our proposed models generally achieve a similar task performance level compared with the SimpleTOD baseline.", "Unsurprisingly, the proposed models achieve lower BLEU-4_orig and higher BLEU-4_aug.", "Human Evaluations.", "We turn to human evaluations for a more comprehensive measure of response generation performance.", "We employ the same ACUTE-Eval metrics as in the data evaluations.", "We randomly sample 100 dialogues from the test set of ACCENTOR-SGD.", "For each sampled dialogue D = {u_1, s_1, u_2, s_2, ..., u_n, s_n}, we pass u_1, s_1, ..., u_i to each model M ∈ {SimpleTOD, SimpleTOD+, Arranger, Rewriter} to obtain its system response s^M_i for the i-th system turn (1 ≤ i ≤ n).", "Let D^M represent {u_1, s^M_1, ..., u_n, s^M_n}.", "We ask evaluators to compare each pair of D^{M_1} and D^{M_2}, where M_1, M_2 ∈ {SimpleTOD, SimpleTOD+, Arranger, Rewriter} and M_1 ≠ M_2.", "As shown in Figure 5, all of the chit-chat augmented models outperform the SimpleTOD baseline on the four ACUTE-Eval metrics.", "Among the chit-chat augmented models, none shows a clear win over the other two at the quantitative level.", "We show a full dialogue example comparing responses generated by different models, along with supplementary discussions, in Appendix A.5.", "[Figure 7: Human evaluation results of the modified Arranger with controlled injection frequency (p-value < 0.005). SimpleTOD vs. modified Arranger win %: engagingness 14 vs. 86, interestingness 25 vs. 75, knowledge 20 vs. 80, humanness 20 vs. 80, with changes of 10, 2, 3, and 9 points, respectively, relative to the original Arranger.]", "Considering that the injection frequency affects human evaluations (Section 4.1) and that none of our models explicitly controls the injection frequency, we experiment with controlling it by modifying Arranger to consider including chit-chat in the current turn only when the injection frequency from the first turn to the current turn is less than 0.3.", "Compared with the original Arranger, the modified Arranger achieves a higher win percentage over SimpleTOD, as shown in Figure 7.", "We leave further exploration of injection frequency for future work.",
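The injection-frequency cap can be expressed as a small wrapper around the Arranger's decision; a sketch under the stated 0.3 threshold, with all function and dictionary names hypothetical:

```python
def respond_with_cap(history, chitchat, task_response, stats, arranger, threshold=0.3):
    """Only consider chit-chat while the running injection frequency < threshold.

    stats: per-dialogue bookkeeping, e.g. {"turns": 0, "augmented": 0}.
    arranger: callable returning the chosen response string.
    """
    stats["turns"] += 1
    if stats["augmented"] / stats["turns"] >= threshold:
        return task_response                      # cap reached: task response only
    choice = arranger(history, chitchat, task_response)
    if choice != task_response:                   # chit-chat was included this turn
        stats["augmented"] += 1
    return choice
```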
"Approach.", "Our proposed strategy to augment task-oriented dialogue system responses with chit-chat is simple compared with how chit-chat emerges in human conversations, where functionality and engagingness structurally intertwine in a more complex fashion.", "Our proposed Rewriter model does have the modeling capability to compose both functions organically, but it is limited by the dataset's target arrangement (i.e., the concatenation of two separate components).", "Despite the limitation, our chosen design of code-separation has practical merits: we can easily extend the proposed approach to an existing production-level virtual assistant system as a modularized solution, and it has minimal interference with the user-perceived task success rate, a core metric widely adopted in virtual assistant systems.", "Another limitation of our work is that we only augment responses on the system side in our dataset, and the augmentations are independent of each other, whereas in real-life situations users are also likely to make chit-chat, and the chit-chat between the user and the system should ideally be related.", "We leave addressing these limitations for future research.", "Evaluation.", "We follow the previous literature on evaluation and regard the four ACUTE-Eval metrics as the primary measure of response generation performance in this work.", "However, there is a large overlap between the desired qualities measured by the different human judgment categories used in ACUTE-Eval.", "The four ACUTE-Eval metrics favor the same dialogue 84.4% of the time in our evaluation, indicating high correlations between these metrics.", "We leave the study of addressing this issue for future work.", "Dialogue system research has been consistently supported by the development of new datasets.", "The Dialog State Tracking Challenge (DSTC) series (Williams et al., 2013; Henderson et al., 2014a,b; Williams et al., 2014; Kim et al., 2016, 2017; Moon et al., 2020) provides common testbeds for task-oriented dialogues.", "Following DSTC, researchers have created a variety of publicly available task-oriented dialogue datasets (El Asri et al., 2017; Shah et al., 2018; Budzianowski et al., 2018; Rastogi et al., 2020).", "Another line of work seeks to facilitate open-domain chatbot development with large amounts of human-created text data generated in a social context (Baumgartner et al., 2020) and supervision for a variety of desirable general qualities such as being engaging, personable, knowledgeable, and empathetic (Zhang et al., 2018; Dinan et al., 2019; Rashkin et al., 2019; Moon et al., 2019; Wang et al., 2019; Smith et al., 2020).", "Our work bridges the two lines.", "We compare ACCENTOR-SGD and ACCENTOR-MultiWOZ with relevant and representative dialogue datasets in Table 5.",
"Note that very few dialogue corpora contain explicit annotations for both task-oriented and chit-chat utterances.", "For example, the task-oriented dialogue corpora constructed by Rastogi et al. (2020) and Moon et al. (2020) contain annotations for a few chit-chat dialogue acts, but they are limited to light social greetings (e.g., Thank you!, Good Bye.) typically at the end of each dialogue session.", "Zhao et al. (2017) propose to artificially augment task-oriented dialogues with randomly sampled utterances from a chit-chat corpus, mainly to improve the out-of-domain recovery performance.", "Akasaki and Kaji (2017) annotate user utterances with chat/non-chat binary labels.", "Still, they do not study the contextual combination of these two types of utterances to make conversations more engaging, and their corpus does not contain goal labels like typical task-oriented dialogue corpora.", "[Table 5: Statistics of dialogue datasets (Dataset; Construction Method; # Dialogues; Task-Oriented; Chit-Chat): DSTC2 (Henderson et al., 2014a), crowdsourcing, 3,235, yes, no; MultiWOZ 2.1 (Eric et al., 2020), crowdsourcing, 10,438, yes, no; Schema-Guided Dialogue (Rastogi et al., 2020), crowdsourcing, 22,825, yes, no; SIMMC (Moon et al., 2020), crowdsourcing, 12,948, yes, no; PersonaChat (Zhang et al., 2018), crowdsourcing, 10,907, no, yes; Wizard of Wikipedia (Dinan et al., 2019), crowdsourcing, 22,311, no, yes; EmpatheticDialogues (Rashkin et al., 2019), crowdsourcing, 24,850, no, yes; BlendedSkillTalk (Smith et al., 2020), crowdsourcing, 6,808, no, yes; Pushshift Reddit (Baumgartner et al., 2020), crawling & scraping, 651,778,198 (regarding each thread, i.e., a post and its comments, as a dialogue), no, yes; ACCENTOR-SGD (this work), crowdsourcing, 22,825, yes, yes; ACCENTOR-MultiWOZ (this work), crowdsourcing, 997, yes, yes.]", "In contrast, our work drastically increases the diversity and contextual coverage of chit-chat additions for any task-oriented dialogue corpus (e.g., It's a great way to kick off the summer!, I hear it's beautiful.).",
).", "Compared with other approaches of creating a high-quality dialogue corpus (e.g., via human-to-human Wizard-of-Oz collection (Eric et al., 2020), dialogue self-play and paraphrase (Shah et al., 2018)), the annotation cost of the proposed model-based dialogue generation approach combined with the quality control mechanisms is lower, as our work does not involve authoring new sentences by human annotators.", "Over the past few years, neural models have achieved remarkable success in the development of the main components of task-oriented dialogue systems, including understanding user intent, tracking dialogue states, determining system actions, and generating system responses (Henderson et al., 2013; Sun et al., 2014; Wen et al., 2015; Liu and Lane, 2016; Mrkic et al., 2017; Wen et al., 2017; Nouri and Hosseini-Asl, 2018; Heck et al., 2020; Chen et al., 2020).", "Recently, connecting separate components and building end-to-end task-oriented neural dialogue systems have attracted increasing interest (Bordes et al., 2017; Peng et al., 2020b).", "The most recent thread is to unify all components in a single end-to-end neural model by fine-tuning a pre-trained deep language model on multiple tasks, which leads to state-of-the-art performance (Hosseini-Asl et al., 2020; Peng et al., 2020a).", "We follow this thread and further enhance the ability to generate appropriate non-task-oriented add-ons, on top of the ability to achieve functional goals that existing systems are typically narrowly tailored to.", "A few work have studied training a dialogue model leveraging multiple chit-chat and task-oriented dialogues (Madotto et al., 2019, 2020), which allows the model to attend on a relevant task for a given user utterance and respond accordingly, thus increasing the skill coverage of the model.", "Our proposed models are trained on the newly collected ACCENTOR-SGD dataset with the turn-level supervision signals, allowing for contextual and flexible code-switching between chit-chat and functional tasks in a single system turn.", "We propose adding chit-chat to enhance task-oriented dialogues ( ACCENTOR ) in this study.", "We present a general Human AI collaborative data construction approach for ACCENTOR , with which we create a dataset consisting of 23 .", "8 K chit-chat augmented task-oriented dialogues.", "We show via human evaluation that chit-chat augmented dialogues are preferred than the unaugmented.", "In addition, we propose three models for ACCENTOR .", "Evaluation results show that compared with the baseline trained on the original unaugmented data, our proposed models trained on the chit-chat augmented counterpart achieve a similar task performance level and higher human evaluation scores.", "We thank Gerald Demeunynck for helping with the data annotation process.", "We would also like to thank the anonymous NAACL reviewers for their constructive and insightful feedback." ]
[ "abstain", "objective", "objective", "objective", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "objective", "method", "objective", "abstain", "result", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "objective", "objective", "other", "abstain", "objective", "objective", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "result", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain", "method", "other", "other", "other", "other", "method", "other", "objective", "other", "other", "other", "other", "objective", "other", "objective", "objective", "method", "abstain", "objective", "objective", "objective", "other", "other" ]
[ "Cross-lingual transfer has improved greatly through multi-lingual language model pretraining, reducing the need for parallel data and increasing absolute performance.", "However, this progress has also brought to light the differences in performance across languages.", "Specifically, certain language families and typologies seem to consistently perform worse in these models.", "In this paper, we address what effects morphological typology has on zero-shot cross-lingual transfer for two tasks: Part-of-speech tagging and sentiment analysis.", "We perform experiments on 19 languages from four language typologies (fusional, isolating, agglutinative, and introflexive) and find that transfer to another morphological type generally implies a higher loss than transfer to another language with the same morphological typology.", "Furthermore, POS tagging is more sensitive to morphological typology than sentiment analysis and, on this task, models perform much better on fusional languages than on the other typologies.", "Cross-lingual transfer uses available annotated resources in a source language to learn a model that will transfer to a target language.", "Earlier work used machine translation (Mihalcea et al., 2007), parallel data (Pado and Lapata, 2009), or delexicalized models (Zeman and Resnik, 2008; McDonald et al., 2011; Sgaard, 2011) to bridge the gap between languages.", "However, recent improvements (Devlin et al., 2019) have reduced the need for parallel data, instead relying on multi-lingual language models, trained on the concatenation of monolingual corpora.", "Fine-tuning these multilingual language models on a task in a source language can lead to strong performance when applied directly to the target-language task (zero-shot transfer).", "This progress has uncovered gaps in performance, as transfer is generally easier between similar languages, and some language families consistently perform worse (Artetxe et al., 2020; Conneau et al., 2020a).", "So far, however, the analysis of these differences has only been anecdotal, rather than centered as a research question of its own merit.", "For these cases, linguistic typology has important implications, as it gives us ways to quantify the similarity of languages along certain variables, such as shared morphological or syntactic features (Bender, 2013).", "While previous work has studied the effects of morphological typology on language modeling (Gerz et al., 2018; Cotterell et al., 2018; Mielke et al., 2019), this effect on cross-lingual transfer has not been looked at in detail.", "In this paper we attempt to answer (RQ1) to what degree morphological typology affects the performance of state-of-the-art cross-lingual models, (RQ2) whether morphological typology has a stronger effect than other variables, e.g., the amount of data for pretraining the LM or domain mismatches between source and target, (RQ3) whether there is a different effect on a low-level structural task (POS tagging) vs. 
"To answer these questions we experiment with two state-of-the-art cross-lingual models: multilingual BERT and XLM-RoBERTa.", "We fine-tune the models for part-of-speech tagging and sentiment analysis on 19 languages from four morphologically diverse typologies.", "Our results show that POS tagging is more sensitive to morphological typology than sentiment analysis and that the models perform much better on fusional languages, such as German, than on the other typologies.", "We release the code and data in order to reproduce the experiments and facilitate future work in this area.", "Cross-lingual transfer has become ubiquitous in recent years, including cross-lingual POS tagging (Täckström et al., 2013; Huck et al., 2019) and cross-lingual sentiment analysis (Mihalcea et al., 2007; Balahur and Turchi, 2014; Barnes and Klinger, 2019).", "While earlier research focused on annotation projection (Yarowsky et al., 2001; Banea et al., 2008) or cross-lingual embeddings (Kim et al., 2017; Artetxe et al., 2017; Barnes et al., 2018b), multi-lingual pretraining currently leads to state-of-the-art results (Devlin et al., 2019; Lample and Conneau, 2019).", "These approaches rely on training transformer-based language models (Vaswani et al., 2017) on unlabeled data from multiple languages, while using careful data selection methods to avoid the over-representation of larger languages.", "Although these approaches have led to large improvements on many cross-lingual tasks, it is clear that the success of zero-shot cross-lingual transfer depends on the typological similarity of the source and target language (Conneau et al., 2020b; Libovický et al., 2020).", "Pires et al. (2019) find that POS performance correlates with word order features taken from the World Atlas of Language Structures (WALS) database (Dryer and Haspelmath, 2013).", "Similarly, morphologically complex languages tend to achieve poorer performance (Artetxe et al., 2020; Conneau et al., 2020a).", "Similar to this work, Lauscher et al. (2020) perform zero-shot and few-shot transfer on 20 languages and 5 tasks.",
"However, their choice of languages does not allow one to answer what the effect of morphological typology is.", "The effect of morphological typology on NLP tasks is well known (Ponti et al., 2019), with several dedicated workshop series (Nicolai et al., 2020; Zampieri et al., 2018).", "More recently, attention has turned to larger-scale analyses of morphological typology effects on language modeling (Gerz et al., 2018; Cotterell et al., 2018; Mielke et al., 2019).", "In contrast to these previous works, we are interested in how morphological typology affects cross-lingual transfer for two supervised tasks, namely part-of-speech (POS) tagging and sentiment analysis.", "We choose these two tasks because 1) they both have data available in typologically diverse languages, and 2) they represent a lower-level structural and a higher-level semantic task, respectively.", "Our experimental setup reduces some of the complexity of comparing test results across languages, as we compare relative, instead of absolute, differences.", "At the same time, it is necessary to take into account several other variables, i.e., presence of the language in pretraining, the amount of training data, the effect of byte-pair tokenization, the length of train and test examples, and any domain mismatches across languages.", "Although it is a simplification of the variation in morphological features (Plank, 1999), languages have traditionally been grouped into four morphological categories: isolating, fusional, introflexive, and agglutinative.", "These categories describe a language's tendency to group concepts together into a single word or disperse them into separate words.", "Pure isolating languages have maximally one morpheme per word.", "In agglutinative languages, morphemes tend to be neatly segmentable and carry a single feature, whereas in fusional languages a single morpheme often carries multiple grammatical, syntactic, and semantic features.", "Finally, in introflexive languages root words are based on consonant stems, where vowels introduced around and between them lead to syntactic and semantic changes (see Plank (1999), Bickel and Nichols (2005), and Gerz et al. (2018) for a more in-depth discussion).", "We select five languages from each category except introflexive (four), shown in Table 1.", "A short example sentence in a fusional (Norwegian, no), isolating (Indonesian, in), agglutinative (Basque, eu), and introflexive (Maltese, mt) language, with glosses and an English translation, is shown in Example 1.", "(1) no: Buss-en (bus-DEF.ART) kom (come:PERF) sen-t (late-ADV); in: Bus (bus) itu (that) datang (come) terlambat (late); eu: Autobus-a (bus-DEF.ART) berandu (late) etorri (come:PCP) zen (PRT.3S); mt: Ix-xarabank (DEF.ART-bus) waslet (come:PERF) tard (late). Translation: 'The bus came late.'", "We obtain the data for the part-of-speech tagging task from the Universal Dependencies project (Zeman et al., 2020), which currently gathers data annotated with universal POS tags for more than 90 languages, although there are differences in size and domain.", "For Algerian we use the annotations from Seddah et al. (2020).",
(2020).", "We found no training sets available for Thai and Cantonese, hence we use them for testing only.", "For more details on these datasets, see Table 5 in the Appendix.", "For sentiment analysis, however, there is no centralized repository of similar data.", "Therefore, we collect data from a number of sources and process 3 Including https://github.com/ dimitrakatseli/review_sentiment_analysis 4 https://github.com/ljw9609/ SentimentAnalysis 5 https://github.com/e9t/nsmc 6 https://github.com/Darkmap/japanese_ sentiment 7 Including https://github.com/ozturkaslii/ analyze-turkish-sentiment them to create binary (positive, negative) sentence-level sentiment datasets.", "For convenience, we list the origin of each dataset in Table 2 and their full characteristics in Table 6 in the Appendix.", "We fine-tune both multilingual BERT (mBERT) (Xu et al., 2019) and XLM RoBERTa (XLM-R) (Conneau et al., 2020a) models on the available training data in each language, using a shared set of hyperparameters selected from recommended values according to the characteristics of our data.", "We set the learning rate to 2e-5, maximum sequence length of 256, batch size of 8 or 16 8 , and perform early stopping once the validation score has not improved in the last epochs, saving the model that performs best on the dev set.", "We then test each model on all languages, giving us a matrix of test scores, where the diagonal is in-language, and all others are cross-lingual.", "We use accuracy as our metric for POS and macro F 1 for sentiment, as the latter often contains unbalanced classes, and define 8 Depending on the size of the training set, model architecture and available GPU memory.", "a baseline as the result of predicting the majority class.", "Once our scores matrix is built, we average 9 the score of each fine-tuned model, which we refer to as language-to-language cross-lingual scores, over the other languages in each morphological group, thus obtaining each model's average cross-lingual performance per target group ( language-to-group cross-lingual scores).", "Next, we average again for each source language group.", "This yields the average cross-lingual performance values per training and testing language groups ( group-to-group cross-lingual scores), which we report in Table 3.", "In the part-of-speech task, the best group-to-group cross-lingual performance always corresponds to models fine-tuned in a language of 9 Note that, throughout this paper, when we average across morphological groups, we do so with a weighted average so that all groups are equally represented regardless of how many languages they include.", "the same morphological group, regardless of the model's architecture.", "Fusional models, in particular, obtain a remarkably higher score when tested on other fusional languages (over 80%).", "On the other hand, the group-to-group cross-lingual scores where the target language is introflexive are considerably lower than the rest (always below 50%).", "In contrast, both model architectures show different patterns in the sentiment analysis task.", "For the XLM-R models, the best group-to-group cross-lingual scores are all achieved by those trained on a fusional language, while for the mBERT it is mainly models trained on an isolating language that achieve the best scores.", "In any case, all scores are within a similar range of values.", "In fact, the main difference in this task seems to be due to XLM-R's considerably higher scores.", "In order to capture the cross-lingual phenomenon more 
accurately, we introduce transfer loss , a relative metric defined in Equation 1: T L x y = S x x S x y (1) where T L x y is the transfer loss experienced by a model fine-tuned in language x when transferring to language y ( language-to-language transfer loss) and S x y is the score 10 achieved when testing a model fine-tuned in language x on language y .", "Thus, it is a measure of the performance lost in the zero-shot transfer process: the better the transfer between both languages, the lower it will be.", "We also define its averaged variants: T L x A = S x x 1 NA (cid:88) i A i (cid:54) = x S x i (2) T LA B = 1 NA (cid:88) i AT L i B (3) where T L x A denotes the average transfer loss from language x to languages belonging to morphological type A ( language-to-group transfer loss), T LA B refers to the average transfer loss experienced by languages from morphological group A to languages from group B ( group-to-group transfer loss) and NA is the number of languages (other than x ) included in the experiment that belong to group A .", "Table 4 shows the resulting group-to-group transfer loss values for each task.", "10 The score metric will depend on the task: accuracy in POS and macro F 1 in sentiment analysis.", "Models fine-tuned in all groups except agglutinative experience the lowest performance drop when transferring to fusional languages in the part-of-speech task, whereas in the sentiment analysis task there is no clear pattern.", "It is also worth noting that the XLM-R models tend to transfer better compared to mBERT, only slightly in part-of-speech tagging but more drastically in sentiment analysis.", "Additionally, the cases of worst transfer happen when the target language is introflexive (especially for XLM-R).", "Next, to address RQ1 more directly, we compare two different types of transfer: intra-group transfer, where both the fine-tuning and target languages belong to the same morphological group, and inter-group transfer, where the two differ in morphological type.", "We calculate an average for both types of transfer and for each training group, model architecture and task.", "We present the resulting values in Figure", "1. Generally, transfer to another morphological type implies a higher cost in terms of performance, except for the introflexive models.", "This difference in transfer loss appears to be similar for all groups in the sentiment task, yet it varies considerably in the part-of-speech task.", "More specifically, there are two extremes in this latter case: fusional models suffer large performance drops when switching morphological groups, whereas isolating models experience similar transfer losses in both conditions.", "Finally, we average again to obtain a single transfer loss value for each task and model, and use it to establish a comparison in Figure", "2. 
Here we observe that: (1) the difference in transfer loss between an intra-group and inter-group transfer is higher on the part-of-speech task, (2) transfer is also generally worse on this task 11 , (3) XLM-R models perform better cross-lingual transfers in general (especially on the sentiment analysis task), and (4) the difference between intra-group and inter-group transfer is similar on both model architectures.", "In this section, we run several statistical tests to verify our conclusion to RQ1 and detail several points of analysis that relate to RQ2 and RQ3.", "Namely, to what degree do other variables contribute to effects on cross-lingual transfer.", "We run a set of statistical tests to validate the observations made from Figure 2 in Section 5.", "In the part-of-speech tagging task, an analysis of variance (ANOVA) reveals there is a statistically significant, although weak, difference in transfer loss between the intraand inter-group conditions, for both model architectures ( 2 0 . 06 , p < 0 . 01 in both cases).", "In contrast, a Kruskal-Wallis analysis of variance 12 finds no significant difference 11 Strictly speaking, we use different metrics for both tasks, which are not necessarily comparable.", "between the two types of transfer in the sentiment analysis task, in neither mBERT or XLM-R models ( p > 0 . 01 in both cases).", "We also test for differences in transfer loss between model architectures and find a significant difference in the sentiment analysis task (Kruskal-Wallis, p < 0 . 01 ), but not in the part-of speech tagging task (ANOVA, p > 0 . 01 ).", "This is all consistent with our previous observations.", "Additionally, we model language-to-language transfer loss with a linear regression model, using transfer type, as well as other variables, as possible predictors.", "This allows us to", "(a) test whether the intra-/inter-group difference retains its statistical significance in the presence of other variables and", "(b) evaluate its effect in comparison to other predictors.", "First, we select a set of variables that might be relevant in cross-lingual transfer, and remove those that are highly correlated with the rest to avoid multicollinearity in the model (see Table 7 in the Appendix for the final list of selected variables).", "We standardize all of the remaining features so that their units are comparable and, consequently, so are their regression coefficients.", "Again, we find transfer type (intra-/inter-group) to be a significant predictor in both regression models for part-of-speech tagging ( p < 0 . 
01 ), but not in sentiment analysis.", "In the former case, it has the second strongest effect with a standardized coefficient of 8.6 13 , the first being presence of the target language in pretraining with a coefficient of -25.9.", "In other words, transferring to a language on which the model has not been pretrained implies an additional performance drop of 25.9 percentage points, while transferring to another morphological group incurs an additional 8.6.", "The remaining predictors for this task are average test example length (measured in tokens, coefficient of 4.0) and in-language score (3.3).", "The first is a complex variable because differences in text length can be due to their domain or to the lan-13 Since the regression models for mBERT and XLM-R are quite similar, we report the averaged coefficients here.", "guages themselves but, in either case, its coefficient confirms our intuition that longer sequences generally make the task more difficult.", "The second could indicate some overfitting to the fine-tuning language, as higher in-language score entails slightly poorer transfer.", "XLM-R adds another predictor: the proportion of words that have been split into subword tokens in the test data (2.1).", "This variable is related to the size of the pretraining corpus for each language 14 : a richer pretraining vocabulary will ensure more words are considered frequent during Byte Pair Encoding and, therefore, assigned a single token, instead of being broken down into subword tokens by the tokenizer.", "This means that high-resource languages will have a lower word split probability and, hence, it will be slightly easier to transfer to them.", "However, it is worth pointing out that this bias has little effect and is only statistically significant in XLM-R.", "In the case of sentiment analysis, relevant predictors are: presence of the fine-tuning (coefficient of -11.8 for mBERT and -18.7 for XLM-R) and target (-10.3 and -16.3) languages in pretraining, in-language score (6.8 and 6.5), proportion of words split into subword tokens in the training data (3.3 and 2.7) and proportion of examples labeled as positive in the test set (-2.8, XLM-R only).", "Curiously, sentiment analysis is more sensitive to variables related to the training data compared to part-of-speech tagging, whereas sequence length only affects the latter.", "On the other hand, language inclusion in pretraining and in-language score are useful predictors in both tasks, yet the former is far stronger in POS and the latter is more relevant in sentiment analysis.", "In summary, we verify that transferring to a different morphological type has a relevant effect in part-of-speech tagging but not in sentiment analysis, regardless of the model architecture.", "Given the considerable effect pretraining seems to have on transfer loss (discussed in Section 6.2), we re-evaluate our results after removing the languages that were not present during the pretraining of either of the two model architectures (Cantonese, Algerian and Maltese) and check whether there are relevant differences with our previous results.", "Of course, we observe an improvement in cross-lingual scores involving either an isolating or an introflexive language, because these are the groups the excluded languages belong to.", "Overall, however, re-running the statistical tests does not modify our previous conclusions (see Figure 3).", "Since in-language score is relevant in all regression models considered in 6.2 (and the value of transfer loss is relative to 
"The intra-/inter-group difference in transfer loss is still statistically significant in part-of-speech tagging and not in sentiment analysis.", "Similarly, there is still a statistically significant difference in transfer loss between both models only in the sentiment analysis task.", "All of this can be seen in Figure 3.", "The only remarkable difference is in the part-of-speech task, where the average inter-group transfer loss values for all morphological groups seem to converge to the same value (see Figure 5 in the Appendix).", "For more information, see Figures 5 and 6, as well as Tables 8 and 9, all of which can be found in the Appendix.", "We also test the effect that training with considerably more data has on cross-lingual transfer.", "We select two languages, each with around 150,000 examples available: German for the part-of-speech tagging task and Korean for sentiment analysis.", "We train four models with increasingly more data and then test them on all languages.", "In German, we notice a substantial decline in cross-lingual scores when increasing the data size from 80,000 to 150,000 examples (see Figure 4).", "More specifically, in mBERT models there is an average decrease of 15.6 and 9.0 points when the cross-lingual transfer is intra- and inter-group, respectively.", "In XLM-R, the corresponding values are 25.4 and 19.5.", "Hence, it appears that a phenomenon of language specialization takes place, one to which XLM-R is more susceptible and that has more pronounced consequences in intra-group transfer.", "To ensure this is a language and not a domain/dataset specialization, we test these models on another German dataset (PUD) and find no decrease in performance.", "In contrast, average Korean cross-lingual scores remain relatively constant (see Figure 4).", "Therefore, the language specialization phenomenon could be more characteristic of part-of-speech tagging than sentiment analysis.", "Conneau et al. (2020b) find that domain mismatch in pretraining of multilingual LMs is more problematic than domain mismatch in fine-tuning.", "Yet given the variety of domains present in the sentiment data, we decided to test its effect.", "Proxy A-distance (Glorot et al., 2011) measures the generalization error of a linear SVM trained to discriminate between two domains.", "We translate 1,000 sentences from each dataset to English using Google Translate and then compute the proxy A-distance (implementation adapted from the code available at https://github.com/rpryzant/proxy-a-distance).", "For POS tagging, there are small but insignificant negative effects of proxy A-distance on results for both models (a Pearson coefficient of -0.07, p > 0.01 and -0.07, p > 0.01 for mBERT and XLM-R, respectively).", "On the sentiment task, there is no significant domain effect for mBERT (-0.06, p > 0.01), while there is a small negative effect for XLM-R (-0.27, p < 0.01).",
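A sketch of the proxy A-distance computation, assuming TF-IDF features and scikit-learn; the referenced repository may differ in preprocessing details. A held-out domain-classification error of ε maps to a distance of 2(1 − 2ε):

```python
# Sketch of proxy A-distance (Glorot et al., 2011): train a linear SVM
# to discriminate two domains and convert its held-out error into a
# distance. Inputs are two lists of (translated) sentences.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def proxy_a_distance(texts_a, texts_b):
    texts = list(texts_a) + list(texts_b)
    labels = np.array([0] * len(texts_a) + [1] * len(texts_b))
    features = TfidfVectorizer(min_df=2).fit_transform(texts)
    accuracy = cross_val_score(LinearSVC(), features, labels, cv=5).mean()
    error = 1.0 - accuracy
    # Perfectly separable domains give 2.0; indistinguishable ones give 0.
    return 2.0 * (1.0 - 2.0 * error)
```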
"This suggests that most of the transfer loss is not due to domain mismatch.", "In this paper, we have conducted an extensive analysis of the effects of morphological typology on cross-lingual transfer and attempted to isolate these factors from other variables.", "We have compared the performance of two state-of-the-art zero-shot cross-lingual models on two tasks (part-of-speech tagging and sentiment analysis) for 19 languages across four morphological typologies.", "We have found that transfer to another morphological type generally implies a higher performance loss than transfer to another language with the same morphological typology.", "Additionally, part-of-speech tagging is more sensitive to morphological differences than sentiment analysis, while sentiment analysis is more sensitive to variables related to the fine-tuning data and is less predictable in general.", "We have tested this sensitivity to morphology after balancing other influential factors, such as in-language score, and, still, the intra-/inter-group difference remains.", "However, the effect of morphological typology, while significant, is not strong, given that most of the variability in transfer loss is due to other factors.", "We have also confirmed that XLM-R generally transfers better than mBERT, especially on sentiment analysis.", "In part-of-speech tagging, we have reported considerably better transfer within fusional languages, as well as easier transfer from the other groups towards the fusional type.", "Moreover, we have found a case that suggests that fine-tuning on large training sets might lead to language specialization and, consequently, be detrimental to cross-lingual transfer.", "It is worth noting that we do not explore whether the type of script used by the languages has an effect on cross-lingual transfer.", "This is hard to control in our experimental setup, as some scripts are either unique to a language or have only one language with enough data to represent them, making it impossible to make comparisons.", "The recent cross-lingual suite Xtreme (Hu et al., 2020) includes a number of benchmark tasks in 40 languages.", "While this dataset is a useful collection of cross-lingual tasks, it is unfortunately not sufficient for our purposes.", "The POS data is the same as we use, while other tasks either", "a) do not contain a representative sample of language typologies,", "b) use translation, introducing problems of 'translationese', or", "c) consist of automatically created rather than manually curated Named Entity Recognition data.", "Our experimental setup avoids these problems by focusing on binary sentiment analysis, which is a task that has data available in many languages and does not require translation to get multilingual data.", "Finally, this work ties in with the increasing interest in typological questions in NLP (Takamura et al., 2016; Ponti et al., 2019; Bjerva et al., 2019; Nooralahzadeh et al., 2020; Bjerva and Augenstein, 2021), which often tries to directly predict typological features, or uses these to analyze model performance.", "In the future, it would be interesting to train multilingual language models on specific language families in order to find maximal benefits from shared morphology.", "Finally, as typology seems to affect tasks differently, it would be interesting to explore other tasks, e.g., dependency parsing or semantic role labeling." ]
[ "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "objective", "method", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "other", "abstain", "abstain", "method", "abstain", "result", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain" ]
[ "Bragging is a speech act employed with the goal of constructing a favorable self-image through positive statements about oneself.", "It is widespread in daily communication and especially popular in social media, where users aim to build a positive image of their persona directly or indirectly.", "In this paper, we present the first large scale study of bragging in computational linguistics, building on previous research in linguistics and pragmatics.", "To facilitate this, we introduce a new publicly available data set of tweets annotated for bragging and their types.", "We empirically evaluate different transformer-based models injected with linguistic information in", "(a) binary bragging classification, i.e., if tweets contain bragging statements or not; and", "(b) multi-class bragging type prediction including not bragging.", "Our results show that our models can predict bragging with macro F1 up to 72.42 and 35.95 in the binary and multi-class classification tasks respectively.", "Finally, we present an extensive linguistic and error analysis of bragging prediction to guide future research on this topic.", "1 1 Introduction The desire to be viewed positively is a key driver of human behavior (Baumeister, 1982; Leary and Kowalski, 1990; Sedikides, 1993; Tetlock, 2002) and creating a positive image often leads to personal rewards (Gilmore and Ferris, 1989; Hogan, 1982; Schlenker, 1980).", "Self-presentation strategies are means for individuals to build and establish this positive social image to meet their goals (Goffman et al., 1978; Jones et al., 1982; Jones, 1990; Bak et al., 2014a).", "Bragging (or self-praise) is one of the most common strategies and involves disclosing a positively valued attribute about the speaker or their in-group (Dayter, 2014, 2018).", "Social media platforms tend to promote self-presentation tendencies (Chen et al., 2016) and allow users to craft an idealized self-image of themselves (Chou and Edge, 2012; Michikyan et al., 2015; Halpern et al., 2017).", "Self-presentation online is predominantly positive (Chou and Edge, 2012; Lee-Won et al., 2014; Matley, 2018).", "Furthermore, self-promotion is acceptable and even desired in certain online contexts (Dayter, 2018).", "This is also amplified by social media platforms through the presence of likes or positive reactions to users' posts (Reinecke and Trepte, 2014) which often are used to quantify impact on the platform (Lampos et al., 2014).", "Bragging in particular was found to be more frequent on social media than face-to-face interactions (Ren and Guo, 2020).", "However, bragging is considered a high risk act (Brown and Levinson, 1987; Holtgraves, 1990; Van Damme et al., 2017) and can lead to the opposite effect than intended, such as dislike or decreased perceived competence (Jones et al., 1982; Sezer et al., 2018; Matley, 2018).", "It is, thus, paramount to understand the types of bragging and strategies to mitigate the face-threat introduced by bragging as well as how effective the self-presentation attempt is (Herbert, 1990).", "Table 1 shows examples of a non-bragging and bragging statements grouped in six types under a taxonomy that we propose in this paper based on previous linguistic research (Dayter, 2018; Matley, 2018).", "Despite its pervasiveness and importance in online communication, bragging has yet to be studied at scale in computational (socio) linguistics.", "The ability to identify bragging automatically is important for:", "(a) linguists to better understand the context and types of 
"(a) linguists, to better understand the context and types of bragging through empirical studies (Dayter, 2014; Ren and Guo, 2020);", "(b) social scientists, to analyze the relationship between bragging and personality traits, online behavior and communication strategies (Miller et al., 1992; Van Damme et al., 2017; Sezer et al., 2018);", "(c) online users, to enhance their self-presentation strategies (Miller et al., 1992; Dayter, 2018);", "(d) enhancing NLP applications such as intent identification (Wen et al., 2017) and conversation modeling.", "In this paper, we aim to bridge the gap between previous work in pragmatics and the computational study of speech acts.", "Our contributions are: a new publicly available data set containing a total of 6,696 English tweets annotated with bragging and their types; experiments with transformer-based models combined with linguistic features for bragging identification (binary classification) and bragging type classification (seven classes); and a qualitative linguistic analysis of markers of bragging in tweets and of the model behavior in predicting bragging.", "Bragging as a Speech Act Bragging as a speech act is considered a face-threatening act to positive face (i.e. the desire to be liked) under politeness theory (Brown and Levinson, 1987).", "It is directly oriented to the speaker and may threaten their likeability if the bragging is perceived negatively, while it may also affect the hearer's face by implying that their feelings are not valued by the speaker (Matley, 2018).", "Bragging online plays an important role in self-presentation, and its pervasiveness challenges classic politeness theories, such as the modesty maxim (Leech, 2016) and the self-denigration maxim (Gu, 1990).", "Thus, research in social psychology and linguistics has mostly focused on identifying the pragmatic strategies for bragging that mitigate face threat, and their impact on likeability and perceived competence, which the speakers aim to increase with this self-presentation strategy.", "Bragging Strategies Modest and sincere self-presentation styles are more likely to be perceived positively (Sedikides et al., 2007).", "Bragging framed as mere information-sharing, but with a positive connotation for the speaker, can make the speaker be perceived as more likeable (Miller et al., 1992).", "It can also be perceived negatively and cause greater aggression when it involves boasting, elements of competitiveness, use of superlatives and explicit comparisons to others (Miller et al., 1992; Hoorens et al., 2012; Scopelliti et al., 2015; Matley, 2018).", "In addition, competence-related statements are more likely to be negatively perceived than those based on warmth (e.g. the ability to form connections with others) (Van Damme et al., 2017).", "Common mitigation strategies include the speaker's attempts to deny compliments, shifting focus to persons closely related to them, reframing bragging as praise from a third party, admitting the bragging act through disclaimers (e.g. using #brag) or expressing it as a complaint (Wittels, 2011; Sezer et al., 2018), question, narration or sharing (Dayter, 2018; Matley, 2018; Ren and Guo, 2020).",
"The success of self-presentation strategies is also impacted by the social context (Tice et al., 1995) or speaker identity (Paramita and Septianto, 2021).", "Analysis of Bragging Bragging has been studied in the context of a small ballet community (Dayter, 2014), a pick-up artist forum (Rüdiger and Dayter, 2020) and a small set of WhatsApp conversations (Dayter, 2018).", "On social media, Matley (2018) studied the functional use of hashtags (e.g. #brag, #humblebrag) in Instagram posts, Tobback (2019) examined bragging strategies on LinkedIn, Ren and Guo (2020) investigated bragging and its pragmatic functions in Chinese social media, and Matley (2020) studied the impact of mitigating bragging through irony, showing that such bragging was negatively perceived.", "However, all these studies rely on manual analyses of small data sets (e.g. < 300 posts).", "Speech Acts in NLP Speech acts have been studied in NLP, with examples including politeness (Danescu-Niculescu-Mizil et al., 2013), complaints (Preotiuc-Pietro et al., 2019; Jin and Aletras, 2020, 2021), humor (Yang et al., 2021), parody (Maronikolakis et al., 2020), irony (Bamman and Smith, 2015), deception (Chen et al., 2020) and self-disclosure (Bak et al., 2012; Levontin and Yom-Tov, 2017; Ravichander and Black, 2018).", "Self-disclosure is closer to bragging as it is related to revealing personal information about oneself.", "It is usually employed to improve or maintain relationships (Bak et al., 2012) as measured through conversation frequency (Bak et al., 2014b).", "On the other hand, bragging is about aspects that are positively valued by the audience, with the goal of improving the speaker's self-image.", "Bak et al. (2014a) aim to predict different levels of self-disclosure statements, from general to sensitive, while Wang et al. (2021) examine gender differences in self-promotion by Congress members on Twitter.", "In some cases, bragging also involves possessions (Chinnappa and Blanco, 2018).", "Definition Bragging is a speech act which explicitly or implicitly attributes credit to the speaker for some good (e.g. possession, skill) that is positively valued by the speaker and their audience (Dayter, 2014).", "A bragging statement should clearly express what the author is bragging about.", "Types We generalize and extend the bragging types based on the definitions by Dayter (2018) and Matley (2018).", "The former summarizes them as accomplishments and some aspects of self, while the latter includes everyday achievements (e.g. cooking) and personal qualities.", "We divide the 'some aspects of self' category into two categories, namely 'Possession' and 'Trait' respectively.", "We also add an 'Affiliation' category for bragging involving a group to which the speaker belongs.", "In total, we consider six bragging types and a non-bragging category.", "Table 1 shows the definitions of each type.", "Classification Tasks Given the taxonomy above, we define two classification tasks:", "(i) binary bragging prediction (i.e. if a tweet contains a bragging statement or not); and",
"(ii) seven-way multiclass classification for predicting if a tweet contains one of the six bragging types or no bragging at all.", "To the best of our knowledge, there is no other data set available for our study.", "We use Twitter for data collection as tweets are openly available for research and widely used in other related tasks, e.g. predicting sentiment (Rosenthal et al., 2017), affect (Mohammad et al., 2018), sarcasm (Bamman and Smith, 2015) and stance (Mohammad et al., 2016).", "Random Sampling We select tweets for annotation by randomly sampling from the 1% Twitter feed one day per month from January 2019 to December 2020 (approximately 10k tweets per day) to ensure diversity, using the Premium Twitter Search API for academic research (https://tinyurl.com/2p8wnure).", "Keyword-based Sampling To give a model access to more positive examples of bragging statements for training, we use a keyword-based sampling method that increases the hit rate of bragging, following previous work on labeling infrequent linguistic phenomena, e.g. irony (Mohammad et al., 2018) or hate speech (Waseem and Hovy, 2016).", "We build queries based on indicators of positive self-disclosure (e.g. I, just) (Dayter, 2018) and stylistic indicators, e.g. positive emotion words and present tense verbs (Bazarova et al., 2013).", "As the frequency of these keywords is high, we construct multi-word queries consisting of a personal pronoun and an indicator.", "In addition, we use a short list of curated bragging-related hashtags.", "The full set of queries is: {[I, proud], [I, glad], [I, happy], [I, best], [I, amazed], [I, amazing], [I, excellent], [I, just], [I'm, proud], [I'm, glad], [I'm, happy], [I'm, best], [I'm, amazed], [I'm, amazing], [I'm, excellent], [me, proud], [my, best], #brag, #bragging, #humblebrag, #humble, #braggingrights}.", "After annotating 1,000 tweets, we compute the percentage of bragging tweets for each keyword and remove from sampling the keywords with less than 5% bragging tweets (i.e. [I, amazed], [I'm, amazing], [I'm, best], [my, best], [I, excellent], #humble).", "We initially collected around 6K and 368K tweets using hashtags and multi-word queries respectively.", "We obtain over 9k tweets by keeping all tweets collected using hashtags and sampling 1% of those collected using multi-word queries to balance the two types.",
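The multi-word query matching described above amounts to a simple token-level conjunction; a minimal sketch using an excerpt of the listed queries:

```python
# Sketch of the keyword-based sampling filter: a tweet is kept if it
# contains both parts of any (pronoun, indicator) query, or one of the
# curated hashtags. The query list here is an excerpt of the full set.
QUERIES = [("i", "proud"), ("i", "glad"), ("i", "happy"), ("i", "just"),
           ("i'm", "proud"), ("i'm", "happy"), ("me", "proud"),
           ("my", "best")]
HASHTAGS = {"#brag", "#bragging", "#humblebrag", "#braggingrights"}

def matches_bragging_query(tweet: str) -> bool:
    tokens = set(tweet.lower().split())
    if tokens & HASHTAGS:
        return True
    return any(pron in tokens and ind in tokens for pron, ind in QUERIES)
```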
"Data Filtering After collecting tweets, we exclude those with duplicate or no meaningful textual content (e.g. only @-mentions or images).", "We only focus on English posts and filter out non-English ones using the language code provided by Twitter.", "We also exclude retweets and quoted tweets, as these do not typically express the thoughts of the user who retweeted them.", "Moreover, we exclude 131 tweets containing a URL in the text because these were related to advertisements, based on initial results from our annotation calibration rounds.", "This resulted in a total of 6,696 tweets, which is similar in size to data sets recently released for social NLP (Oprea and Magdy, 2020; Chung et al., 2019; Beck et al., 2021; Mendelsohn et al., 2021).", "We manually annotate tweets to provide a solid benchmark and foster future research.", "All authors of the paper have significant experience in linguistic annotation.", "We run three calibration rounds of 100 tweets each, in which all authors annotated all tweets and discussed disagreements, until a Krippendorff's alpha above 0.80 in the seven-class task was reached.", "To monitor quality, a subset of 1,564 tweets was annotated by two annotators, or more in case of disagreements.", "If a tweet fits into multiple bragging types, we assign the more prominent one (for example, we annotate 'New car New crib New barbershop 20 years young' as 'Possession' because the bragging is mostly about possessions: crib, car, barbershop).", "The annotation is based only on the actual text of the tweet, without considering additional modalities (e.g. images), context or replies.", "This is similar to the information available to predictive models during training.", "We selected the final label as the majority vote; in cases of three different votes, a final label was assigned after consensus.", "The full task guidelines, examples and interface are presented in Appendix B.", "The inter-annotator agreement between two annotations of all tweets is:", "(a) percentage agreement: 89.03;", "(b) Krippendorff's alpha (Krippendorff, 2011) (7-class): 0.840;", "(c) Krippendorff's alpha (binary): 0.786.", "Agreement values are between the upper part of the substantial agreement band and the perfect agreement band (Artstein and Poesio, 2008).", "The final data set consists of 6,696 tweets, each with one of the seven classes.", "Before annotation, the keyword-based and randomly sampled tweets were shuffled so as not to induce frequency bias.", "Data set statistics are shown in Table 2, including statistics across the two sampling strategies.", "The model performance curve obtained by varying the training set size indicates that annotating more data is not likely to lead to substantial improvements in bragging prediction (see Figure 3 in the Appendix).", "We conduct an analysis of the relationship between self-disclosure and bragging, as they are closely related.", "We use the self-disclosure lexicon by Bak et al. (2014a) to assign each tweet in our data set a label (i.e. self-disclosure or non-self-disclosure).",
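The agreement statistics reported above can be computed with off-the-shelf tooling; a sketch assuming the `krippendorff` PyPI package and an annotators-by-items matrix of integer-coded labels:

```python
# Sketch of the inter-annotator agreement computation; `matrix` is
# annotators x items, with np.nan for items an annotator did not label.
import numpy as np
import krippendorff

def agreement_stats(matrix: np.ndarray):
    alpha = krippendorff.alpha(reliability_data=matrix,
                               level_of_measurement="nominal")
    # Raw percentage agreement over items labeled by the first two annotators.
    a, b = matrix[0], matrix[1]
    mask = ~np.isnan(a) & ~np.isnan(b)
    pct = 100.0 * np.mean(a[mask] == b[mask])
    return alpha, pct
```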
"The percentages of self-disclosure across each bragging type are shown in Table 3 (self-disclosure % / non-self-disclosure %): Bragging 31.63 / 68.37; Non-bragging 24.04 / 75.96; Achievement 31.65 / 68.35; Action 27.57 / 72.43; Feeling 31.82 / 68.18; Trait 36.69 / 63.31; Possession 29.07 / 70.93; Affiliation 35.29 / 64.71; Total 24.93 / 75.07.", "We also experimented with using self-disclosure models as a predictor for bragging.", "(We experimented with training models using the subset annotated by a single annotator compared to multiple annotators and find no significant differences; see Appendix A.)", "We use the keyword-sampled data for training and the random data for development and testing (in the ratio of 2:8), because the latter is representative of the real distribution of tweets (see Table 2).", "We evaluate vanilla transformer-based models (Vaswani et al., 2017) and further leverage external linguistic information to improve them.", "BERT, RoBERTa and BERTweet We experiment with Bidirectional Encoder Representations from Transformers (BERT; Devlin et al. (2019)), RoBERTa (Liu et al., 2019) and BERTweet (Nguyen et al., 2020).", "RoBERTa is a more robust variant of BERT that obtains better results on a wide range of tasks.", "BERTweet is pretrained on English tweets using RoBERTa as its basis and achieves better performance on Twitter tasks (Nguyen et al., 2020).", "We fine-tune BERT, RoBERTa and BERTweet for binary and multiclass bragging prediction by adding a classification layer that takes the [CLS] token as input.", "BERTweet with Linguistic Features We inject linguistic knowledge that could be related to bragging into the BERTweet model with a method similar to that proposed by Jin and Aletras (2021), which was found to be effective on complaint severity classification, a related pragmatics task.", "(Early experimentation with simply concatenating or applying attention resulted in lower performance.)", "The method is adapted from Rahman et al. (2020), which integrates multimodal information (e.g. audio, visual) in transformers using a fusion mechanism called Multimodal Adaptation Gate (MAG).", "MAG integrates multimodal information into text representations in transformer layers using an attention gating mechanism that controls the influence of each modality.", "We first expand vectors of linguistic information to a size comparable to the embeddings fed to the pre-trained transformer.", "We then use MAG to concatenate contextual and linguistic representations after the embedding layer of the transformer, similar to Rahman et al. (2020).", "The output is sent to a pre-trained BERTweet encoder for fine-tuning, followed by an output layer.", "NRC: The NRC word-emotion lexicon contains a list of English words mapped to ten categories related to emotions and sentiment (Mohammad and Turney, 2013).", "We represent each tweet as a 10-dimensional vector where each element is the proportion of tokens belonging to each category.", "LIWC: Linguistic Inquiry and Word Count (Pennebaker et al., 2001) is a dictionary-based approach to count words in linguistic, psychological and topical categories.", "We use LIWC 2015 to represent each tweet as a 93-dimensional vector.", "Clusters: We use the Word2Vec clusters proposed by Preotiuc-Pietro et al. (2015) to represent each tweet as a 200-dimensional vector over thematic subjects.", "Text Processing We pre-process text by lower-casing, replacing all username mentions with the placeholder token @USER and replacing emojis with words using demojize (https://pypi.org/project/emoji/).", "We also remove hashtags that are used as keywords in data collection (e.g. #brag).",
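A minimal PyTorch sketch of an MAG-style gate, as described above, for injecting a tweet-level linguistic feature vector into the token embeddings before the encoder; this illustrates the mechanism and is not the authors' exact implementation:

```python
# Sketch of an MAG-style fusion layer (after Rahman et al., 2020):
# project the linguistic features, gate them against the token
# embeddings, and add them back before the transformer encoder.
import torch
import torch.nn as nn

class LinguisticGate(nn.Module):
    def __init__(self, hidden_size: int, feature_dim: int):
        super().__init__()
        self.project = nn.Linear(feature_dim, hidden_size)   # expand features
        self.gate = nn.Linear(2 * hidden_size, hidden_size)  # gating weights
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, embeddings: torch.Tensor, features: torch.Tensor):
        # embeddings: (batch, seq_len, hidden); features: (batch, feature_dim)
        feats = self.project(features).unsqueeze(1).expand_as(embeddings)
        gate = torch.sigmoid(self.gate(torch.cat([embeddings, feats], dim=-1)))
        # Gated shift of the contextual embeddings, then renormalize.
        return self.norm(embeddings + gate * feats)
```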
"Finally, we tokenize the text using TweetTokenizer from NLTK (https://www.nltk.org/api/nltk.).", "Baselines Majority Class: As a first baseline, we label all tweets with the label of the majority class.", "LR-BOW: We train a Logistic Regression with bag-of-words features using L2 regularization.", "BiGRU-Att: We also train a bidirectional Gated Recurrent Unit (GRU) network (Cho et al., 2014) with self-attention (Tian et al., 2018).", "Tokens are first mapped to GloVe embeddings (Pennington et al., 2014) and then passed to a bidirectional GRU.", "Subsequently, its output is passed to a self-attention layer and an output layer for classification.", "Hyperparameters For BiGRU-Att, we use 200-dimensional GloVe embeddings (Pennington et al., 2014) pre-trained on Twitter data.", "The hidden size is h = 128, with h ∈ {64, 128, 256, 512}, and dropout d = 0.2, with d ∈ {0.2, 0.5}.", "We use the Adam optimizer (Kingma and Ba, 2015) with learning rate l = 1e-2, with l ∈ {1e-3, 1e-2, 1e-1}.", "For BERT, RoBERTa and BERTweet, we use the base cased models (12 layers and 109M parameters, 12 layers and 125M parameters, and 12 layers and 135M parameters, respectively) and fine-tune them with learning rate l = 3e-6, with l ∈ {1e-4, 1e-5, 5e-6, 3e-6, 1e-6}.", "For BERTweet with linguistic features, we project these to vectors of size l_NRC = 200, l_LIWC = 400 and l_Clusters = 768, with l ∈ {10, 93, 200, 400, 600, 768}.", "For MAG, we use the default parameters from Rahman et al. (2020).", "For multi-class classification, we apply class weighting due to the imbalanced data and set the number of training epochs to n = 40, with n ∈ {15, 20, 25, 30, 35, 40, 45, 50, 55, 60}.", "The maximum sequence length is set to 50, covering 95% of tweets in the training set.", "We use a batch size of 32.", "Training and Evaluation We train each model three times using different random seeds and report the mean Precision, Recall and F1 (macro).", "We apply early stopping during training based on the dev loss.", "The experiments with linguistic features are performed with the best pre-trained transformer in each of the two classification tasks.", "Binary Bragging Classification Table 4 (left) shows the predictive performance of all models on predicting bragging (i.e. binary classification).", "Overall, BERTweet models with linguistic information achieve better overall performance.", "Transformer models perform substantially above the majority class baseline (+23.29 F1) and above Logistic Regression (+18.76).", "BERTweet (71.44 F1) performs better than BERT (64.58 F1) and RoBERTa (67.34 F1), which illustrates the advantage of pretraining on English tweets for this task.", "Performance is further improved (+0.98 F1) by using LIWC features alongside BERTweet, which indicates that injecting extra linguistic information benefits bragging identification.", "We speculate that this is because a bragging statement usually contains particular terms (e.g. personal pronouns, positive terms) or involves at least one certain aspect or theme (e.g. a reward or property), which can be captured by linguistic features (e.g. the features I and ACHIEVE in LIWC).",
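The NRC and LIWC representations described above are proportion vectors over lexicon categories; a minimal sketch, with the lexicon loader assumed rather than shown:

```python
# Sketch of the lexicon-based tweet representations: one proportion per
# category (10-dim for NRC, 93-dim for LIWC), in a fixed category order.
from typing import Dict, List, Set

def lexicon_features(tokens: List[str],
                     lexicon: Dict[str, Set[str]]) -> List[float]:
    n_tokens = max(len(tokens), 1)
    return [sum(tok in words for tok in tokens) / n_tokens
            for _, words in sorted(lexicon.items())]
```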
"Combining lexicons leads to worse results than using a single one, so we refrain from reporting these results for clarity.", "Multi-class Bragging Classification Table 4 (right) shows the predictive performance of all models on multiclass bragging type prediction, including not bragging.", "We again find that pre-trained transformers substantially outperform the majority class baseline (+21.1 F1) and logistic regression (+16.27 F1).", "In line with the binary results, we find that BERTweet (34.86 F1) performs best out of all transformers.", "BERTweet-Clusters outperforms all models (35.95 F1), which indicates that topical information helps to identify different types of bragging.", "Each bragging type might be particularly specialized to certain topics (e.g. weight loss in the 'Achievement' category).", "Linguistic Feature Analysis We analyze the linguistic features, i.e. unigrams, LIWC categories and part-of-speech (POS) tags, associated with bragging and its types in all tweets of our data set.", "For this purpose, we first tag all tweets using the Twitter POS Tagger (Derczynski et al., 2013).", "Each tweet is represented as a bag-of-words distribution over POS unigrams and bigrams to reveal distinctive syntactic patterns of bragging and their types.", "For each unigram, LIWC and POS feature, we compute correlations between its distribution across posts and the label of the post.", "Then, we use the method introduced by Schwartz et al. (2013) to rank the features using univariate Pearson correlation.", "Table 5 (left) presents the top 15 features from unigrams (lowercase) and LIWC (uppercase) and the top 10 features from POS unigrams and bigrams correlated with bragging and non-bragging tweets.", "We notice that the top words in the bragging category can be classified into", "(a) personal pronouns (e.g. my, I) that usually indicate the author of the bragging statement;", "(b) words related to time (e.g. FOCUSPAST, TIME, during); and", "(c) words related to a specific bragging target (e.g. RELATIV, ACHIEVE, REWARD, managed).",
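The univariate correlation ranking after Schwartz et al. (2013) can be sketched as follows; the significance threshold here is illustrative:

```python
# Sketch of the feature ranking: correlate each feature's relative
# frequency across posts with the binary label, keep significant
# features, and rank by absolute correlation.
import numpy as np
from scipy.stats import pearsonr

def rank_features(X, y, names, top_k=15):
    # X: posts x features matrix of relative frequencies; y: 0/1 labels.
    scored = []
    for j, name in enumerate(names):
        r, p = pearsonr(X[:, j], y)
        if p < 0.05:                 # illustrative significance threshold
            scored.append((name, r))
    return sorted(scored, key=lambda t: abs(t[1]), reverse=True)[:top_k]
```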
"These findings are in line with the indicators of positive self-disclosure by Dayter (2018) and Bazarova et al. (2013).", "Furthermore, a personal pronoun followed by a verb in the past tense (PRP_VBD) is common in bragging (e.g. 'I forgot what it's like to be good at school. Today I finished a thing we were doing so fast that everyone around me started asking ME for help instead of the prof :')').", "Table 5 (right) presents the top 15 features from unigrams (lowercase) and LIWC (uppercase) correlated with bragging tweets grouped into the six types.", "We observe that Achievement statements usually involve verbs that are in the past tense or indicate a result (e.g. FOCUSPAST, finished, beat).", "A POS pattern common in Achievement statements is a cardinal number followed by nouns in plural (CD_NNS), similar to its unigram and LIWC features (NUMBER, 3, 5) (e.g. 'I made a total of 5 dollars from online surveys wooo').", "It is worth noting that one of the prevalent LIWC features for Action is FOCUSFUTURE.", "This is because the user may brag about a planned action (e.g. '@USER You know what? I'm going to make some PizzaRolls Brag').", "Most of the top words in Feeling express emotion or sensitivity (e.g. happy, blessed), which is consistent with the top POS feature, RB_JJ (e.g. 'absolutely chuffed', 'so happy').", "In the Trait category, words are mostly pronouns (e.g. I, PRP, PRP_VBP) and verbs (e.g. VBP, VBP_JJ).", "Words that appear frequently in the Possession category are actions related to purchase (e.g. own, buy) and nouns related to a tangible object (e.g. car, bedroom).", "In addition, users usually show off the value of their possessions using statements that involve currency signs ($) or a currency sign followed by a number ($_CD) (e.g. 'I just signed a new three-year contract and I'll be getting 235 anytime minutes per month. Plus, the company is going to throw in a phone for just $49 per month. I'll bet you can't beat that deal!').", "Finally, top words in the Affiliation category involve positive feelings towards belonging to a group (e.g. proud, amazing) and nouns related to it (e.g. FAMILY, team).", "Bragging and Post Popularity We also analyze the association between bragging posts and the number of favorites/retweets they receive from other users.", "Table 6 reports the mean and median number of Twitter favorites across bragging classes on a sample set of the data (mean / median): Achievement 3.06 / 3.00; Action 0.91 / 0; Feeling 0.50 / 0; Trait 2.38 / 2.00; Possession 2.00 / 0.50; Affiliation 5.50 / 2.00.", "Similar to the previous linguistic feature analysis, we use univariate Pearson correlation to compute the correlations between the log-scaled favorites/retweets number of each tweet and its label (i.e.
bragging or non-bragging), controlling for the numbers of followers and friends of the user who posts the tweet.", "Our results show that the number of favorites is positively correlated with bragging (see Appendix Figure 5), while there is no correlation between bragging and the number of retweets.", "We further explore the popularity of different bragging types.", "We randomly analyze a set of 443 tweets containing 56 bragging statements, where the follower and friend counts of the users are within a similar range: from 100 to 500 followers and from 500 to 1,000 friends (r = 0.19, p < .01).", "We compute the mean and median Twitter favorites across the six bragging classes (see Table 6).", "We observe that bragging statements about Affiliation, such as family members or sports teams, are more likely to receive a considerable number of favorites, with a mean of 5.5.", "For example, 14 users favorited the tweet 'This maybe is a little, but I'm SO proud of my research group.", "We represent so many different personality types, cultures, ways of thinking, etc, and every single member of my lab (all 21 of them)'.", "We speculate this is because praising the group that one belongs to, instead of oneself, as a bragging strategy enables users to be perceived as more likeable.", "Furthermore, bragging about Achievement is generally marked as a favorite by other users, with a median of 3, where bigger achievements in the content, such as job offers, may receive more favorites (e.g. the tweet 'Scored 80% on my thesis. Rather proud of that given the circumstances: new baby; pandemic; late topic change due to lockdown; minimal uni support because of furloughs; and an international move.' was marked as a favorite 15 times).", "Class Confusion Analysis Figure 1 presents the confusion matrix of human agreement on the seven classes, normalized over the actual values (rows).", "We observe that Non-bragging (97%), Achievement (81%) and Action (78%) have high agreement, consistent with the class frequency.", "Affiliation (77%), Possession (76%) and Trait (72%) have comparable percentages, as these are easily associated with a bragging target or group.", "The Feeling category has the lowest percentage, mostly caused by misclassification into the Action category.", "This is due to the fact that neither type is associated with a concrete outcome by definition, with the Feeling class linked to a feeling about an action.", "This makes the boundary between bragging about the action and bragging about the feeling associated with the action more challenging to interpret.", "The next most frequent confusion is between Possession and Achievement, which usually arises when a tangible possession is involved and the annotators disagree on whether the author was bragging about the actual possession or about the action that led to the author obtaining that possession (e.g. '@USER I just got some stealth 300 easily the best headset I've ever had going from astro to turtle beach was a night and day difference').", "Figure 2 presents the confusion matrix of bragging type predictions from the best performing model, BERTweet-Clusters, on the multi-class classification task.", "First, we observe that the model is more likely to misclassify other classes as the dominant class, Non-bragging.", "Secondly, the most unambiguous classes are Non-bragging (87%) and Achievement (52%), which is in line with human agreement.", "Also, the model is good at identifying Trait (50%) and Possession (46%) due to their particular bragging targets (e.g. personalities, skills or tangible objects).",
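The favorites analysis above controls for follower and friend counts; one standard way to do this is a partial correlation via residualization, sketched below (variable names are hypothetical):

```python
# Sketch of the popularity analysis: correlation between log-scaled
# favorite counts and the bragging label while controlling for follower
# and friend counts, via the residualization method.
import numpy as np
from scipy.stats import pearsonr

def partial_correlation(favorites, labels, followers, friends):
    y = np.log1p(favorites)
    x = np.asarray(labels, dtype=float)           # 1 = bragging, 0 = not
    controls = np.column_stack([np.ones_like(y),
                                np.log1p(followers), np.log1p(friends)])

    def residualize(v):
        coef, *_ = np.linalg.lstsq(controls, v, rcond=None)
        return v - controls @ coef

    return pearsonr(residualize(y), residualize(x))
```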
"Furthermore, we notice that the percentages for Action (31%) and Feeling (37%) are low.", "We speculate this is because they share more similarities with other classes (e.g. involving actions).", "This might also explain the high percentage of misclassified data points between Action and Achievement, and between Feeling and Action.", "Lastly, the model often confuses Affiliation with Feeling, likely because the terms that express positive feelings (e.g. 'proud') also appear frequently in Affiliation (see Table 5).", "Error Analysis Finally, we perform an error analysis to examine the behavior and limitations of our best performing models (i.e. BERTweet-LIWC for binary classification and BERTweet-Clusters for multi-class classification) and identify pathways to improve the task modeling.", "We first start with the binary bragging classification.", "We observe that non-bragging tweets containing positive sentiment are easily misclassified as bragging: even if such tweets involve something valued positively by their authors, the purpose is not to brag.", "Bragging often requires contextual understanding that goes beyond word use in order to determine the label.", "For example, common terms such as first, finally and just often appear in both non-bragging (T3) and bragging (T4) tweets: T3: just cleaned my cats' toilets T4: It happened again!", "T5: 9 hr drives feel like nothing now lol", "Some bragging statements use additional mitigation strategies, e.g. re-framing the bragging statement as irony or as a complaint, or invoking praise from a third party:", "Finally, we highlight some representative examples of model confusion between bragging types.", "One example is when users' actions do or do not lead to a concrete result.", "In this example the model predicted Action, but the actual label is Achievement: T7: not to appropriate the gang escapes culture but me n my parents just did an escape room n actually got out?", "Another example is an Action misclassified as Possession.", "This usually happens when a common phrase indicative of a certain type of bragging ('a new dish') is invoked as part of an action: T8: I had a new dish \"egusi\" it's so damn good!", "Other errors occur when multiple types of bragging are present (e.g. feeling and action) but the label expresses the more salient type, such as the feeling highlighted in this example: T9: Literally had the best time with the girls last night, don't think I've drank that much in my life?",
"We presented the first computational approach to analyzing and modeling bragging as a speech act, along with its types, in social media.", "We introduced a publicly available annotated data set in English collected from Twitter.", "We experimented with transformer models combined with linguistic information on binary bragging and multiclass bragging type prediction.", "Finally, we presented an extensive analysis of features related to bragging statements and an error analysis of the models' predictive behavior.", "In future work, we plan to study the extent to which bragging is used across various locations (Sánchez Villegas et al., 2020; Sánchez Villegas and Aletras, 2021) and languages, and how it is employed by users across contexts.", "We would like to thank Ari Silburt, Danae Sánchez Villegas, Yida Mu, and all the anonymous reviewers for their valuable feedback.", "Our work has received approval from the Ethics Committee of the Department of Computer Science at the University of Sheffield (No 037572) and complies with Twitter's data policy for research.", "References Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555-596." ]
[ "abstain", "abstain", "objective", "objective", "method", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "other", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "other", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "method", "other", "abstain", "abstain" ]
[ "An increasing number of natural language processing papers address the effect of bias on predictions, introducing mitigation techniques at different parts of the standard NLP pipeline (data and models).", "However, these works have been conducted individually, without a unifying framework to organize efforts within the field.", "This situation leads to repetitive approaches, and focuses overly on bias symptoms/effects , rather than on their origins , which could limit the development of effective countermeasures.", "In this paper, we propose a unifying predictive bias framework for NLP .", "We summarize the NLP literature and suggest general mathematical definitions of predictive bias.", "We differentiate two consequences of bias: outcome disparities and error disparities , as well as four potential origins of biases: label bias , selection bias , model overamplification , and semantic bias .", "Our framework serves as an overview of predictive bias in NLP, integrating existing work into a single structure, and providing a conceptual baseline for improved frameworks.", "Predictive models in NLP are sensitive to a variety of (often unintended) biases throughout the development process.", "As a result, fitted models do not generalize well, incurring performance and reliability losses on unseen data.", "They also have socially undesirable effects by systematically under-serving or mispredicting certain user groups.", "The general phenomenon of biased predictive models in NLP is not recent.", "The community has long worked on the domain adaptation problem (Jiang and Zhai, 2007; Daume III, 2007): models fit on newswire data do not perform well on social media and other text types.", "This problem arises from the tendency of statistical models to pick up on non-generalizable signals during the training process.", "In the case of domains, these non-generalizations are words, phrases, or senses that occur in one text type, but not another.", "However, this kind of variation is not just restricted to text domains: it is a fundamental property of human-generated language: we talk differently than our parents or people from a different part of our country, etc. 
"In other words, language reflects the diverse demographics, backgrounds, and personalities of the people who use it.", "While these differences are often subtle, they are distinct and cumulative (Trudgill, 2000; Kern et al., 2016; Pennebaker, 2011).", "Similar to text domains, this variation can lead models to pick up on patterns that do not generalize to other author-demographics, or to rely on undesirable word-demographic relationships.", "Bias may be an inherent property of any NLP system (and broadly any statistical model), but this is not per se negative.", "In essence, biases are priors that inform our decisions (a dialogue system designed for elders might work differently than one for teenagers).", "Still, undetected and unaddressed, biases can lead to negative consequences: there are aggregate effects for demographic groups, which combine to produce predictive bias.", "I.e., the label distribution of a predictive model reflects a human attribute in a way that diverges from a theoretically defined ideal distribution.", "For example, a Part-Of-Speech (POS) tagger reflecting how an older generation uses words (Hovy and Søgaard, 2015) diverges from the population as a whole.", "A variety of papers have begun to address countermeasures for predictive biases (Li et al., 2018; Elazar and Goldberg, 2018; Coavoux et al., 2018).", "(An even more extensive body of work on fairness exists as part of the FAT* conferences, which goes beyond the scope of this bias-focused paper.)", "(Figure 1, the Predictive Bias Framework for NLP, depicts where bias may originate within a standard supervised NLP pipeline, e.g. label bias from biased annotations, interaction, or latent bias from past classifications; evidence of bias is seen in ŷ via outcome disparity and error disparity, where the distribution of error is inconsistent over a human attribute.)", "Each identifies a specific bias and countermeasure on its own terms, but it is often not explicitly clear which bias is addressed, where it originates, or how it generalizes.", "There are multiple sources from which bias can arise within the predictive pipeline, and methods proposed for one specific bias often do not apply to another.", "As a consequence, much work has focused on bias effects and symptoms rather than their origins.", "While it is essential to address the effects of bias, it can leave the fundamental origin unchanged (Gonen and Goldberg, 2019), requiring researchers to rediscover the issue over and over.", "The bias discussed in one paper may, therefore, be quite different than that in another.", "A shared definition and framework of predictive bias can unify these efforts, provide a common terminology, help to identify underlying causes, and allow coordination of countermeasures (Sun et al., 2019).", "However, such a general framework had yet to be proposed within the NLP community.", "To address these problems, we suggest a joint conceptual framework, depicted in Figure 1, outlining and relating the different origins of bias.", "We base our framework on an extensive survey of the relevant NLP literature, informed by selected works in social science and adjacent fields.", "(Quantitative social science offers a background for bias (Berk, 1983); however, NLP differs fundamentally in its analytic goals, namely out-of-sample prediction for NLP versus parameter inference for hypothesis testing in social science, which brings about NLP-specific situations: biases in word embeddings, annotator labels, or predicting over-amplified demographics.)",
"We identify four distinct sources of bias: selection bias, label bias, model overamplification, and semantic bias.", "We can express all of these as differences between", "(a) a true or intended distribution (e.g., over users, labels, or outcomes), and", "(b) the distribution used or produced by the model.", "These cases arise at specific points within a typical predictive pipeline: embeddings, source data, labels (human annotators), models, and target data.", "We provide quantitative definitions of predictive bias in this framework, intended to make it easier to:", "(a) identify biases (because they can be classified),", "(b) develop countermeasures (because the underlying problem is known), and", "(c) compare biases and countermeasures across papers.", "We hope this paper will help researchers spot, compare, and address bias in all its various forms.", "Contributions Our primary contributions include: (1) a conceptual framework for identifying and quantifying predictive bias and its origins within a standard NLP pipeline, (2) a survey of biases identified in NLP models, and (3) a survey of methods for countering bias in NLP organized within our conceptual framework.", "Our definition of predictive bias in NLP builds on its definition within the literature on standardized testing (i.e., SAT, GRE, etc.).", "Specifically, Swinton (1981) states that a [predictive model] is biased if, when used to predict a specific criterion for a particular population, it is found to give systematically different predictions for subgroups of this population who are in fact identical on that specific criterion.", "We generalize Swinton's definition in two ways: first, to align notation with standard supervised modeling, we say there are both Y (a random variable representing the true values of an outcome) and Ŷ (a random variable representing the predictions).", "Next, we allow the concept to apply to differences associated with continuously-valued human attributes rather than simply discrete subgroups of people.", "Below, we define two types of measurable systematic differences (i.e. disparities): (1) a systematic difference between Y and Ŷ (an outcome disparity) and (2) a systematic difference in error, ε = |Y − Ŷ| (an error disparity), both as a function of a given human attribute, A.", "Outcome disparity. Formally, we say an outcome disparity exists for outcome Y, a domain D (with values source or target), and with respect to attribute A, when the distribution of the predicted outcome, Q(Ŷ_D | A_D), is dissimilar to a given theoretical ideal distribution, P(Y_D | A_D): Q(Ŷ_D | A_D) ≁ P(Y_D | A_D).", "The ideal distribution is specific to the target application. Our framework allows researchers to use their own criteria to determine this distribution. However, the task of doing so may be nontrivial. First, the current distribution within a population may not be accessible. Even when it is, it may not be what most consider the ideal distribution (e.g., the distribution of gender in computer science and the associated disparity of NLP models attributing male pronouns to computer scientists more frequently (Hovy, 2015)). Second, it may be difficult to come to an agreed-upon ideal distribution from a moral or ethical perspective.",
"In such a case, it may be helpful to use an ideal direction, rather than specifying a specific distribution (e.g., moving toward a uniform distribution of pronouns associated with computer science).", "(In quoting Swinton, we have substituted 'test' with 'predictive model'.)", "(Attributes include both continuously valued user-level variables, like age or personality on a 7-point scale (also referred to as dimensional or factors), and discrete categories like membership in an ethnic group; psychological research suggests that people are better represented by continuously valued scores, where possible, than by discrete categories (Baumeister et al., 2007; Widiger and Samuel, 2005; McCrae and Costa Jr., 1989), and, in NLP, Lynn et al. (2017) show benefits from treating user-level attributes as continuous when integrating them into NLP models.)", "Our framework should enable its users to apply evolving standards and norms across NLP's many application contexts.", "A prototypical example of outcome disparity is gender disparity in image captions.", "Zhao et al. (2017) and Hendricks et al. (2018) demonstrate a systematic difference with respect to gender in the outcome of the model, Ŷ, even when taking the source distribution as the ideal target distribution: Q(Ŷ_target | gender) ≁ Q(Y_target | gender) = Q(Y_source | gender).", "As a result, captions over-predict females in images with ovens and males in images with snowboards.", "Error disparity.", "We say there is an error disparity when model predictions have larger error for individuals with a given user attribute (or range of attributes in the case of continuously-valued attributes).", "Formally, the error of a predicted distribution is ε_D = |Y_D − Ŷ_D|.", "If there is a difference in ε_D over at least two different values of an attribute A (assuming they have been adequately sampled to establish a distribution of ε_D), then there is an error disparity: Q(ε_D | A_i) ≁ Q(ε_D | A_j).", "In other words, the error for one group might systematically differ from the error for another group, e.g., the error for green people differs from the error for blue people.", "Under unbiased conditions, the two error distributions would be equal.", "This formulation allows us to capture both the discrete case (arguably more common in NLP, for example, in POS tagging) and the continuous case (for example, in age or income prediction).", "We propose that if either of these two disparities exists in our target application, then there is a predictive bias.", "Note that predictive bias is then a property of a model given a specific application, rather than merely an intrinsic property of the model by itself.", "This definition mirrors predictive bias in standardized testing (Swinton, 1981): a [predictive model] 'cannot be called biased without reference to a specific prediction situation; thus, the same instrument may be biased in one application, but unbiased in another.'", "A well-known example of error disparity is a systematic difference in error as a function of author demographics, first documented by Hovy and Søgaard (2015).", "In theory, POS tagging errors increase the further an author's demographic attributes differ from those of the average WSJ author of the 1980s and 1990s (on whom many POS taggers were trained), which is a selection bias, discussed next.",
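An error-disparity check of this kind can be run directly on model outputs; a sketch comparing the error distributions of two attribute groups with a two-sample test (the specific test is an illustrative choice):

```python
# Sketch of an empirical error-disparity check for two attribute values
# A_i and A_j: compare the distributions of epsilon = |y - y_hat|.
import numpy as np
from scipy import stats

def error_disparity(y, y_hat, attr, a_i, a_j):
    eps = np.abs(np.asarray(y, float) - np.asarray(y_hat, float))
    attr = np.asarray(attr)
    e_i, e_j = eps[attr == a_i], eps[attr == a_j]
    # Under no disparity, Q(eps | A_i) and Q(eps | A_j) should match; a
    # two-sample test flags a systematic difference between the groups.
    _, p_value = stats.mannwhitneyu(e_i, e_j, alternative="two-sided")
    return e_i.mean(), e_j.mean(), p_value
```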
(2019) shows error disparity from a different origin, namely unfairness in hate speech detection.", "They find that annotators for hate speech on social media make more mistakes on posts of black individuals.", "Contrary to the case above, the disparity is not necessarily due to a difference between the author and annotator populations (a selection bias).", "Instead, the label disparity stems from annotators failing to account for the authors' racial background and sociolinguistic norms.", "Source and Target Populations.", "An important assumption of our framework is that disparities are dependent on the population to which the model will be applied.", "This assumption is reflected in distinguishing a separate target population from the source population on which the model was trained.", "In cross-validation over random folds, models are trained and tested over the same population.", "However, in practice, models are often applied to novel data that may originate from a different population of people.", "In other words, the disparity may exist as a model property for one application, but not for another.", "Quantifying disparity.", "Given the definitions of the two types of disparities, we can quantify bias with well-established measures of distributional divergence or deviance.", "Specifically, we suggest the log-likelihood ratio as a central metric: D(Y, Ŷ | A) = 2(log(p(Ŷ | A)) - log(p(Y | A))), where p(Y | A) is the specified ideal distribution (either derived empirically or theoretically) and p(Ŷ | A) is the distribution within the data.", "For error disparity, the ideal distribution is always the uniform distribution, and Ŷ is replaced with the error, ε.", "KL divergence, D_KL[P(Y | A) ‖ P(Ŷ | A)], can be used as a secondary, more scalable alternative (a minimal code sketch of both metrics appears after this section).", "Our measure above attempts to synthesize metrics others have used in works focused on specific biases.", "For example, the definition of outcome disparity is analogous to that used for semantic bias.", "Kurita et al. 
(2019) quantify bias in embeddings as the difference in log probability score when replacing words suspected to carry semantic differences ('he', 'she') with a mask: log(P([MASK] = ⟨PRON⟩ | [MASK] is ⟨NOUN⟩)) - log(P([MASK] = ⟨PRON⟩ | [MASK] is [MASK])), where ⟨NOUN⟩ is replaced with a specific noun to check for semantic bias (e.g., an occupation), and ⟨PRON⟩ is an associated demographic word (e.g., he or she); a sketch of this probe appears after this section.", "But what leads to an outcome disparity or error disparity?", "We identify four points within the standard supervised NLP pipeline where bias may originate: (1) the training labels (label bias), (2) the samples used as observations for training or testing (selection bias), (3) the representation of data (semantic bias), or (4) the fitting method itself (overamplification).", "Label Bias Label bias emerges when the distribution of the dependent variable in the data source diverges substantially from the ideal distribution: Q(Y_s | A_s) ≁ P(Y_s | A_s). Here, the labels themselves are erroneous with respect to the demographic attribute of interest (as compared to the source distribution).", "Sometimes, this bias is due to a non-representative group of annotators (Joseph et al., 2017).", "In other cases, it may be due to a lack of domain expertise (Plank et al., 2014), or due to preconceived notions and stereotypes held by the annotators (Sap et al., 2019).", "Selection bias.", "Selection bias emerges due to non-representative observations.", "I.e., it arises when the users generating the training (source) observations differ from the user distribution of the target, where the model will be applied.", "Selection bias (sometimes also referred to as sample bias) has long been a concern in the social sciences.", "By now, testing for such a bias is a fundamental consideration in study design (Berk, 1983; Culotta, 2014).", "Non-representative data is the origin of selection bias.", "Within NLP, some of the first works to note demographic biases were due to a selection bias (Hovy and Søgaard, 2015; Jørgensen et al., 2015).", "A prominent example is the so-called Wall Street Journal effect, where syntactic parsers and part-of-speech taggers are most accurate over language written by middle-aged white men.", "The effect occurs because this group happened to be the predominant demographic among the authors of the WSJ articles, which are traditionally used to train syntactic models (Garimella et al., 2019).", "The same effect was reported for language identification difficulties for African-American Vernacular English (Blodgett and O'Connor, 2017; Jurgens et al., 2017).", "The predicted output is dissimilar from the ideal distribution, leading, for example, to lower accuracy for a given demographic, since the source did not reflect the ideal distribution.", "We say that the distribution of a human attribute A within the source data s is dissimilar to the distribution of A within the target data t: Q(A_s) ≁ P(A_t). Selection bias has several peculiarities.", "First, it is dependent on the ideal distribution of the target population, so a model may have selection bias for one application (and its associated target population), but not for another.", "Also, consider that either the source features (X_s) or the source labels (Y_s) may be non-representative.", "In many situations, the distributions for the features and labels are the same.", "However, there are some cases where 
they diverge.", "For example, when using features from age-biased tweets, but labels from non-biased census surveys.", "In such cases, we need to take multiple analysis levels into account: corrections can be applied to user features as they are aggregated to communities (Almodaresi et al., 2017).", "The consequences could be both outcome and error disparity.", "One of the challenges in addressing selection bias is that we cannot know a priori what sort of (demographic) attribute will be important to control.", "Age and gender are well-studied, but others might be less obvious.", "We might someday realize that a formerly innocuous attribute (say, handedness) turns out to be relevant for selection biases.", "This problem is known as The Known and Unknown Unknowns.", "As we know, there are known knowns: there are things we know we know.", "We also know there are known unknowns: that is to say, we know there are some things we do not know.", "But there are also unknown unknowns: the ones we don't know we don't know.", "(Donald Rumsfeld)", "We will see later how better documentation can help future researchers address this problem.", "Overamplification.", "Another source of bias can occur even when there is no label or selection bias.", "In overamplification, a model relies on a small difference between human attributes with respect to the objective (even an acceptable difference matching the ideal distribution), but amplifies this difference to be much more pronounced in the predicted outcomes.", "Overamplification originates during learning itself.", "The model learns to pick up on imperfect evidence for the outcome, which brings out the bias.", "Formally, in overamplification the predicted distribution, Q(Ŷ_s | A_s), is dissimilar to the source training distribution, Q(Y_s | A_s), with respect to a human attribute, A.", "The predicted distribution is therefore also dissimilar to the target ideal distribution: Q(Ŷ_s | A_s) ≁ Q(Y_s | A_s) ≈ P(Y_t | A_t). For example, Yatskar et al. (2016) found that in the imSitu image captioning data set, 58% of captions involving a person in a kitchen mention women.", "However, standard models trained on such data end up predicting people depicted in kitchens as women 63% of the time (Zhao et al., 2017).", "In other words, an error in generating a gender reference within the text (e.g., A [woman ‖ man] standing next to a counter-top) makes an incorrect female reference much more common.", "The occurrence of overamplification in the absence of other biases is an important motivation for countermeasures.", "It does not require bias on the part of the annotator, data collector, or even the programmer/data analyst (though it can escalate existing biases and the models' statistical discrimination along a demographic dimension).", "In particular, it extends countermeasures beyond the point some authors have made, that they are merely cosmetic and do not address the underlying cause: biased language in society (Gonen and Goldberg, 2019).", "Semantic bias.", "Embeddings (i.e., vectors representing the meaning of words or phrases) have become a mainstay of modern NLP, providing more flexible representations that feed both traditional and deep learning models.", "However, these representations often contain unintended or undesirable associations and societal stereotypes (e.g., connecting medical doctors more frequently to male pronouns than female pronouns, see Bolukbasi et al. 
(2016); Caliskan et al. (2017)).", "We adopt the term used for this phenomenon by others, semantic bias.", "Formally, we attribute semantic bias to the parameters of the embedding model (θ_emb).", "Semantic bias is a unique case, since it indirectly affects both outcome disparity and error disparity by creating other biases, such as overamplification (Yatskar et al., 2016; Zhao et al., 2017) or diverging word associations within embeddings or language models (Bolukbasi et al., 2016; Rudinger et al., 2018).", "However, we distinguish it from the other biases, since the population does not have to be people, but rather words in contexts that yield non-ideal associations.", "For example, the issue is not (only) that a particular gender authors more of the training data for the embeddings.", "Instead, it is that gendered pronouns are mentioned alongside occupations according to a non-ideal distribution (e.g., texts talk more about male doctors and female nurses than vice versa).", "Furthermore, pre-trained embeddings are often used without access to the original data (or the resources to process it).", "We thus suggest that embedding models themselves are a distinct source of bias within NLP predictive pipelines.", "They have consequently received increased attention, with dedicated sessions at NAACL and ACL 2019.", "As an example, Kurita et al. (2019) quantify human-like bias in BERT.", "Using the Gender Pronoun Resolution (GPR) task, they find that, even after balancing the data set, the model predicts no female pronouns with high probability.", "Semantic bias is also of broad interest to the social sciences as a diagnostic tool (see Section A).", "However, its inclusion in our framework is not for reasons of social-scientific diagnostics, but rather to guide mindful researchers where to look for problems.", "Multiple Biases.", "Biases occur not only in isolation; they also compound to increase their effects.", "Label and selection bias can and often do interact, so it can be challenging to distinguish them.", "Table 1 shows the different conditions to understand the boundaries of one or another.", "Consider the case where a researcher chooses to balance a sentiment data set for a user attribute, e.g., age.", "This decision can directly impact the label distribution of the target variable.", "E.g., because the positive label is over-represented in a minority age group.", "Models learn to exploit this confounding correlation between age and label prevalence and magnify it even more.", "The resulting model may be useless, as it only captures the distribution in the synthetic data sample.", "We see this situation in early work on using social media data to predict mental health conditions.", "Models to distinguish PTSD from depression turned out to mainly capture the differences in user age and gender, rather than language reflecting the actual conditions (Preotiuc-Pietro et al., 2015).", "While this is the first attempt at a comprehensive conceptual framework for bias in NLP, alternative frameworks exist, both in other fields and based on more qualitative definitions.", "Friedler et al. 
(2016) define bias as unfairness in algorithms.", "They specify the idea of a construct space, which captures the latent features in the data that help predict the right outcomes.", "They suggest that finding those latent variables would also enable us to produce the right outcomes.", "Hovy and Spruit (2016) take a broader scope on bias based on ethics in new technologies.", "They list three qualitative sources (data, modeling, and research design), and suggest three corresponding types of biases: demographic bias, overgeneralization, and topic exposure.", "Suresh and Guttag (2019) propose a qualitative framework for bias in machine learning, defining bias as a potentially harmful property of the data.", "They categorize bias into historical bias, representation bias, measurement bias, and evaluation bias.", "Glymour and Herington (2019) classify algorithmic bias, in general, into four different categories, depending on the causal conditional dependencies to which it is sensitive: procedural bias, outcome bias, behavior-relative error bias, and score-relative error bias.", "Corbett-Davies and Goel (2018) point out statistical limitations of three prominent definitions of fairness (anti-classification, classification parity, and calibration), enabling researchers to develop fairer machine learning algorithms.", "Our framework focuses on NLP, but it follows Glymour and Herington (2019) in providing probability-based definitions of bias.", "It incorporates and formalizes the above to varying degrees.", "In the social sciences, bias definitions often relate to the ability to test causal hypotheses.", "Hernán et al. (2004) propose a common structure for various types of selection bias.", "They define bias as the difference between the association of a variable with the outcome and the causal effect of that variable on the outcome.", "E.g., when the causal risk ratio (CRR) differs from the associational risk ratio (ARR).", "Similarly, Baker et al. (2013) define bias as uncontrolled covariates or disturbing variables that are related to the measures of interest.", "Others provide definitions restricted to particular applications.", "For example, Caliskan et al. (2017) propose the Word-Embedding Association Test (WEAT).", "It quantifies semantic bias based on the distance between words with demographic associations in the embedding space.", "The previously mentioned work by Kurita et al. (2019) and Sweeney and Najafian (2019) extends such measures.", "Similarly, Romanov et al. 
(2019) define bias based on the correlation between the embeddings of human attributes and the difference in true positive rates between human traits.", "This approach is reflective of an error disparity.", "We group proposed countermeasures based on the origin(s) on which they act.", "Label Bias.", "There are several ways to address label bias, typically by controlling for biases of the annotators (Pavlick et al., 2014).", "Disagreement between annotators has long been an active research area in NLP, with various approaches to measure and quantify disagreement through inter-annotator agreement (IAA) scores to remove outliers (Artstein and Poesio, 2008).", "Lately, there has been more of an emphasis on embracing variation through the use of Bayesian annotation models (Hovy et al., 2013; Passonneau and Carpenter, 2014; Paun et al., 2018).", "These models arrive at a much less biased estimate for the final label than majority voting, by attaching confidence scores to each annotator and reweighting their contributions accordingly.", "Other approaches have explored harnessing the inherent disagreement among annotators to guide the training process (Plank et al., 2014).", "By weighting updates by the amount of disagreement on the labels, this method prevents bias towards any one label.", "The weighted updates act as a regularizer during training, which might also help prevent overamplification.", "If annotators behave in predictable ways to produce artifacts (i.e., always adding 'not' to form a contradiction), we can train a model on such biased features and use it in ensemble learning (Clark et al., 2019).", "Hays et al. (2015) attempt to make Web studies equivalent to representative focus group panels.", "They give an overview of probabilistic and non-probabilistic approaches to generate the Internet panels that contribute to the data generation.", "Along with six demographic attributes (age, gender, race/ethnicity, education, marital status, and income), they use poststratification to reduce the bias (some of these methods cross into addressing selection bias).", "Selection bias.", "The primary source of selection bias is the mismatch between the sample distribution and the ideal distribution.", "Consequently, any countermeasures need to re-align the two distributions to minimize this mismatch.", "The easiest way to address the mismatch is to re-stratify the data to more closely match the ideal distribution.", "However, this often involves down-sampling an overly represented class, which reduces the number of available instances.", "Mohammady and Culotta (2014) use a stratified sampling technique to reduce the selection bias in the data.", "Almeida et al. (2015) use demographic user attributes, including age, gender, and social status, to predict the election results in six different cities of Brazil.", "They use stratified sampling on all the resulting groups to reduce selection bias.", "Rather than re-sampling, others use reweighting or poststratifying to reduce selection bias (a minimal reweighting sketch appears after this section).", "Culotta (2014) estimates county-level health statistics based on social media data.", "He shows we can stratify based on external socio-demographic data about a community's composition (e.g., gender and race).", "Park et al. (2006) estimate state-wise public opinions using the National Surveys corpus.", "To reduce bias, they use various socioeconomic and demographic attributes (state of residence, sex, ethnicity, age, and education level) in a multilevel logistic regression.", "Choy et al. (2011) and Choy et al. 
(2012) also use race and gender as features for reweighting in predicting the results of the Singapore and US presidential elections.", "Baker et al. (2013) study how selection bias manifests in inferences for a larger population, and how to avoid it.", "Apart from the basic demographic attributes, they also consider attitudinal and behavioral attributes for the task.", "They suggest using reweighting, ranking reweighting or propensity score adjustment, and sample-matching techniques to reduce selection bias.", "Others have suggested combinations of these approaches.", "Hernán et al. (2004) propose directed acyclic graphs for various heterogeneous types of selection bias, and suggest using stratified sampling, regression adjustment, or inverse probability weighting to avoid the bias in the data.", "Zagheni and Weber (2015) study the use of Internet data for demographic studies and propose two approaches to reduce the selection bias in their task.", "If ground truth is available, they adjust for selection bias based on the calibration of a stochastic microsimulation.", "If it is unavailable, they suggest using a difference-in-differences technique to uncover trends on the Web.", "Zmigrod et al. (2019) show that gender-based selection bias can be addressed by data augmentation, i.e., by adding slightly altered examples to the data.", "This addition addresses selection bias originating in the features (X_source), so that the model is fit on a more gender-representative sample.", "Their approach is similar to the reweighting of poll data based on demographics, which can be applied more directly to tweet-based population surveillance (see our last case study, A.2).", "Li et al. (2018) introduce a model-based countermeasure.", "They use an adversarial multitask-learning setup to explicitly model demographic attributes as auxiliary tasks.", "By reversing the gradient for those tasks during backpropagation, they effectively force the model to ignore confounding signals associated with the demographic attributes.", "Apart from improving overall performance across demographics, they show that it also protects user privacy.", "The findings from Elazar and Goldberg (2018), however, suggest that even with adversarial training, internal representations still retain traces of demographic information.", "Overamplification.", "In its simplest form, overamplification of inherent bias by the model can be corrected by downweighting the biased instances in the sample, to discourage the model from exaggerating the effects.", "A common approach involves using synthetic matched distributions.", "To address gender bias in neural network approaches to coreference resolution, Rudinger et al. (2018) and Zhao et al. (2018) suggest matching the label distributions in the data and training the model on the new data set.", "They swap male and female instances and merge them with the original data set for training.", "In the same vein, Webster et al. (2018) provide a gender-balanced training corpus for coreference resolution.", "Based on the first two corpora, Stanovsky et al. (2019) introduce a bias evaluation for machine translation, showing that most systems overamplify gender bias (see also Prates et al. (2018)).", "Hovy et al. 
(2020) show that this overamplification consistently makes translations sound older and more male than the original authors.", "Several authors have suggested it is essential for language to be understood within the context of the author and their social environment (Jurgens, 2013; Danescu-Niculescu-Mizil et al., 2013; Hovy, 2018; Yang et al., 2019).", "Considering the author demographics improves the accuracy of text classifiers (Volkova et al., 2013; Hovy, 2015; Lynn et al., 2017), and, in turn, could lead to decreased error disparity.", "Semantic bias.", "Countermeasures for semantic bias in embeddings typically attempt to adjust the parameters of the embedding model to reflect a target distribution more accurately.", "Because all of the above techniques can be applied for model fitting, here we highlight techniques that are more specific to addressing bias in embeddings.", "Bolukbasi et al. (2016) suggest that techniques to de-bias embeddings can be classified into two approaches: hard de-biasing (completely removes bias) and soft de-biasing (partially removes bias while avoiding side effects).", "Romanov et al. (2019) generalize this work to a multi-class setting, exploring methods to mitigate bias in an occupation classification task.", "They reduce the correlation between people's occupations and the word embeddings of their names, and manage to simultaneously reduce race and gender biases without reducing the classifier's performance.", "Manzini et al. (2019) identify the bias subspace using principal component analysis and remove the biased components using the hard Neutralize-and-Equalize de-biasing and soft de-biasing methods proposed by Bolukbasi et al. (2016).", "The above examples evaluate success through the semantic analogy task (Mikolov et al., 2013), though the informativeness of this method has since been questioned (Nissim et al., 2019).", "For a dedicated overview of semantic de-biasing techniques, see Lauscher et al. (2020).", "Social-Level Mitigation.", "Several initiatives propose standardized documentation to trace potential biases, and to ultimately mitigate them.", "With Data Statements, Bender and Friedman (2018) suggest disclosing data selection, annotation, and curation processes explicitly and transparently.", "Similarly, Gebru et al. (2018) suggest Datasheets to cover the lifecycle of data, including the motivation for dataset creation; dataset composition; data collection process; data preprocessing; dataset distribution; dataset maintenance; and legal and ethical considerations.", "Mitchell et al. (2019) extend this idea to include model specifications and performance details on different user groups.", "Hitti et al. 
(2019) propose a taxonomy for assessing the gender bias of a data set.", "While these steps do not directly mitigate bias, they can encourage researchers to identify and communicate sources of label or selection bias.", "Such documentation, combined with a conceptual framework to guide specific mitigation techniques, acts as an essential mitigation technique at the level of the research community.", "See Appendix A.2 for case studies outlining various types of bias in several NLP tasks.", "We present a comprehensive overview of the recent literature on predictive bias in NLP.", "Based on this survey, we develop a unifying conceptual framework to describe the sources of bias and their effects (rather than the effects alone).", "This framework allows us to group and compare works on countermeasures.", "Rather than giving the impression that bias is a growing problem, we would like to point out that bias is not necessarily something gone awry, but rather something nearly inevitable in statistical models.", "We do, however, stress that we need to acknowledge and address bias with proactive measures.", "Having a formal framework of the causes can help us achieve this.", "We would like to leave the reader with these main points: (1) every predictive model with errors is bound to have disparities over human attributes (even those not directly integrating human attributes); (2) disparities can result from a variety of origins within the standard predictive pipeline: the embedding model, the feature sample, the fitting process, and the outcome sample; (3) selection of protected attributes (or human attributes along which to avoid biases) is necessary for measuring bias, and often helpful for mitigating bias and increasing the generalization ability of the models.", "We see this paper as a step toward a unified understanding of bias in NLP.", "We hope it inspires further work in both identifying and countering bias, as well as conceptually and mathematically defining bias in NLP.", "Framework Application Steps (TL;DR)", "1. Specify the target population and an ideal distribution of the attribute (A) to be investigated for bias; consult datasheets and data statements[5] if available for the model source;", "2. If there is an outcome disparity or error disparity, check for potential origins:", "(a) if label bias: use post-stratification or retrain annotators;", "(b) if selection bias: use stratified sampling to match source to target populations, or use post-stratification or re-weighting techniques;", "(c) if overamplification: synthetically match distributions or add the outcome disparity to the cost function;", "(d) if semantic bias: retrain or retrofit embeddings considering the approaches above, but with attributed (e.g., gendered) words (rather than people) as the population.", "The authors would like to thank Vinod Prabhakaran, Niranjan Balasubramanian, Joao Sedoc, Lyle Ungar, Rediet Abebe, Salvatore Giorgi, Margaret Kern and the anonymous reviewers for their constructive comments.", "Dirk Hovy is a member of the Bocconi Institute for Data Science and Analytics (BIDSA) and the Data and Marketing Insights (DMI) unit.", "Footnote 5: (Gebru et al., 2018; Bender and Friedman, 2018). References: Jussara M. Almeida, Gisele L. Pappa, et al. 2015." ]
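The disparity metrics defined in the paper above lend themselves to a direct implementation. The following is a minimal sketch, not the authors' code: it instantiates the log-likelihood-ratio measure as a G-test-style deviance between the model's predicted distribution and a specified ideal distribution, with KL divergence as the secondary metric. All function names and the toy data are our own assumptions.

```python
import numpy as np

def distribution(labels, classes):
    """Empirical distribution of `labels` over a fixed class inventory (lightly smoothed)."""
    counts = np.array([np.sum(labels == c) for c in classes], dtype=float) + 1e-9
    return counts / counts.sum()

def llr_disparity(y_pred, ideal, classes):
    """G-test-style deviance 2 * sum Q log(Q/P) between the model's predicted
    distribution Q(Y_hat | A=a) and the specified ideal distribution P(Y | A=a)."""
    q = distribution(y_pred, classes)
    p = np.asarray(ideal, dtype=float)
    return 2.0 * float(np.sum(q * (np.log(q) - np.log(p))))

def kl_disparity(y_pred, ideal, classes):
    """Secondary, more scalable metric: D_KL[P(Y | A=a) || Q(Y_hat | A=a)]."""
    q = distribution(y_pred, classes)
    p = np.asarray(ideal, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# Toy check: predictions for one attribute group against a uniform ideal.
# For the error-disparity variant, pass per-group errors and a uniform ideal instead.
classes = np.array(["pos", "neg"])
y_pred_group = np.array(["pos"] * 80 + ["neg"] * 20)
print(llr_disparity(y_pred_group, [0.5, 0.5], classes))
print(kl_disparity(y_pred_group, [0.5, 0.5], classes))
```

Computing these per value of the attribute A, and comparing across groups, gives both the outcome-disparity and error-disparity quantities in the framework.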
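The masked-language-model probe attributed to Kurita et al. (2019) above can likewise be sketched. This is an illustrative reimplementation under our own assumptions (model choice, helper names), not the authors' released code: it scores the log-probability of a pronoun at the masked position with and without the occupation visible.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def log_p_pronoun(template: str, pronoun: str) -> float:
    """Log-probability of `pronoun` at the first [MASK] position of `template`."""
    enc = tok(template, return_tensors="pt")
    mask_pos = (enc.input_ids[0] == tok.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = mlm(**enc).logits[0, mask_pos]
    log_probs = torch.log_softmax(logits, dim=-1)
    return log_probs[tok.convert_tokens_to_ids(pronoun)].item()

def bias_score(noun: str, pronoun: str) -> float:
    # log P([MASK]=pronoun | "[MASK] is a <noun>") - log P([MASK]=pronoun | "[MASK] is a [MASK]")
    target = f"[MASK] is a {noun}."   # association with the occupation
    prior = "[MASK] is a [MASK]."     # prior, with the occupation masked out
    return log_p_pronoun(target, pronoun) - log_p_pronoun(prior, pronoun)

for noun in ["doctor", "nurse"]:
    print(noun, {p: round(bias_score(noun, p), 3) for p in ("he", "she")})
```

A large gap between the "he" and "she" scores for an occupation would indicate the kind of semantic bias the paper describes.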
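For the reweighting and post-stratification countermeasures to selection bias discussed above, a minimal sketch follows. The column names and target proportions are hypothetical; the idea is simply to weight each training instance by the ratio of its stratum's target proportion to its sample proportion.

```python
import pandas as pd

def poststratification_weights(df, strata_col, target_props):
    """Weight each sample by target proportion / sample proportion of its stratum."""
    sample_props = df[strata_col].value_counts(normalize=True)
    return df[strata_col].map(lambda s: target_props[s] / sample_props[s])

# Toy source data skewed toward young authors; census-style target distribution.
df = pd.DataFrame({"age_group": ["18-29"] * 70 + ["30-49"] * 20 + ["50+"] * 10,
                   "label": [1] * 50 + [0] * 50})
target = {"18-29": 0.25, "30-49": 0.40, "50+": 0.35}
df["w"] = poststratification_weights(df, "age_group", target)

# Weighted estimates (or sample_weight=df.w in a scikit-learn fit) now reflect the target.
print((df.label * df.w).sum() / df.w.sum())
```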
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "objective", "method", "objective", "other", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "abstain", "abstain", "result", "result", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "News editorials argue about political issues in order to challenge or reinforce the stance of readers with different ideologies.", "Previous research has investigated such persuasive effects for argumentative content .", "In contrast, this paper studies how important the style of news editorials is to achieve persuasion.", "To this end, we first compare contentand style-oriented classifiers on editorials from the liberal NYTimes with ideology-specific effect annotations.", "We find that conservative readers are resistant to NYTimes style, but on liberals, style even has more impact than content.", "Focusing on liberals, we then cluster the leads, bodies, and endings of editorials, in order to learn about writing style patterns of effective argumentation.", "The interaction between the author and the intended reader of an argumentative text is encoded in the linguistic choices of the author and their persuasive effect on the reader (Halmari and Virtanen, 2005).", "News editorials, in particular, aim to challenge or to reinforce the stance of readers towards controversial political issues, depending on the readers' ideology (El Baff et al., 2018).", "To affect readers, they often start with an enticing lead paragraph and end their argument with a punch (Rich, 2015).", "Existing research has studied the persuasive effect of argumentative content and structure (Zhang et al., 2016; Wachsmuth et al., 2016) or combinations of content and style (Wang et al., 2017; Persing and Ng, 2017).", "In addition, some works indicate that different types of content affect readers with different personalities (Lukin et al., 2017) and beliefs (Durmus and Cardie, 2018).", "However, it remains unexplored so far what stylistic choices in argumentation actually affect which readers.", "We expect such choices to be key to generating effective argumentation (Wachsmuth et al., 2018).", "This paper analyzes the persuasive effect of style in news editorial argumentation on readers with different political ideologies (conservative vs. liberal).", "We model style with widely-used features capturing argumentativeness (Somasundaran et al., 2007), psychological meaning (Tausczik and Pennebaker, 2010), and similar (Section 3).", "Based on the NYTimes editorial corpus of El Baff et al. 
(2018) with ideology-specific effect annotations (Section 4), we compare style-oriented with content-oriented classifiers for persuasive effect (Section 5).[1]", "While the general performance of effect prediction seems somewhat limited on the corpus, our experiments yield important results: Conservative readers seem largely unaffected by the style of the (liberal) NYTimes, matching the intuition that content is what dominates opposing ideologies.", "On the other hand, the style features predict the persuasive effect on liberal readers even better than the content features while being complementary.", "That is, style matters as soon as ideology matches.", "Knowing about the specific structure of news editorials, we finally obtain common stylistic choices in their leads, bodies, and endings through clustering.", "From these, we derive writing style patterns that challenge or reinforce the stance of (liberal) readers of (liberal) news editorials, giving insights into what makes argumentation effective.", "Compared to other argumentative genres (Stede and Schneider, 2018), news editorials use many rhetorical means to achieve a persuasive effect on readers (van Dijk, 1995).", "Computational research has dealt with news editorials for retrieving opinions (Yu and Hatzivassiloglou, 2003; Bal, 2009), mining arguments (Al-Khatib et al., 2017), and (Footnote 1: For reproducibility, the code of our experiments can be found here: https://github.com/webis-de/acl20-editorials-style-persuasive-effect) (Table 1, summary of the style feature types in our dataset: Linguistic Inquiry and Word Count, psychological meaningfulness in percentiles (Pennebaker et al., 2015); NRC emotion and sentiment lexicon, counts of emotion (e.g., sad) and polarity words (Mohammad and Turney, 2013); Webis Argumentative Discourse Units, counts of each evidence type, e.g., statistics (Al-Khatib et al., 2017); MPQA Arguing Lexicon, counts of 17 types of arguing, e.g., assessments (Somasundaran et al., 2007); MPQA Subjectivity Classifier, counts of subjective and objective sentences (Riloff and Wiebe, 2003))", "analyzing their properties (Bal and Dizier, 2010; Scheffler and Stede, 2016).", "While Al-Khatib et al. (2016) modeled the structure underlying editorial argumentation, we use the corpus of El Baff et al. (2018), meant to study the persuasive effects of editorials depending on the readers' political ideology.", "Halmari and Virtanen (2005) state that four aspects affect persuasion in editorials: linguistic choices, prior beliefs of readers, prior beliefs and behaviors of authors, and the effect of the text.", "Persuasive effectiveness reflects the rhetorical quality of argumentation (Wachsmuth et al., 2017).", "To assess effectiveness, Zhang et al. (2016) modeled the flow of content in debates, and Wachsmuth et al. (2016) the argumentative structure of student essays.", "Others combined different features for these genres (Persing and Ng, 2015).", "The impact of content selection relates to the notion of framing (Ajjour et al., 2019) and is well-studied in theory (van Eemeren, 2015).", "Like Wang et al. (2017), however, we hypothesize that content and style achieve persuasion jointly.", "We target argumentative style here primarily, and we analyze its impact on liberal and conservative readers.", "In related work, Lukin et al. 
(2017) found that emotional and rational arguments affect people with different personalities, and Durmus and Cardie (2018) take into account the religious and political ideology of debate portal participants.", "In follow-up work, Longpre et al. (2019) observed that style is more important for decided listeners.", "Unlike them, we focus on the stylistic choices made in well-planned argumentative texts.", "The lead paragraphs and the ending of an editorial have special importance (Rich, 2015).", "Hynds (1990) analyzed how leads and endings changed over time, whereas Moznette and Rarick (1968) examined the readability of an editorial based on them.", "To our knowledge, however, no one has investigated their importance computationally so far.", "In this paper, we close this gap by analyzing what style of leads and endings is particularly effective compared to the editorial's body.", "To model style, we need to abstract from the content of a news editorial.", "This section outlines the feature types that we employ for this purpose.", "Most of them have been widely used in the literature.", "Table 1 summarizes all features.", "LIWC Psychological word usage is reflected in the Linguistic Inquiry and Word Count (Tausczik and Pennebaker, 2010).", "LIWC is a lexicon-based text analysis that assigns words to psychologically meaningful categories (Tausczik and Pennebaker, 2010).", "We use the LIWC version of Pennebaker et al. (2015), which contains 15 dimensions, listed in the following with examples.", "(1) Language metrics: words per sentence, long words.", "(2) Function words: pronouns, auxiliaries.", "(3) Other grammar: common verbs, comparisons.", "(4) Affect words: positive and negative emotion.", "(5) Social words: family, friends.", "(6) Cognitive processes: discrepancies, certainty.", "(7) Perceptual processes: feeling, seeing.", "(8) Biological processes: body, health.", "(9) Core drives and needs: power, reward focus.", "(10) Time orientation.", "(11) Relativity.", "(12) Personal concerns.", "(13) Informal speech.", "(14) Punctuation.", "(15) Summary variables.", "The last dimension (15) contains four variables, each of which is derived from various LIWC dimensions:", "(a) Analytical thinking (Pennebaker et al., 2014): The degree to which people use narrative language (low score) or more logical and formal language (high score).", "(b) Clout (Kacewicz et al., 2014): The relative social status, confidence, and leadership displayed in a text.", "(c) Authenticity (Newman et al., 2003): The degree to which people reveal themselves authentically.", "(d) Emotional tone (Cohn et al., 2004): Negative emotions for scores lower than 50, and positive emotions otherwise.", "NRC Emotion&Sentiment To represent the mood of editorials, we use the NRC lexicon of Mohammad and Turney (2013).", "NRC contains a set of English words and their associations with (1) emotions such as anger, disgust, and fear, as well as (2) negative and positive sentiment polarities.", "Webis ADUs To identify argumentative units in editorials that present evidence, we use the pre-trained evidence classifier of Al-Khatib et al. (2017).", "For each editorial, we identify the number of sentences that manifest anecdotal, statistical, and testimonial evidence, respectively.", "MPQA Arguing Somasundaran et al. 
(2007) constructed a lexicon that includes various patterns of arguing, such as assessments, doubt, authority, and emphasis.", "For each pattern, we have one feature that represents the count of the respective pattern in an editorial.", "MPQA Subjectivity We apply the subjectivity classifier provided in OpinionFinder 2.0 (Riloff and Wiebe, 2003; Wiebe and Riloff, 2005) to the editorials, in order to count the number of subjective and objective sentences there.", "As the basis of our analysis, we use the Webis-Editorial-Quality-18 corpus (El Baff et al., 2018).", "The corpus includes persuasive effect annotations of 1000 English news editorials from the liberal New York Times (NYTimes).[2]", "The annotations capture whether a given editorial challenges the prior stance of readers (i.e., making them rethink it, but not necessarily change it), reinforces their stance (i.e., helping them argue better about the discussed topic), or is ineffective for them.", "Each editorial has been annotated by six annotators: three with liberal and three with conservative ideology.", "To evaluate an editorial's persuasive effect on liberals, we computed the majority vote of their annotations for the editorial (and, similarly, for conservatives).", "We ended up with 979 editorials with effect labels for liberals and conservatives, because we found 21 duplicate editorials with the same content but different IDs (for these, we use the majority vote across all duplicates).", "The corpus does not have predefined evaluation datasets.", "To mimic real-life scenarios, we chronologically split it into a training set (oldest 80%) and a test set (newest 20%).", "Table 2 shows the distribution of ideology-specific effects in the datasets.", "Footnote 2: For copyright reasons, the corpus provides only annotations for the IDs of editorials.", "The actual texts of these editorials come from the NYTimes Annotated Corpus (Sandhaus, 2008).", "To assess the impact of news editorial style on readers, we employ our style-based features on the task of predicting an editorial's persuasive effect: Given either of the two ideologies (liberal or conservative), predict for each editorial whether it is challenging, reinforcing, or ineffective.", "We developed separate prediction models for the effect on liberals and conservatives, respectively.", "For each style feature type and for their combinations, we trained one SVM model with a linear kernel on the training set using scikit-learn (Pedregosa et al., 2011); a minimal sketch of this setup appears after this paper's text.", "Given the dataset split mentioned above (training set 80%, test set 20%), we tuned the SVM's cost hyperparameter using grid search with 5-fold cross-validation on the training set.", "Since the distribution of effect labels is highly skewed, we set the hyperparameter class_weight to balanced.", "We then trained the best model on the whole training set and evaluated it on the test set.", "For comparison, we also built models for standard content features (lemma 1- to 3-grams), and we consider the random baseline that picks an effect class by chance.", "For both ideologies, Table 3 reports the macro- and micro-F1 scores for the style features, their best-performing combination,[3] the content features, and the best combination of content and style.[4]", "We computed significance using Wilcoxon's test[5] to reveal differences between each two approaches among best style, content, best content+style, and baseline.", "We obtained the means of the F1 scores used in the significance tests by conducting five-fold cross-validation on the test set, 
using the same SVM hyperparameters as above.", "Footnote 3: Best style for liberals: LIWC, MPQA Subjectivity.", "Best style for conservatives: NRC Emotion&Sentiment, Webis ADUs.", "Footnote 4: Content+style for liberals: LIWC, MPQA Arguing, MPQA Subjectivity, Content; for conservatives: MPQA Arguing, Content.", "Footnote 5: A non-parametric test was needed because a normal distribution was not given.", "In general, the results indicate that the persuasive effect seems hard to predict on the given corpus.", "Still, we observe that the style features play a notable role in predicting the effect of editorials on liberals.", "They achieve a significantly better macro-F1 score of 0.43 when combined with content, compared to 0.36 when using content alone, at p < 0.05.", "On the other hand, the F1 scores of content (macro 0.37, micro 0.38) and style (both 0.36) in predicting the effect on conservatives are insignificantly different even from the baseline (0.33, 0.34).", "These results suggest that style is important as soon as the ideology of a reader matches that of the news portal (at least, this holds for liberal ideology), but not if it mismatches (here, conservative).", "Observing that the style of NYTimes editorials affects liberal readers, we seek to learn what patterns of writing style make their argumentation effective.", "To this end, we (1) abstract each discourse part of an editorial (lead, body, ending) into a style label using cluster analysis and (2) identify sequential patterns of style labels that are specific to challenging, ineffective, and reinforcing editorials.", "Clustering Styles of Discourse Parts Given the importance of specific discourse parts of editorials (Rich, 2015), we split each editorial into lead, body, and ending.", "For each part, we separately perform three steps on the training set of the given corpus:[6]", "Footnote 6: The corpus of Sandhaus (2008) contains lead and paragraph annotations.", "The lead spans either the first two paragraphs (994 editorials), the first three (5), or the first only (1).", "We consider the last paragraph as the ending in all cases.", "1. Extract the style features from Section 3.", "2. Perform a cluster analysis on the style features using cosine k-means; k is determined with the elbow method on the inertia of the clusters (a clustering sketch appears after this paper's text).", "3. 
Derive cluster labels from the most discriminating features across clusters: For each cluster, we determine those 2-3 values (e.g., high tone, low authenticity) whose combination suffices to significantly distinguish the cluster from the others.", "With high to very low, we mean here that a feature has significantly higher or lower scores compared to other clusters.[7] Table 4 shows the distribution of lead, body, and ending clusters over challenging, ineffective, and reinforcing editorials.", "For each discourse part, the most discriminating feature is tone, followed by authenticity.", "The former combines positive (higher scores) and negative (lower scores) emotional tones (Cohn et al., 2004).", "Footnote 7: For each feature (e.g., tone), we measured significance using ANOVA (in case of homogeneity and normality) or Kruskal-Wallis (otherwise).", "In the case of p < 0.05, we conducted post-hoc analysis (independent t-test in case of normality, Mann-Whitney otherwise) with Bonferroni correction for each cluster pair, and we calculated the effect size r.", "Based on the effect size values, we deduced the labels of each cluster and the relative differences between them (high to very low).", "The latter indicates the degree to which people authentically reveal themselves; the higher the score, the more personal, humble, or vulnerable the writer is (Newman et al., 2003).", "In Table 4, we observe, for example, that the lead of challenging editorials over-proportionally often shows low authenticity, or that bodies with a positive tone but low authenticity tend to be ineffective.", "Identification of Style Patterns From Table 4, we determine the (at most) two labels for each discourse part that are most specific to each of the three persuasive effect classes.", "From these, we build all possible lead-body-ending sequences, as visualized in Figure 1.", "According to a χ² test, the distributions of these sequences differ significantly at p < 0.05.", "They reveal the following patterns of NYTimes editorials for liberal readers: Challenging editorials often begin with a polar emotional tone, followed by a negative tone.", "They tend to have low authenticity (i.e., not humble/personal) in the whole discourse (see Figure 2 for an example).", "Ineffective editorials over-proportionally often start with authenticity and a dull tone.", "They then tend to diffuse in different directions and to have a short ending paragraph.", "Reinforcing editorials tend to start and end with a negative tone.", "They often avoid relativity words.", "While these insights are naturally still vague to some extent and require more analysis in follow-up research, they show a first way of capturing the style of editorial argumentation.", "This paper analyzes the importance of news editorials' style in achieving persuasive effects on readers with different political ideologies.", "We find evidence that style has a significant influence on how a (liberal) editorial affects a (liberal) reader.", "Inspired by the theory of the high importance of the lead and ending in writing editorials (Rich, 2015), we also statistically reveal common effective and ineffective style sequences (lead-body-ending).", "Our findings help to understand how effective argumentation works in the political sphere of editorial argumentation and how to generate such argumentation.", "In related work, El Baff et al. 
(2019) revealed the impact of style features on generating pathos- and logos-oriented short argumentative texts, based on the rhetorical strategies discussed by Wachsmuth et al. (2018).", "With the findings of this paper, we go beyond this, laying the basis of a style-dependent generation model for more sophisticated argumentation, as found in news editorials." ]
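The classifier setup of the editorials paper above (linear SVM, grid-searched cost hyperparameter, balanced class weights, macro-F1 model selection) can be sketched as follows. The synthetic feature matrix stands in for the LIWC/NRC/Webis/MPQA style features; nothing here is the authors' released code.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 40))                      # stand-in style feature matrix
y_train = rng.choice(["challenging", "ineffective", "reinforcing"],
                     size=200, p=[0.25, 0.55, 0.20])      # skewed effect labels

svm = SVC(kernel="linear", class_weight="balanced")       # balance the skewed classes
grid = GridSearchCV(svm, {"C": [0.01, 0.1, 1, 10, 100]},  # tune the cost hyperparameter
                    cv=5, scoring="f1_macro")             # 5-fold CV on the training set
grid.fit(X_train, y_train)                                # refits the best model on all of X_train
print(grid.best_params_, grid.best_score_)
```

In the paper's setting, the resulting best model would then be evaluated once on the chronologically held-out test set.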
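The clustering step (cosine k-means with k chosen by the elbow method on inertia) might look like the sketch below. Since scikit-learn's KMeans uses Euclidean distance, we L2-normalize the feature vectors so that Euclidean k-means approximates cosine k-means; the elbow heuristic and all names are our own assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def cluster_part(features: np.ndarray, k_max: int = 10):
    """Cluster the style features of one discourse part (lead, body, or ending)."""
    X = normalize(features)  # unit-length vectors -> cosine-like geometry
    inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
                for k in range(2, k_max + 1)]
    # One simple elbow heuristic: pick the k after which inertia improvements flatten.
    drops = -np.diff(inertias)
    k_best = int(np.argmax(drops[:-1] / np.maximum(drops[1:], 1e-12))) + 3
    return KMeans(n_clusters=k_best, n_init=10, random_state=0).fit_predict(X), k_best

lead_feats = np.random.default_rng(1).normal(size=(300, 40))  # stand-in lead features
labels, k = cluster_part(lead_feats)
print(k, np.bincount(labels))
```

Each resulting cluster would then be labeled by its 2-3 most discriminating feature values, as described in step 3 above.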
[ "abstain", "abstain", "objective", "objective", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "result", "abstain", "abstain", "result", "abstain", "other", "other", "other", "method", "other", "other", "other", "other", "other", "abstain", "method", "other", "other", "method", "other", "other", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "method", "abstain", "result" ]
[ "We compare three new datasets for question answering: SQuAD 2.0, QuAC, and CoQA, along several of their new features: (1) unanswerable questions, (2) multi-turn interactions, and (3) abstractive answers.", "We show that the datasets provide complementary coverage of the first two aspects, but weak coverage of the third.", "Because of the datasets' structural similarity, a single extractive model can be easily adapted to any of the datasets and we show improved baseline results on both SQuAD 2.0 and CoQA.", "Despite the similarity, models trained on one dataset are ineffective on another dataset, but we find moderate performance improvement through pretraining.", "To encourage cross-evaluation, we release code for conversion between datasets at https://github.com/my89/co-squac.", "Question answering on textual data has served as a challenge problem for the NLP community (Voorhees, 2001; Richardson et al., 2013).", "With the development of large scale benchmarks and sufficiently simple evaluations (Trischler et al., 2016; Nguyen et al., 2016; Hermann et al., 2015) progress has been rapid.", "In recent evaluation on SQuAD (Rajpurkar et al., 2016), performance exceeded that of annotators (Wang et al., 2018; Hu et al., 2017; Wang et al., 2017).", "In response to this development, there have been a flurry of new datasets.", "In this work, we analyze three such new proposed datasets, SQuAD 2.0 (Rajpurkar et al., 2018), QuAC (Choi et al., 2018), and CoQA (Reddy et al., 2018).", "1 In each of these datasets, crowd workers are asked to (1) produce questions about a paragraph of text (context) and (2) produce a reply 1 A review of other new datasets is in the related work.", "by either indicating there is no answer, or providing an extractive answer from the context by highlighting one contiguous span.", "QuAC and CoQA contain two other features: questions are asked in the form of a dialog, where co-reference to previous interactions is possible and directly answering yes/no is possible.", "CoQA also allows workers to edit the spans to provide abstractive answers.", "2 We compare these three datasets along several of their new features: (1) unanswerable questions, (2) multi-turn interactions, and (3) abstractive answers.", "Unanswerable question coverage is complementary among datasets; SQuAD 2.0 focuses more on questions of extreme confusion, such as false premise questions, while QuAC primarily focuses on missing information.", "QuAC and CoQA dialogs simulate different types of user behavior: QuAC dialogs often switch topics while CoQA dialogs include more queries for details.", "Unfortunately, no dataset provides significant coverage of abstractive answers beyond yes/no answers, and we show that a method can achieve an extractive answer upper bound of 100 and 97.8 F1 on QuAC and CoQA , respectively.", "Motivated by the above analysis, we apply the baseline presented in QuAC (Choi et al., 2018), BiDAF++, a model based on BiDAF (Seo et al., 2016), augmented with self attention (Clark and Gardner, 2018) and ELMo contextualized embeddings (Peters et al., 2018) to all datasets.", "Experiments show that this extractive baseline outperforms existing extractive and abstractive baselines on CoQA by 14.2 and 2.7 F1 respectively.", "Finally, we show models can transfer between datasets with pretraining yielding moderate gains.", "3 2 Also, SQuAD 2.0 and QuAC cover only Wikipedia text, CoQA covers six other domains and QuAC is the only one of these datasets that doesn't allow the questioner to see the 
context before formulating a question.)", "(Footnote 3: To facilitate easy future cross-evaluation, we release tools for conversion between these datasets.)", "CoQA contains questions that drill into details about topics and cover 60% of sentences in the context, while QuAC dialogs switch topics more often and cover less than 30% of sentences.", "Neither dataset has a significant number of returns to previous topics, clarifications, or definitional interactions.", "In this section, we analyze unanswerable questions, dialog features, and abstractive answers in SQuAD 2.0, QuAC, and CoQA.", "All analysis was performed by the authors, on a random sample of 50 contexts (300-700 questions) from the development set of each dataset.", "In Table 1 we compare types of unanswerable questions across datasets.", "We identify five types of questions found between the datasets:", "1. Entity Salad A nonsensical reference to entities found in the context or made-up entities (e.g., What infinite hierarchy implies that the graph isomorphism problem is NQ-complete?).", "Such questions are unanswerable for any context.", "2. False Premise A fact that contradicts the context is asserted in the question (e.g., When is the correlation positive?, while the context says the correlation is strictly negative).", "3. Topic Error A question that references an entity in the context, but the context does not focus on that entity (e.g., How many earthquakes occur in California?, when the article focus is actually about Southern California).", "Such questions potentially have answers, but it would be unlikely for the answer to be found in the context.", "4. Missing Information A question whose answer could plausibly be in the context but is not (e.g., What is the record high in January?, where the article is about temperature extremes).", "Such questions have an answer, but it is not mentioned.", "5. Content Negation A question which asks for the opposite information of something mentioned in the context (e.g., Who didn't cause the dissolution of the Holy Roman Empire?).", "Such questions either have answers that are the set of all entities other than the one mentioned, or answers that could be found in some other context.", "Results SQuAD 2.0 contains the highest diversity of unanswerable questions of all datasets analyzed.", "Some SQuAD 2.0 questions are unlikely to be asked without significant foreknowledge of the context material and do not occur in QuAC.[4]", "Both SQuAD 2.0 and QuAC cover a significant number of unanswerable questions that could plausibly be in the article.", "The difference in settings and distributions of unanswerable questions in SQuAD 2.0 and QuAC appears to be complementary: SQuAD 2.0 focuses more on questions simulating questioner confusion, while QuAC primarily focuses on missing information.[5]", "2.2 Dialog Features In Table 2 we analyze five dialog behaviors:", "1. Topic Shift A question about something not previously discussed (e.g., Q: How does he try to take over? ... Q: Where do they live?).", "2. Drill Down A request for more information about a topic being discussed (e.g., A: The Sherpas call Mount Everest Chomolungma. Q: Is Mt. Everest a holy site for them?)", "3. 
Topic Return Asking about a topic again after it had previously been shifted away from.", "4 Such questions resemble text from entailment datasets such as SNLI (Bowman et al., 2015) and seem more likely to arise if questioners are receiving very complex information and become confused.", "5 CoQA does not contain a significant number of unanswerable questions, and many of the ones that do exist are erroneously marked.", "4. Clarification Reformulating a question that had previously been asked.", "5. Definition Asking what is meant by a term (e.g. What are polygenes?) Results QuAC and CoQA contain many similar features but at very different rates, offering complementary coverage of types of user behavior.", "CoQA dialogs drill down for details significantly more frequently and cover more than 60% of sentences in the context material (Sentence Cover-age).", "QuAC dialogs shift to new topics frequently and cover less than 30% of sentences in the context.", "Both datasets contain only a small numbers of definition questions and returns to previous topics and few requests for clarification.", "Table 3 compares abstractive behavior in CoQA and QuAC.", "We observed five phenomena:", "1. Yes/No Questions annotated with yes/no.", "In QuAC such questions and their corresponding yes or no are marked in addition to an extractive answer.", "In CoQA, the single token yes or no is simply asserted as the abstractive answer, with an extractive answer provided in the rationale (e.g. Q: Is atmosphere one of them? A: yes).", "2. Coref Coreference is added to previously mentioned entities in either context or question (e.g. Q: How was France's economy in the late 2000s? A: it entered the recession).", "3. Count Counting how many entities of some type were mentioned (e.g. Q: how many specific genetic traits are named? A: five)", "4. Picking A question that requires the answer to pick from a set defined in the question (e.g. Q: Is this a boy or a girl? A: boy)", "5. Fluency Adding a preposition, changing the form of a word, or merging two non-contiguous spans (e.g. Q: how did he get away? A: by foot) Results Both QuAC and CoQA have a similar rate of yes/no questions. QuAC contains no other abstractive phenomena while CoQA contains a Overall F1 DrQA (Extractive) 54.7 DrQA + PGNet (Abstractive) 66.2 BiDAF++ w/ 0-ctx 63.4 BiDAF++ w/ 3-ctx 69.2 Table 4: Development set performance by training BiDAF++ (Choi et al., 2018) models (extractive) on CoQA data with handling yes/no and no-answer questions as in QuAC. Despite being extractive, these models significantly outperform reported baselines, DrQA and DrQA + PGNet (Reddy et al., 2018). in-F1 out-F1 F1 DrQA 54.5 47.9 52.6 DrQA + PGNet 67.0 60.4 65.1 BiDAF++ w/ 3-ctx 69.4 63.8 67.8 Table 5: Test set results on CoQA. We report in domain F1 (in-F1), out of domain F1 on two held out domains, Reddit and Science (out-F1) and the overall F1 (F1). small number of predominately insertions, often at the beginning of an extractive span, for coreference and or other fluency improvements. Because abstractive behavior in CoQA includes mostly small modifications to spans in the context, the maximum achievable performance by a model that predicts spans from the context is 97.8 F1. 6 3 New Extractive Baseline for CoQA Our analysis strongly implies that beyond yes/no questions, abstractive behavior is not a significant component in either QuAC or CoQA. As such, QuAC models can be trivially adapted to CoQA. 
We train a set of BiDAF++ baselines from the original QuAC dataset release (Choi et al., 2018) by optimizing the model to predict the span with maximum F1 overlap with respect to annotated abstractive answers. 7 If the abstractive answer is ex-6 To compute the upper bound, if abstractive answer is exactly yes, no, or unknown, we consider the upper bound to be 100.", "Otherwise, we use the CoQA evaluation script to find a span in the context that has maximum F1 with respect to the abstractive answer.", "7 We use the implementation on http://allennlp.", "org , and do not modify any hyper-parameters except the the maximum dialog length and that models were allowed to train up to 65 epochs.", "actly yes or no, we train the model to output the whole rationale span, and classify the question as yes/no with the appropriate answer.", "At evaluation time, if the model predicts a question is a yes/no question, instead of returning the extracted span, we simply return yes or no.", "Results Table 4 and Table 5 summarize our results for training BiDAF++ with varying contexts on CoQA.", "Beyond the difference of underlying base question-answer models (DrQA (Chen et al., 2017) vs. BiDAF (Seo et al., 2016) with self attention (Clark and Gardner, 2018)), BiDAF++ has two core differences with respect to DRQA+PGNet: (1) instead of appending previous questions and answers to input question tokens, BiDAF++ marks answers of previous questions directly on the context, and (2) BiDAF++ uses contextualized word embeddings through ELMo (Pe-ters et al., 2018).", "These differences, in combination with appropriate handling of yes/no and unanswerable questions significantly improves on the existing extractive baseline (+14.2 F1) and even on the existing abstractive baseline (+2.7 F1) .", "In this section we consider whether models can benefit from transfer between SQuAD 2.0, QuAC, and CoQA, and show that the datasets, while ineffective for direct transfer, can be used as pretraining.", "In all experiments, we use BiDAF++, either with two context or no context, depending on if we are training for dialog settings or not, with default configurations.", "Models are trained by initializing from other models trained on different datasets and we do not decrease initial learning rates from just training directly on the target dataset.", "When SQuAD 2.0 is used to initialize models that use context, we randomly order questions in SQuAD 2.0 and train as if questions were asked in the form of a dialog.", "8 8 Likely a better strategy exists but we would like to demonstrate transfer in the simplest way.", "Results Tables 6-8 summarize our results.", "Across all of the datasets, BiDAF++ outperforms other baselines, and there exists at least one other dataset that significantly improves performance on a target dataset on average +2.1 F1 .", "Experiments do not support that direct transfer is possible.", "Other proposals exist other than the three we analyzed that expand on features in SQuAD (Ra-jpurkar et al., 2016).", "For example, maintaining question independence of context to reduce the role of string matching and having long context length (Joshi et al., 2017; Kocisky et al., 2017), higher level reasoning (Khashabi et al., 2018; Clark et al., 2018; Yang et al., 2018), multi-turn information seeking interactions, in either table settings (Iyyer et al., 2017; Talmor and Berant, 2018; Saha et al., 2018), regulation settings (Saeidi et al., 2018), or Quiz Bowl settings (Elgohary et al., 2018).", "Other work considers multi-modal 
contexts where interactions are a single turn (Tapaswi et al., 2016; Antol et al., 2015; Lei et al., 2018) or multi-turn (Das et al., 2017; Pasunuru and Bansal, 2018).", "These efforts contain alternative challenges than ones we analyze in this paper.", "We thank Eunsol Choi, Hsin-Yuan Huang, Mohit Iyyer, He He, Yejin Choi, Percy Liang, and Luke Zettlemoyer for their helpful discussions in formulating this work.", "Also, Siva Reddy and Danqi Chen for help evaluating on CoQA and all reviewers for their comments." ]
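The extractive upper bound used above — 100 for answers that are exactly yes/no/unknown, otherwise the best token-overlap F1 that any contiguous context span can reach against the abstractive answer — reduces to a small search. The following is a minimal sketch assuming whitespace tokenization and the plain token-overlap F1; the official CoQA evaluation script additionally normalizes case, punctuation, and articles.

```python
from collections import Counter

def token_f1(pred_tokens, gold_tokens):
    """Token-overlap F1 between a predicted span and a gold answer."""
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def extractive_upper_bound(context, abstractive_answer, max_span_len=30):
    """Best score a span-prediction model could achieve for one answer."""
    answer = abstractive_answer.strip().lower()
    if answer in {"yes", "no", "unknown"}:
        return 100.0  # handled by classification rather than span extraction
    ctx, gold = context.lower().split(), answer.split()
    best = 0.0
    for i in range(len(ctx)):
        for j in range(i + 1, min(i + 1 + max_span_len, len(ctx) + 1)):
            best = max(best, token_f1(ctx[i:j], gold))
    return 100.0 * best
```

Averaging this quantity over a dataset's development answers is what yields the 97.8 F1 figure for CoQA reported above (and 100 for QuAC, which is extractive by construction).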
[ "objective", "objective", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "other", "abstain", "method", "method", "objective", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "method", "other", "other" ]
[ "The task of Natural Language Inference (NLI) is widely modeled as supervised sentence pair classification.", "While there has been a lot of work recently on generating explanations of the predictions of classifiers on a single piece of text, there have been no attempts to generate explanations of classifiers operating on pairs of sentences.", "In this paper, we show that it is possible to generate token-level explanations for NLI without the need for training data explicitly annotated for this purpose.", "We use a simple LSTM architecture and evaluate both LIME and Anchor explanations for this task.", "We compare these to a Multiple Instance Learning (MIL) method that uses thresholded attention make token-level predictions.", "The approach we present in this paper is a novel extension of zero-shot single-sentence tagging to sentence pairs for NLI.", "We conduct our experiments on the well-studied SNLI dataset that was recently augmented with manually annotation of the tokens that explain the entailment relation.", "We find that our white-box MIL-based method, while orders of magnitude faster, does not reach the same accuracy as the black-box methods.", "Large-scale datasets for Natural Language Inference (NLI) (Bowman et al., 2015; Williams et al., 2018) have enabled the development of many deep-learning models (Rocktaschel et al., 2016; Peters et al., 2018; Radford et al., 2018).", "The task is modeled as 3-way classification of the entailment relation between a pair of sentences.", "Model performance is assessed through accuracy on a held-out test set.", "While state-of-the-art models achieve high accuracy, their complexity makes it difficult to interpret their behavior.", "Kim, 2017).", "It has been studied in natural language processing through both black-box analysis, and through modifications to the models under investigation; we refer to the latter approaches as white-box .", "Common black-box techniques generate explanations of predictions through training meta-models by perturbing input tokens (Ribeiro et al., 2016; Nguyen, 2018; Ribeiro et al., 2018) or through interpretation of model sensitivity to input tokens (Li et al., 2016; Feng et al., 2018).", "White-box methods induce new features (Aubakirova and Bansal, 2016), augment models to generate explanations accompanying their predictions (Lei et al., 2016; Camburu et al., 2018), or expose model internals such as magnitude of hidden states (Linzen et al., 2016), gradients (as a proxy for model sensitivity to input tokens (Li et al., 2016)) or attention (Bahdanau et al., 2014; Xu et al., 2015).", "Model explanations typically comprise a list of features (such as tokens) that contributed to the prediction and can serve two distinct purposes: acting either as a diagnostic during model development or to allow for a rationale to be generated for a system user.", "While methods for explaining predictions may output what was salient to the model, there is no guarantee these will correspond to the features that users deem important.", "In this paper we introduce a white-box method that thresholds the attention matrix of a neural entailment model to induce token-level explanations.", "To encourage the model's prediction of salient tokens to correspond better to the tokens users would find important, our approach uses Multiple Instance Learning (MIL) (Maron and Lozano-Perez, 1998) to regularize the attention distributions.", "We compare this against two black-box methods: LIME (Ribeiro et al., 2016) and Anchor Explanations (Ribeiro et 
al., 2018); both whiteand black-box methods are applied to a simple neural architecture relying on independent sentence encoding with cross-sentence attention, and thus could also be applied to more complex architectures of the same family.", "Finally, we also compare against a fully supervised baseline trained to jointly predict entailment relation and token-level explanations.", "Our experiments are conducted on e-SNLI (Camburu et al., 2018), a recently introduced extension to SNLI (Bowman et al., 2015), containing human-selected highlights of which words are required to explain the entailment relation between two sentences (see Fig. 1).", "Our experimental results indicate that regularizing the model's attention distributions encourages the explanations generated to be better aligned with human judgments (even without our model having explicit access to the labels which tokens annotators found important).", "Compared to the baseline thresholded attention mechanism, our method provides an absolute increase in token-level precision and recall by 6 .", "68% and 28 .", "05% respectively for the hypothesis sentence for e-SNLI explanations.", "We also found that attention based explanations are not as reliable as black-box model explanation techniques, as indicated by higher F 1 scores for both LIME and Anchor Explanations.", "This is consistent with findings of contemporaneous work by Jain and Wallace (2019).", "However, we do show that, if generating explanations from a model is a requirement, incorporating an explicit objective in training can be beneficial.", "This can be particularly useful in practicw due to the computational cost of black-box model explanations, which in empirical evaluation we found to be orders of magnitude slower ( 0 . 01 seconds vs 64 seconds per instance).", "The model we use for both whiteand black-box experiments is based on an architecture widely adopted for sentence-pair classification (Lan and Xu, 2018).", "It comprises the following: Word Embeddings We use pretrained GloVe embeddings (Pennington et al., 2014) that were fixed during training.", "Sentence Encoding Both the premise and hypothesis are independently encoded with the same LSTM (Hochreiter and Schmidhuber, 1997), yielding h p and h h respectively.", "Attention A matrix of soft alignments between tokens in the premise sentence and the hypothesis sentence is computed using attention (Bahdanau et al., 2014) over the encodings.", "Like Parikh et al. 
(2016), we project the encoded sentence representations using a feed-forward network, f attend , ( u i = f attend ( h pi ) , v j = f attend ( h hj ) ) before computing the inner product: A ij = u Ti v j .", "Given a premise of length m , the attention distribution for the hypothesis sentence is a h = normalize ( A m, ) where linear normalization is applied (normalize ( w ) = w (cid:107) w (cid:107) 1 ).", "Likewise for the corresponding hypothesis of length n , the premise attention distribution is a p = normalize ( A ,n ) .", "Output Classifier We predict the class label through a feed-forward neural network, f cls , where both attended encodings of the premise and hypothesis final hidden states are concatenated as input: f cls ([ a pm h pm ; a hn h hn ]) .", "The logits are normalized using the softmax function, yielding a distribution over class labels y .", "Training The model is trained in a supervised environment using cross-entropy loss between the predicted class labels for an instance y and the labeled value in the dataset, formally defined in Section 3.", "Let x p = ( x p 1 , . . . , x pm ) and x h = ( x h 1 , . . . , x h n ) be sequences of tokens of length m and n respectively for the input premise and hypothesis sentences.", "Let y represent an entailment relation between x p and x h where y { entails , contradicts , neutral } .", "Labeled training data is provided of the form { ( x pk , x hk , y k ) } Kk =1 .", "For each instance, the model must generate an explanation e defined as a subset of zero or more tokens from both the premise and hypothesis sentences: e p P ( x p ) , e h P ( x h ) .", "We generate token-level explanations by thresholding token attention weights.", "Concretely, we select all tokens, x , with a weight greater than a threshold.", "While similar to Rei and Sgaard (2018), we incorporate a re-scaling using the tanh function: e p = { x pi | a pi A ,n tanh( a pi ) } and likewise for the hypothesis.", "Thresholding the attention distributions from our model will give an indication of which tokens the model is weighting strongly for the entailment task.", "However, as mentioned in the introduction, there is no guarantee that this method of explaining model behavior will correspond with tokens that humans judge as a reasonable explanation of entailment.", "To better align the attention-based explanations with the human judgments, we model the generation of explanations as Multiple Instance Learning (MIL) (Maron and Lozano-Perez, 1998).", "In training the model sees labeled bags (sentences) of unlabeled features (tokens) and learns to predict labels both for the bags and the features.", "In MIL, this is often achieved by introducing regularizers when training.", "To encourage our NLI model to predict using sparser attention distributions (which we expect to correspond more closely with human token-level expla-nations), we introduce the following regularizers into the loss function: R 1 : This entropy-based term allows us to penalize a model that uniformly distributes probability mass between tokens.", "R 1 = K (cid:88) k =1 (cid:0) H ( a pk ) + H ( a hk ) (cid:1) = K (cid:88) k =1 ( m (cid:88) i =1 a pk,i log a pk,i + n (cid:88) j =1 a hk,j log a hk,j ) (1) R 2 : This term, adapted from a loss function for zero-shot tagging on single sentences (Rei and Sgaard, 2018), penalizes the model for breaking the assumption that at least one token must be selected from both premise and hypothesis sentences to form an explanation.", "The only exception is that, following 
the e-SNLI dataset annotation by Camburu et al. (2018), if the neutral entailment is predicted, no tokens are selected from the premise.", "R 3 : This term, also adapted from Rei and Sgaard (2018), encodes the assumption that not all tokens must be selected in the explanation.", "This is achieved by penalizing the smallest nonzero attention weight, which has the effect of encouraging at least one weight to be close to zero.", "The loss function used for training of our proposed model incorporating the regularizers which are controlled with hyperparameters is:", "We use two established black-box model explanation techniques for generating token-level explanations: LIME (Ribeiro et al., 2016) and Anchors (Ribeiro et al., 2018).", "Both techniques probe a classifier by making perturbations to a single input and modeling which of these perturbations in-fluence the classification.", "To adapt these for use in NLI, we make a simple modification that runs the analysis twice: once for the premise sentence and once for the hypothesis sentence on the NLI model described in Section 2.", "LIME Generates local explanations for a classifier through the introduction of a simple meta-model that is trained to replicate a local decision boundary of an instance under test.", "The training data is generated through observing the impact on classification when removing tokens from the input string.", "Anchor Explanations Considers the distribution of perturbed instances in the neighborhood of the instance under test through word substitution to identify a rule (a set of tokens in our case) for which the classification remains unchanged.", "For a supervised model we build upon the model discussed in Section 2, adding components to support LSTM-CRF-based tagging (Lample et al., 2016).", "We use the following architecture: Model Runtime (s) Token Explanation (%) per instance Premise Hypothesis P R F1 P R F1 Fully Supervised LSTM-CRF 0.02 86.91 40.98 55.70 81.16 54.79 65.41 Thresholded Attention (Linear) 0.01 19.96 19.67 19.56 46.70 34.92 39.89 + MIL Regularizers (R1) -16.59 15.67 16.12 50.02 42.44 46.01 + MIL Regularizers (R2 + R3) -18.19 20.18 19.13 51.29 50.73 51.00 + MIL Regularizers (R1 + R2 + R3) -19.23 26.21 22.18 53.38 62.97 57.78 LIME 64 60.56 48.28 53.72 57.04 66.92 61.58 Anchors 10 42.06 20.04 27.14 53.12 63.94 58.03 Table 1: Token-level scores for human-selected explanations of NLI using the e-SNLI dataset.", "Context Encoding We use the same pretrained GloVe embeddings (Pennington et al., 2014) that were fixed during training.", "The premise and hypothesis sentence were independently encoded with the same LSTM (Hochreiter and Schmidhu-ber, 1997) yielding h p and h h respectively and attended to as per the description in Section 2.", "Outputs The model is jointly trained with two output objectives: a labeling objective and a tagging objective.", "During training, the losses for both tasks are equally weighted.", "The first output objective is the three-way SNLI classification over the pair of sentences.", "This is the same component as the model presented in Section 2.", "The second objective is a binary tagging objective over the highlighted token-level explanations.", "We use a jointly-trained LSTM-CRF decoder architecture (Lample et al., 2016) which operates a CRF over encoded representations for each token.", "In our model, we independently decode the premise and hypothesis sentences.", "The inputs to our CRF are the attended premise and hypothesis: a p (cid:12) h p and a h (cid:12) h h respectively 
(where (cid:12) is the point-wise multiplication between the attention vector and the encoded tokens).", "We evaluate the generated explanations through evaluation of token-level F 1 scores comparing them against tokens selected by humans to explain the entailment relation using the e-SNLI dataset (Camburu et al., 2018).", "The development split of the e-SNLI dataset is used for hyperparam-eter selection and we report results on the test split.", "Where multiple annotations are available for a sentence pair, the union of the annotations is taken.", "We also report average runtime per sentence in seconds measured using 1 thread on an AWS c4.xlarge instance.", "Implementation Details The model is implemented in AllenNLP (Gardner et al., 2018) and we optimized our model with Adagrad (Duchi et al., 2011), selecting the models which attained high hypothesis F 1 without greatly affecting the accuracy of entailment task (approx 81% for the thresholded attention model).", "The cell state and hidden dimension was 200 for the LSTM sentence encoder.", "The projection for attention, f attend , was a single layer 200 dimension feed forward network with ReLU activation.", "The final feed forward classifier, f cls , dimension was (200 , 200 , 3) and ReLU activation over the first 2 layers.", "For the comparison against black-box explanation mechanisms, we use the code made public by the authors of the respective works setting any hyperparameters to the default values or those suggested in the papers.", "Results Our experimental results (Table", "1) indicate that the LIME black-box explanation technique over the model described in Section 2 provides token-level explanations that are more similar to human judgments than thresholding the attention distributions.", "We show that the addition of MIL regularizers for generating explanations using thresholded attention improved precision and recall hypothesis explanations.", "However, similar improvements were not realized for the premise sentence.", "While the black-box methods generated better explanations than thresholded attention, they were 3 orders of magnitude slower.", "Only LIME was able to generate good token-level explanations for the premise.", "This is in contrast to the attention-based explanations of the premise (in the model that LIME was run on) which could not generate satisfactory explanations (see row 2 of Table 1).", "This supports findings in recent works (Jain and Wallace, 2019) that indicate that attention does not always correspond to other measures of feature importance.", "We also found that the black-box model explanation methods behave differently given the same model under test: the premise explanation generated by the Anchors method was more in line with what the model attended to, reflected by the lower recall.", "The fully supervised model had high precision yet (relatively) low recall.", "We observed it has a bias towards predicting common words that often appear in highlights (e.g. 
man', woman', dog', people') for both premise and hypothesis sentences rather than highlighting keywords that would form an instance-specific explanation.", "This behaviour is also more pronounced in the premise sentence highlights rather than the hypothesis.", "We reason that this due to how the SNLI dataset was constructed: a premise sentence was used to generate 3 hypothesis sentences (entailed, contradicted and neutral).", "This is corroborated by a survey of 250 instances from the SNLI dataset, where we found that all or part of the subject noun phrase remained unchanged between the premise and hypothesis sentences 60% of the time.", "While the supervised model correctly captured commonly occurring text patterns, as demonstrated by the high F 1 scores, this behaviour alone was not sufficient to identify tokens that correlated with the entailment label.", "We found that most of the commonly predicted tokens by our supervised model did not appear in lists of features highly correlated with the entailment label (Poliak et al., 2018; Gururan-gan et al., 2018).", "In this paper we explored how to generate token-level explanations from NLI models.", "We compared the LIME and Anchors black-box methods against a novel, white-box Multiple Instance Learning (MIL) method and a fully supervised baseline.", "The explanations generated by LIME were more similar to the human judgments of the tokens that justify an entailment relation than the attention thresholding approach.", "This corroborates contemporaneous work (Jain and Wallace, 2019) indicating a lack of correspondence between attention and other measures of feature importance.", "The MIL method we introduced steered the attention distributions over tokens in our model to correspond closer to the human judgments allowing better explanations to be generated.", "Even though, when considering the token-level F 1 score, the attention-based explanations were not as good as the black-box techniques we evaluated, they were orders of magnitude faster.", "The attention thresholding model we tested did not generate satisfactory explanations had low F 1 for the premise sentences.", "A possible explanation for the poor performance is what is found by Rei and Sgaard (2018) who show that MIL regularizers performed better when there is a higher degree of association between the sentence-level label and the token-level labels.", "Our model used independent encodings of the premise and hypothesis but in NLI there is a strong dependence between the two sentences; thus the entailment prediction should be explained through pairwise token comparisons (e.g. synonyms, upward entailment, etc.).", "In future work we plan to address this by adding explicit cross-sentence semantic knowledge (Joshi et al., 2018).", "This work was conducted while James Thorne was an intern at Amazon.", "Andreas Vlachos is supported by the EU H2020 SUMMA project (grant agreement number 688139)." ]
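A compact PyTorch sketch of the three attention regularizers described in the record above. The entropy term follows Equation (1); since the extracted formulas for R2 and R3 are incomplete, their max/min forms here are an assumption modeled on the zero-shot tagging losses of Rei and Søgaard (2018).

```python
import torch

def mil_regularizers(attn_p, attn_h):
    """attn_p, attn_h: (batch, len) attention distributions over premise
    and hypothesis tokens; each row sums to 1. Returns scalars R1, R2, R3."""
    eps = 1e-8
    entropy = lambda a: -(a * (a + eps).log()).sum(dim=-1)
    # R1: adding entropy to the loss penalizes uniform attention.
    r1 = (entropy(attn_p) + entropy(attn_h)).sum()
    # R2: at least one token should be selected (max weight pushed to 1).
    r2 = ((attn_p.max(dim=-1).values - 1.0) ** 2
          + (attn_h.max(dim=-1).values - 1.0) ** 2).sum()
    # R3: not every token should be selected (min weight pushed to 0).
    r3 = ((attn_p.min(dim=-1).values ** 2)
          + (attn_h.min(dim=-1).values ** 2)).sum()
    return r1, r2, r3
```

The full training loss would then be the classification cross-entropy plus a weighted sum of the three terms, with the weights tuned on the e-SNLI development split.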
[ "abstain", "abstain", "objective", "method", "method", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "result", "method", "method", "result", "result", "method", "abstain", "abstain", "result", "abstain", "objective", "result", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "objective", "objective", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "other", "other" ]
[ "Generating open-domain conversational responses in the desired style usually suffers from the lack of parallel data in the style.", "Meanwhile, using monolingual stylistic data to increase style intensity often leads to the expense of decreasing content relevance.", "In this paper, we propose to disentangle the content and style in latent space by diluting sentence-level information in style representations.", "Combining the desired style representation and a response content representation will then obtain a stylistic response.", "Our approach achieves a higher BERT-based style intensity score and comparable BLEU scores, compared with baselines.", "Human evaluation results show that our approach significantly improves style intensity and maintains content relevance.", "Linguistic style is an essential aspect of natural language interaction and provides particular ways of using language to engage with the audiences (Kab-bara and Cheung, 2016).", "In human-bot conversations, it is crucial to generate stylistic responses for increasing user engagement to conversational systems (Gan et al., 2017).", "Currently, most of the existing parallel datasets are not stylistically consistent.", "Samples in these datasets are usually contributed by a variety of users, resulting in an averaging effect across style characteristics (Zhang et al., 2018a).", "Meanwhile, constructing a parallel stylistic dataset for training the open-domain conversational agents is both labor-intensive and time-consuming.", "Recent studies show the effect of stylizing responses using a monolingual dataset in the desired style and a conventional conversational dataset (Niu and Bansal, 2018; Gao et al., 2019b).", "However, increasing style intensity often leads to Corresponding author.", "the expense of decreasing content relevance between dialogue history and response.", "As an example in Figure 1 shows, Niu and Bansal (2018) independently train a response generation model and a stylistic language model and subsequently interpolates them in the inference phase.", "Lacking the interaction between the stylistic language model and response generation encoder, it usually yields a trade-off between style intensity and content relevance.", "Gao et al. 
(2019a,b) fuse a structured latent space where the direction denotes the diversity, and the distance denotes style intensity and content relevance.", "The main issue is that style intensity and content relevance are contradictory in measurement but are coupling to the same distance metric of the latent space.", "To sum up, the key issue of the above studies is the improper entanglement of style and content.", "To address the issue, we propose to disentangle the style and content of a response.", "The disentanglement is conducted on the structured latent space, where each sentence (dialogue history, response, and stylistic sentence) is projected into a vector representation.", "We further split the representation into two components: style and content representations.", "The former is a corpus-level feature since sentences within a dataset have the same style.", "In contrast, the content representation is a sentence-level feature decided by a sentence itself.", "We thus disentangle the content and style by diluting sentence-level information in the style representation.", "This encourages the encoding of content information into the content representation.", "Otherwise, the content information will be corrupted in the style representation, making it hard to reconstruct the original content in the subsequent decoding process.", "We conduct experiments on DailyDialogue conversational dataset (Li et al., 2017) and Holmes monolingual stylistic dataset (Gao et al., 2019b).", "Experimental results show that our proposed approach improves style intensity and maintains content relevance.", "Our contributions are listed below: We propose a unified framework to simultaneously improve style intensity and maintain content relevance for neural stylistic response generation.", "We introduce a scheme of learning latent variables by a diluting strategy to disentangle the style and content.", "Experimental results show that our approach achieves higher performance in style intensity without decreasing content relevance, compared with previous approaches.", "The task of stylistic response generation is defined as follows: given a monolingual stylistic dataset S = { S 1 , ..., SN } 1 and a conversational dataset C = { ( X 1 , Y 1 ) , ..., ( XM , YM ) } , where S i , X i , and Y i denote a stylistic sentence, dialogue history, and a response respectively, the goal is to learn a generation model P ( Y | X ) , where Y is a generated response expected to be in the style of S (called the desired style in the following sec-tions).", "We will first briefly review the concept of structured latent space and then introduce our disentanglement approach.", "Overview The structured latent space is constructed by two main mechanisms:", "(i) sharing a decoder between a sequence-to-sequence (S2S) model and an auto-encoder (AE), and", "(ii) fusion and smoothness objectives.", "As an example in Figure 2 shows, a response representation ZAE ( Y i ) is regularized by the two mechanisms to be distributed around its dialogue history representation Z S2S ( X i ) .", "The notations ZAE ( ) and Z S2S ( ) denote the representations computed by AE encoder and S2S encoder, respectively.", "Such a latent space makes it possible to predict a response Y by sampling nearby the dialogue history representation.", "Based on that, Gao et al. 
(2019b) further align stylistic sentence representations into the latent space, which improves the style intensity of generated responses.", "In summary, the construction of the structred latent space is a process of aligning the three spaces ( Z S2S ( X i ) , ZAE ( Y i ) , and ZAE ( S j ) ) by two mechanisms (sharing the decoder, and fusion and smoothness objectives).", "Fusion Objective cross-aligns sentences of different spaces.", "Since X i and Y i are paired, we align them by minimizing their pair-wise dissimilarity: d conv = (cid:88) i batch d E ( Z S2S ( X i ) , ZAE ( Y i )) n l , (1) where d E denotes the Euclidean distance, n is the batch size, and l is the dimensionality of the latent space.", "cannot be applied to stylistic sentences since they are not paired with conversational data.", "To this end, the fusion objective instead optimizes the nearest neighbor distance between the two datasets: d style =12 d crossNN ( { Z S2S ( X i ) } , { ZAE ( S j ) } ) +12 d crossNN ( { ZAE ( S j ) } , { Z S2S ( X i ) } ) , (2) where d crossNN ( { a i } , { b j } ) denotes the batch average distance between a i and its nearest neighbor in the set { b j } .", "To further encourage the representations spread-out the latent space, a inner-distance loss is introduced: d spread-out = min { d innerNN ( Z S2S ( X i )) , d innerNN ( ZAE ( Y i )) , d innerNN ( ZAE ( S j )) } , (3) where d inner NN ( { a i } ) denotes the batch average distance between a i and its nearest neighbor in the set { a i } .", "The final fusion objective is defined as: L fuse = d conv + d style d spread-out .", "(4) Smoothness Objective aims to make the structured latent space a continuous space, where each point can decode a natural sentence.", "Given three discrete points Z S2S ( X i ) , ZAE ( Y i ) , and ZAE ( S j ) , the objective encourages points in the area between Z S2S ( X i ) and ZAE ( Y i ) to generate Y i : Z conv = U Z S2S ( X i ) + (1 U ) ZAE ( Y i ) + (cid:15), L smooth,conv = log P ( Y i | Z conv ) , (5) where (cid:15) N (0 , 2 I ) , and U U (0 , 1) .", "Meanwhile, as a point moves from ZAE ( Y i ) to ZAE ( S j ) , the corresponding generation is expected to gradually move from Y i to S j : Z style = U ZAE ( Y i ) + (1 U ) ZAE ( S j ) + (cid:15) L smooth,style = U log P ( Y i | Z style ) (1 U ) log P ( S j | Z style ) .", "(6) The smoothness objective L smooth is the sum of L smooth,conv and L smooth,style , and is added to the final loss function along with the fusion objective and response generation loss of S2S.", "Despite aligning monolingual stylistic sentences into the structured latent space helps stylize generated responses, their style intensity is still limited.", "We conjecture this is due to the coupling of the style and the content in sentence representations.", "To this end, we propose to disentangle the two aspects in the structured latent space.", "In our proposed approach, a sentence representation Z R l in the latent space consists of two components: content representation Z c R l c and style representation Z s R l s , where l is the dimensionality of latent space and l c + l s = l .", "Z s encodes all the style information of a sentence.", "It is a corpus-level feature because Z s for different sentences in the same corpus should be similar.", "In contrast, Z c can be seen as a sentence-level feature which only decided by the content of its corresponding sentence.", "Figure 3 shows an example of our approach, where Z c and Z s can be seen as two contain-ers.", "Colored squares represent the 
content and style information.", "We encourage the disentanglement of the two types of information by diluting sentence-level content information in Z s .", "As an example in Figure 3", "(a) shows, the content and style information may be mixed in both Z c and Z s .", "During the decoding process of a sentence, i.e., Y i , we replace its style representation Z s AE ( Y i ) with its batch average style representation Z s AE ( Y i ) = 1 n (cid:80) j batch Z s AE ( Y j ) .", "In this way, its sentence-level content information will be diluted since it greatly varies from other sentences' content information, which introduces extra noise.", "In contrast, its corpus-level style information, which is similar to that of other sentences within the batch, will remain unaffected.", "As the training processes, the content information will be encouraged to be encoded into Z c where it can remain unchanged, as an example in Figure 3", "(b) shows.", "Otherwise, the content information will be corrupted in Z s , making it hard to recover the content of Y i .", "As a result, the encoding process will be punished by the response generation loss of S2S and the reconstruction loss of AE, as shown in Figure 3", "(a).", "Based on that, we update the response generation process by replacing its style representation Z s with the corresponding batch average style representation Z s : L S2S = log P ( Y i | [ Z c S2S ( X i ) : Z s S2S ( X i )]) , (7) where the bracket [:] denotes concatenation.", "The decoding process in the smoothness objective is updated similarly.", "Note that when we move from Content Information of Sentence #1 (S 1 ) Content Information of Sentence #2 (S 2 ) Style Information Content Representation Z c Style Representation ZSS 1 : Could you please tell me how I can go job-hunting in the web?", "The batch average style representation Z s remains consistent with the target, i.e., being Z s AE ( S j ) when the target is S j .", "The updated smoothness objective is as follows: L smooth,conv = log P ( Y i | [ Z c conv : Z s AE ( Y i )]) , L smooth,style = U log P ( Y i | [ Z c style : Z s AE ( Y i )]) (1 U ) log P ( S j | [ Z c style : Z s AE ( S j )]) .", "(9) The final training loss is the sum of the response generation loss, fusion objective, and smoothness objective: L = L S2S + L fuse + L smooth .", "Here, we do not employ pre-training models, i.e., DialoGPT (Zhang et al., 2020b) and OpenAI GPT2 (Radford et al., 2019).", "This is because the disentanglement is usually conducted on a sentence representation.", "While most of the pre-training models depend on the attention mechanism, and there is no static global sentence representation during the decoding process.", "To generate a stylistic response Y i given dialogue history X i during the inference process, we first", "obtain Z c S2S ( X i ) by S2S encoder and subsequently sample Z c ( Y i ) from the hypersphere of Z c S2S ( X i ) with a mannually tuned radius r .", "After that, we generate Y i by concatenating Z c ( Y i ) and Z s AE ( S j ) , which is the batch average style representation of randomly sampled stylistic sentences.", "Considering the discrepancy between training and inference that content and style representations in different corpora have never been concatenated for generation, we propose a soft combination approach to introduce the desired style by interpolating Z s S2S ( X i ) and Z s AE ( S j ) : Z s soft = Z s S2S ( X i ) + Z s AE ( S j ) , (11) where is the weight of the desired style.", "After that, Y i is generated by 
the decoder whose hidden state is set to [ Z c ( Y i ) : Z s soft ] .", "To further balance style intensity and content relevance, we also employ the re-ranking strategy following Gao et al. (2019b).", "It samples N y candidate responses and re-ranks them by: s r = P S2S ( Y i | X i )+(1 ) P style ( Y i ) , (12) where P S2S ( Y i | X i ) is the generation probability under a S2S model measuring the relevance.", "P style ( Y i ) is the probability that Y i has the desired style.", "It is a interpolation between the probabilities of a neural-based classifier and a n-gram classifier: P style ( Y i ) = P neural ( Y i ) + (1 ) N (cid:88) n =1 w n P n-gram ( Y i ) , (13) Training Dialogues 11,118 Validation Dialogues 1,000 Test Dialogues 1,000 Average Tokens Per Dialogue 114.7 Average Tokens Per Utterance 14.6 Table 1: Statistics of the DailyDialog dataset.", "where w n is a weight which is set to the accuracy of the corresponding classifier.", "Conversational Dataset We employ DailyDialog 2 (Li et al., 2017) as our conversational dataset C .", "It is a human-written multi-turn dataset covering various topics of daily life.", "Table 1 shows some statistics of its training, validation, and test set.", "We split dialogue of K utterances into K -1 samples.", "Each sample consists of at most three continuous utterances.", "The last utterance of a sample is regarded as the response.", "The previous utterances of the response are concatenated as its dialogue history.", "Here, Reddit dataset is not employed as Gao et al. (2019b) because the post-reply format data collected from social networks is noisy and different from real conversations (Li et al., 2017).", "Monolingual Stylistic Dataset Following Gao et al. (2019b), we use Holmes 3 as the stylistic dataset S .", "It is collected from the Sherlock Holmes novel series and consists of roughly 38k sentences.", "We do not use the arXiv dataset as it contains too many special tokens, i.e., equations, and incomplete sentences, such as is concerned and ex-actly identical restrictions.", "We compare the proposed approach with the following baselines:", "S2S , the sequence-to-sequence response generation model (Shang et al., 2015).", "S2S+LM , a S2S trained on C and a stylistic language model trained on S (Niu and Bansal, 2018).", "During the inference process, it generates a stylistic response by interpolating outputs of the two models.", "2 http://yanran.li/dailydialog 3 https://github.com/golsun/StyleFusion Model Time (s) # of parameters S2S 4.55 63M Style Fusion 4.60 75M Ours 4.60 75M Table 2: The average running time (in seconds per batch) and the number of parameters.", "Style Fusion , a multi-task learning based model whose latent space fuses dialogue history, responses, and stylistic sentences with a specific structure (Gao et al., 2019b).", "Note that we do not consider the Label-Fine-Tuning model and Polite Reinforcement Learning model (Niu and Bansal, 2018), because they require some training samples in the conversational dataset to have the desired style (Gao et al., 2019b).", "We implement the proposed approach based on the released code of Style Fusion model 4 .", "The vocabulary table consists of the most frequent 20,000 words.", "S2S encoder, AE encoder, and the shared decoder are two-layer LSTMs.", "The number of their hidden units is 1000, which is also the size of the structured latent space.", "The dimension of Z c and Z s is 950 and 50, respectively.", "The maximum length is set to 90 for the dialogue history and 30 for the response.", 
"During the training process, we use the ADAM optimizer, whose learning rate is 0.0003.", "2 for sampling (cid:15) in Equation 8 is 0 .", "1 2 .", "Table 2 shows the average running time on a single TITAN X (Pascal) GPU.", "During the inference process, the weights and for re-ranking are set to 0.5.", "The weight (accuracy) of n-gram classifier is 0.93, 0.87, 0.77, and 0.65 for n from 1 to", "4. The number of candidate responses, N y , is set to 10.", "The radius r is set to 3.", "Automatic Evaluation Considering that it is unfair to evaluate a response by the classifiers that are used for selecting the response (Song et al., 2020), we fine-tune a BERT (Devlin et al., 2019) to measure style intensity.", "Concretely, positive samples are the stylistic sentences.", "Negative samples are 4 https://github.com/golsun/StyleFusion Model SI(%) Dist-1 Dist-2 BLEU-3 BLEU-4 Mean S2S (Shang et al., 2015) 6.32 0.035 0.227 0.70 0.20 0.10 S2S+LM (Niu and Bansal, 2018) 32.79 0.015 0.086 0.55 0.08 0.13 Style Fusion (Gao et al., 2019b) 10.58 0.043 0.280 0.82 0.22 0.14 Ours ( =0.25) 11.91 0.041 0.275 0.79 0.23 0.16 Ours ( =0.50) 20.67 0.040 0.275 0.64 0.17 0.19 Ours ( =0.75) 34.85 0.038 0.285 0.47 0.10 0.16 Table 3: Automatic evaluation results of SI, Dist-1, Dist-2, and BLEU.", "randomly selected from DailyDialog's responses, which are of the same amount of sentences as the positive samples.", "Given the fine-tuned BERT classifier (whose accuracy achieves 0.96 on the validation set), we report the average probability of responses being positive as a measurement of the style intensity.", "For brevity, we denote this metric as SI .", "The content relevance is evaluated by BLEU.", "Since it may correlate weakly with human judgments of quality in a single reference setting (Liu et al., 2016), we employ the expanded responses in multi-reference DailyDialog test set (Gupta et al., 2019) as references to alleviate the problem.", "Meanwhile, we evaluate the diversity by Dist-k (Li et al., 2016), which is the number of distinct k -grams normalized by the total number of words of responses.", "Human Evaluation We randomly sample 200 messages from the test set of C to conduct the human evaluation from two aspects: style intensity and content relevance.", "Each aspect is independently evaluated by five Amazon Mechanical Turk (AMT) 5 workers whose approval rate is greater than 95%, and the number of approved is greater than 500.", "Given dialogue history and two responses generated by a baseline and our approach, the workers are asked to give a preference of which one is 5 https://www.mturk.com Content Relevance Style Intensity Win Lose Win Lose vs. S2S 40.21 39.79 49.37 36.84 vs. S2S+LM 65.00 20.00 53.30 32.50 vs. Style Fusion 43.32 42.67 48.77 36.68 Table 4: Pair-wise human evaluation results of content relevance and style intensity.", "Figure 4 shows the trade-off between style intensity and content relevance in our approach.", "There is an improvement in SI and a decrease in BLEU associated with the increase of in Equation 11.", "To assess the overall performance, we also compute their harmonic mean, whose maximum lies around = 0 .", "5 .", "We thus conduct the human evaluation and analysis in this parameter setting.", "We report the human evaluation results in Table", "4. Our approach is clearly preferred in style intensity because the percentage of Win is significantly higher than that of Lose ( p < 0.001, T-test).", "In terms of content relevance, the ratios of Win in vs. S2S and vs. 
Style Fusion are similar to those of Lose.", "This suggests that our approach can significantly improve the style intensity without decreasing the content relevance.", "In contrast, S2S+LM loses in most of the cases in the content relevance.", "Following Zhou et al. (2018) and Ke et al. (2018), we evaluate the agreement of annotators via inter-rater consistency.", "The percentage of samples that at least three annotators have the same preference (3/5 agreement) is 81.80%.", "And the percentage for 4/5 agreement is 32.15%.", "Table 3 shows the results of the automatic evaluation.", "Our approach has the highest mean score, which indicates that it achieves the best overall performance.", "S2S+LM has a high SI score, but its BLEU scores are not as good as others, i.e., S2S.", "This is in line with our human evaluation results and Niu and Bansal (2018)'s observation that biasing a decoder with a stylistic language model may harm the content relevance.", "In contrast, our approach ( = 0 . 25 ) significantly outperforms S2S and is comparable to Style Fusion.", "By increasing to 0.5, the BLEU score drops slightly but is comparable to baselines (evidenced by the human evaluation results).", "Meanwhile, there is a significant improvement (up to 95.37%) in SI comparing with Style Fusion.", "This verifies the effectiveness of our disentanglement approach in improving the style intensity and maintaining the content relevance.", "Besides, the Dist-k results in Table 3 also indicate that the diversity of our approach is comparable to the best-performed Style Fusion.", "We conduct ablation studies to investigate the contributions of the fusion objective, smoothness objective, and our disentanglement approach.", "To focus on their effects on the generation process, in this section, we sample a single response without using the re-ranking strategy (Equation 12).", "Table 5 shows the results of the ablation study.", "There is a significant decline in SI and a slight change in BLEU-3 and BLEU-4 after removing each component.", "This indicates that a multi-task learning architecture without the three components [ Z c : Z s ] Z s Style Fusion 0.83 0.72 (-13.02%) Ours 0.88 0.86 (-1.71%) Table 6: Style classification accuracy of the full latent variable ( [ Z c : Z s ] ) and Z s .", "can achieve a good content relevance performance but fails to stylize a response.", "By removing the disentanglement component, our approach degenerates into Style Fusion.", "In this case, the SI score decreases significantly while BLEU scores are nearly unchanged, which demonstrates the disentanglement could improve the style intensity and maintain the relevance at the same time.", "The decreases in SI after removing the fusion objective and smoothness objective are more significant than that after removing the disentanglement.", "This is because the two objectives are bottom components for constructing the structured latent space, where our approach and Style Fusion are built upon.", "In this section, we analyze whether style information is disentangled into Z s .", "To achieve this goal, we train style classifiers taking as input a latent variable and use the validation accuracy as an indicator.", "Taking our approach as an instance, we first freeze the parameters of our well-trained model.", "Then we independently learn two style classifiers whose inputs are the full latent variable ( [ Z c : Z s ] ) and Z s respectively.", "Note that Z c and Z s in Style Fusion are a simple partition of its latent variable.", "There are not any 
disentanglement approaches applied to obtain the two representations.", "As shown in Table 6, Style Fusion achieves 0.83 validation accuracy training on its full latent variable.", "And the accuracy decreases by 13.02% when the classification is only based on Z s .", "In contrast, the decrease of our approach is only 1.71%, indicating that most of the style information is disentangled into Z s .", "We show a visualization of the disentanglement of the latent variable by MDS (Borg and Groe-nen, 2005) in Figure", "5. Each figure consists of Z s (black) and three continuous sub-sequences extracted from the head (yellow), middle (red), and tail (blue) of Z c .", "The sub-sequences are of the same length with Z s .", "For both stylistic and conversational samples, all the sub-sequences and Z s are mixed in Style Fusion.", "In contrast, there is a clear separation between Z s and the sub-sequences Dialogue Yes , after my graduation, History I worked in a trade company in Macao for one year.", "in our approach.", "This is because most of the style information is disentangled into Z s in our approach, making its distribution different from sub-sequences of Z c .", "Table 7 shows some examples of generated responses.", "There is no significant Holmes style in responses of S2S.", "Similarly, the style intensity of responses in Style Fusion is also limited.", "The semantics of S2S+LM's response in the first example is not very clear, making it less relevant to the dialogue history than other responses.", "We believe this is also due to the lack of interaction between the response generation encoder and the stylistic language model.", "In contrast, our approach not only achieves a good content relevance performance but also has a significant Holmes style, which is quite polite and formal.", "The task of text style transfer aims at transferring the style of a sentence while preserving its meaning.", "One way is to disentangle the content and style, and subsequently combine the content with the desired style.", "The disentanglement can be achieved by adversarial learning (Shen et al., 2017; Hu et al., 2017; Fu et al., 2018; Yang et al., 2018; Lo-geswaran et al., 2018), reinforcement learning (Jain et al., 2019), back-translation (Prabhumoye et al., 2018; Nogueira dos Santos et al., 2018), multi-task learning (John et al., 2019), and removing stylistic phrases (Li et al., 2018; Xu et al., 2018; Zhang et al., 2018b).", "The other way transfers the style without disentangled representations, for example using generator-evaluator architecture (Gong et al., 2019), cycle reconstruction (Dai et al., 2019), parameter sharing (Wang et al., 2020), and data augmentation (Zhang et al., 2020a).", "The main difference between our task and text style transfer lies in two aspects.", "First, all the content to be generated is available in the input in text style transfer, while our task needs to create new (response) content.", "And the key is content relevance to the dialogue history, rather than content preservation of the input.", "Second, the data for text style transfer is isomorphic.", "Data in different styles are in the same free-text format.", "However, our conversational data are context-response pairs while the stylistic data are free-texts, which is heterogeneous and requires more sophisticated structures, i.e., the structured latent space (Gao et al., 2019b).", "Niu and Bansal(2018) propose three weak-supervised models based on reinforcement learning, conditional text generation, and language model.", "Gao 
et al. (2019b) fuses the latent spaces of a response generation model and a stylistic auto-encoder to improve the style intensity of sampled responses.", "Yang et al. (2020) inject the style information by introducing a word-level KL loss and a sentence-level style classifier to the fine-turning process of DialoGPT (Zhang et al., 2020b).", "Distinct from previous work, we explicitly disentangle the style and content in the latent space and employ a unified architecture to jointly optimize the style intensity and content relevance.", "We propose a uniform framework to simultaneously improve the style intensity and maintain the content relevance for neural stylistic response generation.", "In contrast to existing approaches, our approach disentangles the style and the content in the latent space by a diluting strategy.", "Experiments show that our approach improves the style intensity of generated responses and maintains the content relevance at the same time, which demonstrates the effectiveness of this approach.", "The authors would like to thank all the anonymous reviewers for their insightful comments.", "The authors from HIT are supported by the National Natural Science Foundation of China (No. 62076081, No. 61772153, and No. 61936010) and Science and Technology Innovation 2030 Major Project of China (No. 2020AAA0108605).", "The author from UCSB is not supported by any of the projects above.", "This paper honors the ACL Code of Ethics.", "Stylistic response generation intends to improve the engagement of a dialogue system in human-bot conversations.", "It responds to users with the desired style, i.e., being polite, humorous, or romantic, rather than imitating any specific person.", "Meanwhile, style is a linguistic aspect of natural language interaction.", "There is not any identity characteristic being used as a variable." ]
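The diluting strategy at the core of the record above amounts to replacing the style slice of each latent vector with its batch average before decoding. A minimal PyTorch sketch, assuming latent vectors with the content dimensions first (e.g., 950 content + 50 style, as in the implementation details):

```python
import torch

def dilute_style(z, content_dim=950):
    """z: (batch, l) latent vectors with l = l_c + l_s.
    Returns vectors whose style part is the batch average."""
    z_c, z_s = z[:, :content_dim], z[:, content_dim:]
    # Averaging corrupts any sentence-level content leaked into z_s,
    # while corpus-level style information survives largely intact.
    z_s_bar = z_s.mean(dim=0, keepdim=True).expand_as(z_s)
    return torch.cat([z_c, z_s_bar], dim=-1)
```

Conditioning the shared decoder on the diluted vector when computing the generation and reconstruction losses is what pushes content information into the content component.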
[ "abstain", "abstain", "objective", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "result", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other", "other", "abstain", "objective", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Simultaneous machine translation (SiMT) outputs the translation while reading the source sentence, and hence requires a policy to decide whether to wait for the next source word (READ) or generate a target word (WRITE), the actions of which form a read/write path.", "Although the read/write path is essential to SiMT performance, no direct supervision is given to the path in existing methods.", "In this paper, we propose a method of dual-path SiMT which introduces duality constraints to direct the read/write path.", "According to the duality constraints, the read/write paths in the source-to-target and target-to-source SiMT models can be mapped to each other.", "As a result, the two SiMT models can be optimized jointly by forcing their read/write paths to satisfy the mapping.", "Experiments on En↔Vi and De↔En tasks show that our method outperforms strong baselines under all latency levels.", "Simultaneous machine translation (SiMT) (Cho and Esipova, 2016; Gu et al., 2017; Ma et al., 2019; Arivazhagan et al., 2019), which outputs the translation while reading the source sentence, is important to many live scenarios, such as simultaneous interpretation, live broadcast and synchronized subtitles.", "Different from full-sentence machine translation, which waits for the whole source sentence, SiMT has to decide whether to wait for the next source word (i.e., READ action) or translate a target word (i.e., WRITE action) to complete the translation.", "The sequence of READ and WRITE actions in the translation process forms a read/write path, which is key to SiMT performance.", "An improper read/write path will damage translation performance: compared to the following WRITE actions, too many but unnecessary READ actions will result in high translation latency, while too few but insufficient READ actions will exclude indispensable source information.", "Figure 1(b): The duality between the read/write paths in two directions.", "Therefore, an ideal read/write path is one in which the READ actions, compared to the following WRITE actions, are just sufficient and necessary, which means the source words covered by consecutive READ actions and the target words generated by the following consecutive WRITE actions should be semantically equivalent.", "Ensuring sufficiency and necessity between READ/WRITE actions will lead to a proper read/write path and thereby good SiMT performance.", "Unfortunately, the existing SiMT methods, which employ a fixed or adaptive policy, do not consider sufficiency or necessity in their policies.", "The fixed policy performs SiMT based on a pre-defined read/write path (Dalvi et al., 2018; Ma et al., 2019), where the number of READ actions before each WRITE is fixed.", "The adaptive policy (Gu et al., 2017; Zheng et al., 2019b; Arivazhagan et al., 2019; Zheng et al., 2019a; Ma et al., 2020; Liu et al., 2021) dynamically decides to READ or WRITE guided by translation quality and total latency, but skips the evaluation of sufficiency and necessity between READ/WRITE actions.", "On these grounds, we aim to introduce the evaluation of sufficiency and necessity between READ/WRITE actions to direct the read/write path without involving external information.", "As mentioned above, in an ideal read/write path, the source segment (i.e., source words read by the consecutive READ actions) and the corresponding target segment (i.e., target words generated by the following consecutive WRITE actions) are supposed to be semantically equivalent and thus translations of each other, which constitutes a separate segment pair.",
"Hence, an ideal read/write path divides the whole sentence pair into a sequence of segment pairs, where the source sentence and the target sentence should be translations of each other segment by segment.", "That means that if the translation direction is reversed, an ideal read/write path for target-to-source SiMT can also be deduced from the same sequence of segment pairs.", "For example, according to the alignment in Figure 1(a), the ideal read/write paths should be RRWWW|RW|RW in De→En SiMT and RRRWW|RW|RW in En→De SiMT, as shown in Figure 1(b), both of which share the same segment pair sequence of (Fand ich, I found it), (super, great) and (., .).", "Therefore, agreement on the segment pairs derived from the read/write paths in source-to-target and target-to-source SiMT, called duality constraints, can be a good choice to evaluate sufficiency and necessity between READ/WRITE actions.", "Based on the above reasoning, we propose a method of Dual-Path SiMT, which uses the SiMT model in the reverse direction to guide the SiMT model in the current direction according to the duality constraints between their read/write paths.", "With the duality constraints, the read/write paths in source-to-target and target-to-source SiMT should reach an agreement on the corresponding segment pairs.", "Along this line, our method maintains a source-to-target SiMT model and a target-to-source SiMT model concurrently, which respectively generate their own read/write paths using monotonic multi-head attention (Ma et al., 2020).", "By minimizing the difference between the segment pairs derived from the two read/write paths, the two SiMT models successfully converge on the segment pairs and provide supervision to each other.", "Experiments on IWSLT15 En↔Vi and WMT15 De↔En SiMT tasks show that our method outperforms strong baselines under all latency levels, including the state-of-the-art adaptive policy.", "We first briefly introduce SiMT with a focus on monotonic multi-head attention (Ma et al., 2020).", "For a SiMT task, we denote the source sentence as x = { x_1 , ..., x_J } and the corresponding source hidden states as m = { m_1 , ..., m_J }, where J is the source length.", "The model generates the target sentence y = { y_1 , ..., y_I } with target hidden states s = { s_1 , ..., s_I }, where I is the target length.", "During translation, the SiMT model decides to read a source word (READ) or write a target word (WRITE) at each step, forming a read/write path.", "A read/write path can be represented in multiple forms, such as an action sequence of READ and WRITE (e.g., RRWWWRW), or a path from (0, 0) to (I, J) in the target-to-source attention matrix, where moving right (→) means a READ action and moving down (↓) means a WRITE action, as shown in Figure 1(b).", "Mathematically, a read/write path can be represented by a monotonic non-decreasing sequence { g_i }_{i=1}^{I}, where g_i represents the number of source words read in when writing the i-th target word y_i.", "The value of { g_i }_{i=1}^{I} depends on the specific SiMT policy, among which monotonic multi-head attention (MMA) (Ma et al., 2020) achieves the current state-of-the-art SiMT performance by modeling the READ/WRITE action as a Bernoulli variable.",
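The two path representations above are interchangeable; a small Python sketch of the conversion (plus the wait-k policy as one fixed choice of g) follows. The helper names are illustrative, not from the paper.

```python
# Convert between an R/W action string and the monotone sequence {g_i},
# where g_i is the number of source words read before writing y_i.
def actions_to_g(actions: str) -> list[int]:
    g, read = [], 0
    for a in actions:
        if a == "R":
            read += 1
        else:  # "W"
            g.append(read)
    return g

def g_to_actions(g: list[int], src_len: int) -> str:
    parts, read = [], 0
    for gi in g:
        parts.append("R" * (gi - read))
        parts.append("W")
        read = gi
    parts.append("R" * (src_len - read))  # trailing reads, if any
    return "".join(parts)

assert actions_to_g("RRWWWRW") == [2, 2, 2, 3]
assert g_to_actions([2, 2, 2, 3], src_len=3) == "RRWWWRW"

# The fixed wait-k policy is one particular choice of g (0-indexed i):
wait_k = lambda k, I, J: [min(k + i, J) for i in range(I)]
```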
"Monotonic multi-head attention: MMA processes the source words one by one and concurrently predicts a selection probability p_{ij} indicating the probability of writing y_i when reading x_j; accordingly, a Bernoulli random variable z_{ij} is calculated to determine the READ or WRITE action: p_{ij} = \mathrm{Sigmoid}\left( \frac{(m_j V_K)(s_{i-1} V_Q)^{\top}}{\sqrt{d_k}} \right) (1), z_{ij} \sim \mathrm{Bernoulli}(p_{ij}) (2), where V_K and V_Q are learnable parameters and d_k is the dimension of a head.", "If z_{ij} = 0, MMA performs a READ action to wait for the next source word x_{j+1}.", "If z_{ij} = 1, MMA sets g_i = j and performs a WRITE action to generate y_i based on x_{\leq g_i}.", "Therefore, the decoding probability of y with parameters θ is p(y | x; θ) = \prod_{i=1}^{I} p(y_i | x_{\leq g_i}, y_{<i}; θ) (3), where x_{\leq g_i} are the first g_i source tokens and y_{<i} are the previous target tokens.", "Note that when integrated into multi-head attention, all attention heads in the decoder layers independently determine the READ/WRITE action.", "If and only if all heads decide to perform a WRITE action does the model start translating; otherwise, the model waits for the next source word.", "Expectation training: Since sampling a discrete random variable z_{ij} precludes back-propagation, MMA applies expectation training (Raffel et al., 2017) to replace z_{ij} with an expected writing probability, denoted as α = (α_{ij})_{I×J} (4), where α_{ij} is the expected probability of writing y_i when reading x_j.", "Then, the attention distribution and context vectors are accordingly calculated in the expected form.", "To trade off between translation quality and latency, MMA introduces a latency loss L_g into the training loss: L(θ) = −\sum_{(x, y)} \log p(y | x; θ) + λ L_g (5), where L_g measures the total latency and λ is the weight of the latency loss.", "Please refer to Arivazhagan et al. (2019) and Ma et al. (2020) for a more detailed derivation and implementation.", "Our dual-path SiMT model employs a source-to-target (forward) model and a target-to-source (backward) model, called single-path SiMT models, which generate their own read/write paths based on MMA.", "According to the duality constraints, namely that the read/write paths of the two single-path SiMT models should share the same segment pair sequence, the two read/write paths should in principle be transposes of each other, as shown in Figure 1.", "But in practice, after transposing one of the read/write paths, there is always a gap between the transposed read/write path and the original one in the reverse translation direction.", "By closing the gap between the aforementioned transposed and original read/write paths, as shown in Figure 2, the duality constraints are introduced into the dual-path SiMT model, and thereby the two single-path SiMT models can provide guidance to each other.", "Figure 2: The architecture of dual-path SiMT, consisting of the forward and backward single-path SiMT models.", "In what follows, we introduce how to get the transposed read/write path (Sec. 3.1) and how to reduce the gap (Sec. 3.2).", "The purpose of transposing a read/write path is to get a new read/write path in the reverse direction based on the same segment pairs as the original path.", "As the transposing process works in the same way for the two directions, we just introduce the process for the forward single-path SiMT.", "Since there is no explicit read/write path in the training of the single-path SiMT model, the transposing process can only use the expected writing probability matrix α, shown in Eq. (4), as its input.", "Similarly, the output of the transposing process is the transposed writing probability matrix β = (β_{ji})_{J×I} calculated from the transposed read/write path, which will be used to guide the backward single-path SiMT.",
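A minimal sketch of the per-step READ/WRITE decision rule in Eqs. (1)-(2) above follows. The projection names V_K and V_Q follow the text; the dimensions and random inputs are illustrative assumptions.

```python
# Sigmoid selection probability from the current source state m_j and the
# previous target state s_{i-1}, then a Bernoulli draw for READ vs. WRITE.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_k = 8, 8
V_K = rng.normal(size=(d_model, d_k))
V_Q = rng.normal(size=(d_model, d_k))

def p_write(m_j: np.ndarray, s_prev: np.ndarray) -> float:
    """Eq. (1): p_ij = sigmoid((m_j V_K)(s_{i-1} V_Q)^T / sqrt(d_k))."""
    energy = (m_j @ V_K) @ (s_prev @ V_Q) / np.sqrt(d_k)
    return 1.0 / (1.0 + np.exp(-energy))

m_j, s_prev = rng.normal(size=d_model), rng.normal(size=d_model)
p = p_write(m_j, s_prev)
z = rng.random() < p               # Eq. (2): z_ij ~ Bernoulli(p_ij)
action = "WRITE" if z else "READ"
```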
"The transposing process consists of three steps.", "First, derive the read/write path from the expected writing probability matrix α and segment the sentence pair into a sequence of segment pairs.", "Second, transpose the sequence of segment pairs into the corresponding one for the backward SiMT.", "Last, merge the transposed segment pairs to get the transposed path and then project it to β.", "In the following, we introduce the segment, transpose and merge steps in detail.", "Segment: Given the expected writing probability matrix α, to get the read/write path, we first find the source position d_i that the WRITE action for each target position i corresponds to, which is d_i = \mathrm{argmax}_j α_{ij} (6).", "According to the property of monotonic attention, some consecutive WRITE actions correspond to the same source position, so the target words generated by those consecutive WRITE actions form a target segment.", "Formally, we assume there are K target segments in total, denoted as y = { y^1, ..., y^k, ..., y^K }.", "For each target segment y^k = (y_{b^y_k}, ..., y_{e^y_k}), where b^y_k and e^y_k are its beginning and end target positions, we can get the corresponding source segment as x^k = (x_{b^x_k}, ..., x_{e^x_k}), where b^x_k = \begin{cases} 1 & k = 1 \\ d_{e^y_{k-1}} + 1 & \text{otherwise} \end{cases} (7) and e^x_k = d_{b^y_k} (8).", "Thus the sentence pair (x, y) can be segmented into the sequence of segment pairs (x^1, y^1) | ... | (x^K, y^K).", "By replacing the source words with READ actions and the target words with WRITE actions, we can get the action segment pairs.", "Then, the read/write path is formed by concatenating all the action segment pairs, where the length of the read/write path is equal to the total number of source words and target words.", "For the example shown in Figure 3(a), the sequence of source positions d_i corresponding to the WRITE actions for the whole target sentence is 2, 2, 2, 3, 3, 5, with the corresponding read/write path RRWWWRWWRRW.", "Then, we can get the sequence of segment pairs (x_1 x_2, y_1 y_2 y_3) | (x_3, y_4 y_5) | (x_4 x_5, y_6), and thereby the sequence of action segment pairs (RR, WWW) | (R, WW) | (RR, W), as shown in Figure 3(b).", "Transpose: After getting the sequence of segment pairs, the transposed read/write path can be derived from it.", "As the transposed read/write path is in the form that fits the backward single-path SiMT, the sequence of segment pairs should also be transposed to fit the other direction.", "According to the duality constraints, the sequence of segment pairs is shared by the forward and backward SiMT, so we only need to exchange the source segment and target segment in each segment pair, that is, from (x^k, y^k) to (y^k, x^k), where the beginning and end positions of each source/target segment remain the same.", "Then, we get the corresponding transposed action segment pairs by replacing the target words with READ actions and the source words with WRITE actions.", "In this way, we accomplish the transposing of the segment pairs.", "Reviewing the example in Figure 3(b): after transposing, the sequence of segment pairs is (y_1 y_2 y_3, x_1 x_2) | (y_4 y_5, x_3) | (y_6, x_4 x_5), and the corresponding sequence of transposed action segment pairs is (RRR, WW) | (RR, W) | (R, WW), as shown in Figure 3(c).", "Merge: By merging the transposed action segment pairs, we can get the transposed read/write path.", "The goal of the transposing process is to get the transposed writing probability matrix β to constrain the expected writing probability matrix α for the backward single-path SiMT.",
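A small sketch of the segment and transpose steps above, mirroring Eqs. (6)-(8): WRITE positions come from the row-wise argmax of a toy α matrix, runs of equal positions form target segments, and transposing swaps the two sides of each pair. Spans below are 0-indexed and end-exclusive, an implementation choice rather than the paper's notation.

```python
import numpy as np

def segment_pairs(alpha: np.ndarray):
    d = alpha.argmax(axis=1)                 # Eq. (6)
    pairs, b_y = [], 0
    for i in range(1, len(d) + 1):
        if i == len(d) or d[i] != d[b_y]:    # target segment [b_y, i)
            b_x = 0 if not pairs else pairs[-1][0][1]  # Eq. (7)
            e_x = d[b_y] + 1                 # Eq. (8)
            pairs.append(((b_x, e_x), (b_y, i)))
            b_y = i
    return pairs                             # [((src span), (tgt span)), ...]

def transpose(pairs):
    # Duality: the backward model shares the segment pair sequence,
    # with the sides of each pair exchanged.
    return [(tgt, src) for src, tgt in pairs]

# Toy matrix matching the Figure 3 walk-through (d = 2,2,2,3,3,5, 1-indexed):
alpha = np.zeros((6, 5))
for i, j in enumerate([1, 1, 1, 2, 2, 4]):
    alpha[i, j] = 1.0
print(segment_pairs(alpha))  # [((0,2),(0,3)), ((2,3),(3,5)), ((3,5),(5,6))]
```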
"According to the definition of the writing probability matrix, only the last column in the sub-matrix covered by each segment pair corresponds to WRITE actions.", "Formally, for each transposed segment pair (y^k, x^k), the elements of β that should have the greatest probability of performing WRITE actions are { β_{b^x_k, e^y_k}, ..., β_{e^x_k, e^y_k} }.", "For the three sub-matrices shown in Figure 3(c), only the elements of the last column correspond to WRITE actions, as shown in Figure 3(d), which are { β_{13}, β_{23}, β_{35}, β_{46}, β_{56} }.", "We employ the 0-1 distribution to set the values of the elements in β, where the elements corresponding to WRITE actions are set to 1 and the others are set to 0.", "This is equivalent to the situation where the selection probability of the Bernoulli distribution (in Eq. (2)) is 1.", "Assume that the expected writing probability matrix for the forward single-path SiMT is α^F and its transposed writing probability matrix is β^F, and similarly for the backward single-path SiMT the matrices are α^B and β^B, respectively.", "We reduce the gap between each read/write path and the transposed path of the read/write path in the other direction by minimizing the L2 distance between the corresponding writing probability matrices: ℓ^F = ‖α^F − β^B‖_2 (9) and ℓ^B = ‖α^B − β^F‖_2 (10).", "The overall training objective of dual-path SiMT combines the translation losses with the duality constraints: L = L(θ^F) + L(θ^B) + λ_dual (ℓ^F + ℓ^B) (11), where L(θ^F) and L(θ^B) are the loss functions of the forward and backward single-path SiMT models respectively, calculated as in Eq. (5).", "λ_dual is a hyperparameter, and we set λ_dual = 1 in our experiments.", "At inference time, the forward and backward single-path SiMT models can be used separately, depending on the required translation direction.", "Dual learning has been widely used in dual tasks, especially machine translation.", "For both unsupervised (He et al., 2016; Artetxe et al., 2019; Sestorain et al., 2019) and supervised NMT (Xia et al., 2017; Wang et al., 2018), dual learning can provide additional constraints by exploiting the dual correlation.", "Unlike most previous dual learning work on NMT, which uses the reconstruction between source and target sequences, we focus on the SiMT-specific read/write path and explore its intrinsic properties.", "SiMT policies fall into two categories: fixed and adaptive.", "For a fixed policy, the read/write path is defined by rules and fixed during translation.", "Dalvi et al. (2018) proposed STATIC-RW, which alternately reads and writes RW words after reading S words.", "Ma et al. (2019) proposed the wait-k policy, which always generates target tokens lagging k tokens behind the source input.", "Elbayad et al. (2020) enhanced the wait-k policy by sampling different k during training.", "Han et al. (2020) applied meta-learning to wait-k.", "Zhang et al. (2021) proposed future-guided training, which applies a full-sentence MT model to guide the wait-k policy.", "Zhang and Feng (2021a) proposed a char-level wait-k policy.", "Zhang and Feng (2021b) proposed a universal SiMT with a mixture-of-experts wait-k policy to perform SiMT under arbitrary latency levels.", "For an adaptive policy, the read/write path is learned and adapts to the current context.", "Early adaptive policies used segmented translation (Bangalore et al., 2012; Cho and Esipova, 2016; Siahbani et al., 2018).", "Gu et al. (2017) trained an agent with reinforcement learning.", "Alinejad et al. (2018) added a predict operation based on Gu et al. (2017).", "Zheng et al. (2019a) trained an agent with golden READ/WRITE actions generated by rules.",
"Zheng et al. (2019b) added a delay token to read source words.", "Arivazhagan et al. (2019) proposed MILk, which applies monotonic attention and uses a Bernoulli variable to determine writing.", "Ma et al. (2020) proposed MMA, the implementation of MILk on the Transformer, which achieved the current state-of-the-art SiMT performance.", "Zhang et al. (2020) proposed an adaptive segmentation policy.", "Wilken et al. (2020) used external ground-truth alignments to train the policy.", "Liu et al. (2021) proposed the cross-attention augmented transducer.", "Alinejad et al. (2021) introduced a full-sentence model to generate a ground-truth action sequence.", "Miao et al. (2021) proposed a generative SiMT policy.", "The previous methods often lack internal supervision on the read/write path.", "(Figure 4: BLEU against Average Lagging (AL) for Offline, Dual Paths, Single Path, MMA and Wait-k.)", "Some works use external information such as alignments or generated rule-based sequences to guide the read/write path (Zheng et al., 2019a; Zhang et al., 2020; Wilken et al., 2020; Alinejad et al., 2021).", "However, these methods rely too much on heuristic rules, and thus their performance is not comparable to jointly optimizing the read/write path and translation.", "Our method internally explores the duality between the read/write paths in the two directions and accordingly uses the duality to constrain the read/write paths, thereby obtaining better SiMT performance.", "We evaluated our method on four translation directions of the following two public datasets.", "IWSLT15 English↔Vietnamese (En↔Vi) (133K pairs) (Cettolo et al., 2015; nlp.stanford.edu/projects/nmt/): We use TED tst2012 (1553 pairs) as the validation set and TED tst2013 (1268 pairs) as the test set.", "Following Raffel et al. (2017) and Ma et al. (2020), we replace tokens whose frequency is less than 5 with unk.", "After replacement, the vocabulary sizes are 17K and 7.7K for English and Vietnamese, respectively.", "WMT15 German↔English (De↔En) (4.5M pairs; www.statmt.org/wmt15/): Following Ma et al. (2020), we use newstest2013 (3000 pairs) as the validation set and newstest2015 (2169 pairs) as the test set.", "BPE (Sennrich et al., 2016) is applied with 32K merge operations, and the vocabulary is shared across languages.", "We conducted experiments on the following systems.", "Offline: the conventional Transformer (Vaswani et al., 2017) model for full-sentence translation.", "Wait-k: the wait-k policy, the widely used fixed policy (Ma et al., 2019), which first reads k source tokens and then alternately writes a target word and reads a source word.",
"MMA: monotonic multi-head attention (MMA), proposed by Ma et al. (2020), the state-of-the-art adaptive policy for SiMT, which applies monotonic attention on each head in the Transformer.", "Single Path: the SiMT model of one translation direction based on monotonic multi-head attention.", "To avoid outlier heads that are harmful to the read/write path, we slightly modified MMA for more stable performance.", "Since MMA requires all heads in the decoder layers to independently decide the READ/WRITE action and starts translating only when all heads select the WRITE action, some outlier heads that perform too many READ actions will result in higher latency; Ma et al. (2020) try to control this phenomenon by adding some loss functions, but this still cannot prevent some outlier heads from waiting for too many words, which seriously impairs the necessity of the READ/WRITE actions in the read/write path (Ma et al., 2020; Zaidi et al., 2021).", "We no longer let the heads in all decoder layers independently determine the READ/WRITE action, but instead share the READ/WRITE action between the decoder layers.", "Dual Paths: the dual-path SiMT described in Sec. 3.", "The implementations of all systems are adapted from the Fairseq library (Ott et al., 2019) and based on the Transformer (Vaswani et al., 2017), where we apply Transformer-Small (4 heads) for En↔Vi and Transformer-Base (8 heads) for De↔En.", "For 'Dual Paths', the forward and backward models are used to complete the SiMT in the two translation directions at the same time.", "To perform SiMT under different latency, we set various lagging numbers k for 'Wait-k' (k = 1, 3, 5, 7, 9 for both En↔Vi and De↔En) and various latency weights λ for 'MMA', 'Single Path' and 'Dual Paths' (λ = 0.01, 0.05, 0.1, 0.2, 0.3, 0.4 for En↔Vi; λ = 0.1, 0.2, 0.25, 0.3, 0.4 for De↔En).", "Table 1: Ablation study (AL / BLEU): Dual Paths 7.69 / 29.23; w/o Segment 7.61 / 27.24; w/o ℓ^B 8.57 / 28.66; w/o ℓ^F, ℓ^B 8.31 / 28.12.", "We use BLEU (Papineni et al., 2002) for translation quality and Average Lagging (AL) (Ma et al., 2019) for latency.", "Average Lagging evaluates the number of words lagging behind the ideal policy.", "Given a read/write path g_i, AL is calculated as \mathrm{AL} = \frac{1}{τ} \sum_{i=1}^{τ} \left( g_i − \frac{i−1}{|y|/|x|} \right) (12), where τ = \mathrm{argmax}_i (g_i = |x|), and |x| and |y| are the lengths of the source sentence and target sentence respectively.", "The results with more latency metrics are shown in Appendix D.",
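A sketch of Average Lagging from Eq. (12) above, for a read/write path g given as a plain Python list (1-indexed in the text). Here τ is taken as the first step at which the full source has been read, as in the standard AL definition; that reading of the argmax in Eq. (12) is an assumption.

```python
def average_lagging(g: list[int], src_len: int, tgt_len: int) -> float:
    r = tgt_len / src_len                     # |y| / |x|
    tau = next(i for i, gi in enumerate(g, start=1) if gi == src_len)
    return sum(g[i - 1] - (i - 1) / r for i in range(1, tau + 1)) / tau

# wait-1 on a length-3 pair: g = [1, 2, 3] -> AL = ((1-0)+(2-1)+(3-2))/3 = 1.0
print(average_lagging([1, 2, 3], src_len=3, tgt_len=3))
```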
"Main Results: Figure 4 shows the comparison between our method and the previous methods on the four translation directions.", "'Dual Paths' outperforms the previous methods under all latency levels, and more importantly, the proposed duality constraints improve the SiMT performance in both the source-to-target and target-to-source directions concurrently.", "Compared to 'Wait-k', our method achieves significant improvements, especially under low latency, since the read/write path in 'Wait-k' is fixed and cannot be adjusted.", "Compared to 'MMA', the state-of-the-art adaptive policy, our 'Single Path' achieves comparable performance and is more stable under high latency.", "'MMA' allows each head of each layer to independently predict a read/write path, where some outlier heads will affect the overall performance, resulting in a decline in translation quality under high latency (Ma et al., 2020).", "Our method applies a common read/write path instead of letting the heads in each layer predict READ/WRITE, thereby reducing the possibility of outlier heads.", "Based on 'Single Path', 'Dual Paths' further improves the SiMT performance by modeling the duality constraints between the read/write paths, especially under low latency.", "Besides, our method brings the SiMT performance close to that of full-sentence MT on En↔Vi, which shows that a more precise read/write path is key to SiMT performance.", "Additionally, under the same latency weight λ, our method tends to have lower latency than 'MMA' on De↔En.", "'Single Path' reduces the unnecessary latency caused by outlier heads, and the duality constraints further improve the necessity of reading source content, thereby achieving lower latency.", "We conducted extensive analyses to understand the specific improvements of our method.", "Unless otherwise specified, all results are reported on De→En.", "We conducted ablation studies on the duality constraints, where we use direct transposition to replace the transposing process of the read/write path, constrain only the forward single-path model, or remove the duality constraints.", "As shown in Table 1, the proposed method of transposing the read/write path is critical to translation quality, showing the importance of the segment operation.", "Besides, mutual constraining between the forward and backward single-path models is more conducive to SiMT performance than constraining only one of them or removing the constraints.", "The read/write path needs to ensure sufficient content for translation and meanwhile avoid unnecessary latency, where the aligned source position is always considered the oracle position at which to perform WRITE in previous work (Wilken et al., 2020; Arthur et al., 2021).", "(For many-to-one alignments from source to target, we choose the furthest source word.)", "Therefore, we propose two metrics, A_Suf and A_Nec, to measure the sufficiency and necessity of the READ/WRITE actions in a path via alignments.", "We denote the ground-truth aligned source position of the i-th target word as a_i, and the read/write path is represented by g_i, which is the number of source words read in when writing the i-th target word.", "For sufficiency, A_Suf is used to evaluate whether the aligned source word is read before writing the target word, calculated as A_{Suf} = \frac{1}{I} \sum_{i=1}^{I} \mathbb{1}_{a_i \leq g_i} (13), where \mathbb{1}_{a_i \leq g_i} counts the positions with a_i \leq g_i and I is the target length.", "For necessity, A_Nec is used to measure the distance between the output position g_i and the aligned source position a_i, calculated as A_{Nec} = \frac{1}{|\{ i : a_i \leq g_i \}|} \sum_{i : a_i \leq g_i} \frac{a_i}{g_i} (14), where the best case is A_Nec = 1 for g_i = a_i, performing WRITE just at the aligned position with no unnecessary waiting.",
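A sketch of the two alignment-based metrics in Eqs. (13)-(14) above, given the read/write path g and gold aligned source positions a as 1-indexed lists; the toy values are illustrative.

```python
def a_suf(a: list[int], g: list[int]) -> float:
    # Eq. (13): fraction of target words whose aligned source word
    # was read before writing.
    return sum(ai <= gi for ai, gi in zip(a, g)) / len(g)

def a_nec(a: list[int], g: list[int]) -> float:
    # Eq. (14): mean a_i / g_i over the sufficient positions; 1.0 means
    # WRITE happens exactly at the aligned position (no unnecessary wait).
    ratios = [ai / gi for ai, gi in zip(a, g) if ai <= gi]
    return sum(ratios) / len(ratios)

a, g = [1, 2, 2, 4], [1, 3, 3, 4]
print(a_suf(a, g), a_nec(a, g))  # 1.0 and (1 + 2/3 + 2/3 + 1) / 4 = 0.833...
```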
"For a more detailed description, please refer to Appendix A.", "As shown in Figure 5, we evaluate the A_Suf and A_Nec of the read/write paths on the RWTH De-En alignment dataset (https://www-i6.informatik.rwth-aachen.), whose reference alignments are manually annotated by experts.", "The read/write paths of all methods perform similarly in the sufficiency evaluation, and our method performs slightly better at low latency.", "Except that the fixed policy 'Wait-k' may be forced to start translating before reading the aligned source word under lower latency, 'MMA' and our method can almost always cover more than 85% of the aligned source words when starting to translate.", "In the necessity evaluation, our method surpasses 'Wait-k' and 'MMA' and starts translating much closer to the aligned source word, which shows that the duality constraints make the read/write path more precise, avoiding some unnecessary waiting.", "Note that while avoiding unnecessary waiting, our method also improves the translation quality (see Figure 4) under the same latency, which further shows the importance of a proper read/write path for SiMT performance.", "To verify that our method improves the duality of the two read/write paths, we conduct a duality evaluation between the source-to-target and target-to-source read/write paths.", "Specifically, we first express both the original target-to-source read/write path and the transposed path of the source-to-target read/write path in the form of matrices, and then calculate the Intersection over Union (IoU) score between the areas below them (see Figure 6), which is regarded as the duality between the read/write paths in the two directions.", "A higher IoU score indicates that the two paths are more consistent on the common segment pairs, i.e., stronger duality.", "Appendix B gives the detailed calculation of the IoU score.", "The results of the duality evaluation are reported in Table 2, where our method effectively enhances the duality of the source-to-target and target-to-source read/write paths under all latency levels.", "This shows that with dual-path SiMT, the read/write paths in source-to-target and target-to-source are more in agreement on the sequence of segment pairs between the sentence pair.", "Figure 6 shows the read/write path visualization of a De→En example.", "Figure 6: Read/write path visualization of a De→En example (De: 'die Lehr@@ er@@ bildung fand in Bam@@ berg statt .'; En: 'the teacher training course was in Bam@@ berg .').",
"In 'Dual Paths', there is a strong duality between the read/write paths in the two translation directions, where the target-to-source read/write path (Figure 6(c)) and the transposed path of the source-to-target read/write path (Figure 6(b)) have a high degree of overlap.", "In particular, the read/write paths in our method exhibit a clear division into segment pairs.", "To analyze the relationship between the forward and backward single-path SiMT models in terms of the latency setting, we set the latency weights (λ in Eq. (5)) of the forward and backward single-path SiMT models to different values, denoted λ^F and λ^B respectively (the greater the latency weight, the lower the model latency).", "Table 3 reports the effect of different settings of λ^B on the performance of the forward single-path model.", "The forward model obtains lower latency and similar translation quality compared with 'MMA' and 'Single Path'.", "As the latency of the backward model decreases (λ^B becomes larger), the latency of the forward model also gradually decreases, which shows that the latencies of the forward and backward models are strongly correlated.", "Overall, regardless of the settings of λ^F and λ^B, 'Dual Paths' obtains a better trade-off between latency and translation quality.", "Furthermore, we can get a slightly larger or smaller latency by adjusting the combination of λ^F and λ^B.", "In this paper, we develop dual-path SiMT to supervise the read/write path by modeling the duality constraints between SiMT in the two directions.", "Experiments and analyses show that our method outperforms strong baselines under all latency levels and achieves a high-quality read/write path.", "We thank all the anonymous reviewers for their insightful and valuable comments.", "This work was supported by the National Key R&D Program of China (No. 2017YFE0192900)." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "result", "objective", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "other", "other" ]
[ "Informal romanization is an idiosyncratic process used by humans in informal digital communication to encode non-Latin script languages into Latin character sets found on common keyboards.", "Character substitution choices differ between users but have been shown to be governed by the same main principles observed across a variety of languages: namely, character pairs are often associated through phonetic or visual similarity.", "We propose a noisy-channel WFST cascade model for deciphering the original non-Latin script from observed romanized text in an unsupervised fashion.", "We train our model directly on romanized data from two languages: Egyptian Arabic and Russian.", "We demonstrate that adding inductive bias through phonetic and visual priors on character mappings substantially improves the model's performance on both languages, yielding results much closer to the supervised skyline.", "Finally, we introduce a new dataset of romanized Russian, collected from a Russian social network website and partially annotated for our experiments.", "The code and data are available at https://github.", "Written online communication poses a number of challenges for natural language processing systems, including the presence of neologisms, code-switching, and the use of non-standard orthography.", "One notable example of orthographic variation in social media is informal romanization: speakers of languages written in non-Latin alphabets encoding their messages in Latin characters, for convenience or due to technical constraints (improper rendering of the native script or keyboard layout incompatibility).", "Our focus on informal transliteration excludes formal settings such as pinyin for Mandarin, where transliteration conventions are well established.", "Figure 1: Example transliterations of the Russian word хорошо [horošo, 'good'] (middle) based on phonetic (top, 'horosho') and visual (bottom, 'xopowo') similarity, with character alignments displayed.", "The phonetic-visual dichotomy gives rise to one-to-many mappings such as ш /ʃ/ → sh / w.", "An example of such a sentence can be found in Figure 2.",
"Unlike named entity transliteration, where the change of script represents a change of language, here Latin characters serve as an intermediate symbolic representation to be decoded by another speaker of the same source language, calling for a completely different transliteration mechanism: instead of expressing the pronunciation of the word according to the phonetic rules of another language, informal transliteration can be viewed as a substitution cipher, where each source character is replaced with a similar Latin character.", "In this paper, we focus on decoding informally romanized texts back into their original scripts.", "We view the task as a decipherment problem and propose an unsupervised approach, which allows us to save annotation effort, since parallel data for informal transliteration does not occur naturally.", "We propose a weighted finite-state transducer (WFST) cascade model that learns to decode informal romanization without parallel text, relying only on transliterated data and a language model over the original orthography.", "We test it on two languages, Egyptian Arabic and Russian, collecting our own dataset of romanized Russian from the Russian social network website vk.com.", "Figure 2: Example of an informally romanized sentence from the dataset presented in this paper: '4to mowet bit' ly4we?' [Romanized], 'Что может быть лучше?' [Latent Cyrillic], 'Čto možet byt' lučše?' [Scientific], /ʃto ˈmoʒɨt bɨtʲ ˈlutʃʃɨ/ [IPA], 'What can be better?' [Translated]; it contains a many-to-one mapping ж, ш → w.", "Since informal transliteration is not standardized, converting romanized text back to its original orthography requires reasoning about the specific user's transliteration preferences and handling many-to-one (Figure 2) and one-to-many (Figure 1) character mappings, which is beyond traditional rule-based converters.", "Although user behaviors vary, there are two dominant patterns in informal romanization that have been observed independently across different languages, such as Russian (Paulsen, 2014), dialectal Arabic (Darwish, 2014) or Greek (Chalamandaris et al., 2006):", "Phonetic similarity: users represent source characters with Latin characters or digraphs associated with similar phonemes (e.g., м /m/ → m, л /l/ → l in Figure 2).", "This substitution method requires implicitly tying the Latin characters to the phonetic system of an intermediate language (typically English).", "Visual similarity: users replace source characters with similar-looking symbols (e.g., ч /tɕ/ → 4, у /u/ → y in Figure 2).", "Visual similarity choices often involve numerals, especially when the corresponding source-language phoneme has no English equivalent (e.g., Arabic ع /ʕ/ → 3).",
"Taking that consistency across languages into account, we show that incorporating these style patterns into our model as priors on the emission parameters (also constructed from naturally occurring resources) improves the decoding accuracy on both languages.", "We compare the proposed unsupervised WFST model with a supervised WFST, an unsupervised neural architecture, and commercial systems for decoding romanized Russian (translit) and Arabic (Arabizi).", "Our unsupervised WFST outperforms the unsupervised neural baseline on both languages.", "Prior work on informal transliteration uses supervised approaches with character substitution rules either manually defined or learned from automatically extracted character alignments (Darwish, 2014; Chalamandaris et al., 2004).", "Typically, such approaches are pipelined: they produce candidate transliterations and rerank them using modules encoding knowledge of the source language, such as morphological analyzers or word-level language models (Al-Badrashiny et al., 2014; Eskander et al., 2014).", "Supervised finite-state approaches have also been explored (Wolf-Sonkin et al., 2019; Hellsten et al., 2017); these WFST cascade models are similar to the one we propose, but they encode a different set of assumptions about the transliteration process, due to being designed for abugida scripts (using consonant-vowel syllables as units) rather than alphabets.", "To our knowledge, there is no prior unsupervised work on this problem.", "Named entity transliteration, a task closely related to ours, is better explored, but there is little unsupervised work on that task as well.", "In particular, Ravi and Knight (2009) propose a fully unsupervised version of the WFST approach introduced by Knight and Graehl (1998), reframing the task as a decipherment problem and learning cross-lingual phoneme mappings from monolingual data.", "We take a similar path, although it should be noted that named entity transliteration methods cannot be straightforwardly adapted to our task due to the different nature of the transliteration choices.", "The goal of the standard transliteration task is to communicate the pronunciation of a sequence in the source language (SL) to a speaker of the target language (TL) by rendering it appropriately in the TL alphabet; in contrast, informal romanization emerges in communication between SL speakers only, and the TL is not specified.", "If we picked any specific Latin-script language to represent the TL (e.g. English, which is often used to ground phonetic substitutions), many of the informally romanized sequences would still not conform to its pronunciation rules: the transliteration process is character-level rather than phoneme-level and does not take possible TL digraphs into account (e.g. Russian сх /sx/ → sh), and it often involves eclectic visual substitution choices such as numerals or punctuation (e.g. Arabic تحت [tHt, 'under'] → ta7t, Russian для [dlja, 'for'] → dl9|).",
"Finally, another relevant task is translating between closely related languages, possibly written in different scripts.", "An approach similar to ours is proposed by Pourdamghani and Knight (2017).", "They also take an unsupervised decipherment approach: the cipher model, parameterized as a WFST, is trained to encode the source language character sequences into the target language alphabet as part of a character-level noisy-channel model, and at decoding time it is composed with a word-level language model of the source language.", "Recently, unsupervised neural architectures (Lample et al., 2018, 2019) have also been used for related language translation and similar decipherment tasks (He et al., 2020), and we extend one of these neural models to our character-level setup to serve as a baseline (Section 5).", "We train a character-based noisy-channel model that transforms a character sequence o in the native alphabet of the language into a sequence of Latin characters l, and use it to decode the romanized sequence l back into the original orthography.", "Our proposed model is composed of separate transition and emission components, as discussed in Section 3.1, similarly to an HMM.", "However, an HMM assumes a one-to-one alignment between the characters of the observed and the latent sequences, which is not true for our task.", "One original script character can be aligned to two consecutive Latin characters or vice versa: for example, when a phoneme is represented with a single symbol on one side but with a digraph on the other (Figure 1), or when a character is omitted on one side but explicitly written on the other (e.g. short vowels not written in unvocalized Arabic but written in transliteration, or the Russian soft sign ь, representing palatalization, being often omitted in the romanized version).", "To handle those alignments, we introduce insertions and deletions into the emission model and modify the emission transducer to limit the number of consecutive insertions and deletions.", "In our experiments, we compare the performance of the model with and without the informative phonetic and visual similarity priors described in Section 3.2.", "The square brackets following a foreign word show its linguistic transliteration (using the scientific and the Buckwalter schemas for Russian and Arabic respectively) and its English translation.", "If we view the process of romanization as encoding a source sequence o into Latin characters, we can consider each observation l to have originated via o being generated from a distribution p(o) and then transformed to Latin script according to another distribution p(l | o).", "We can write the probability of the observed Latin sequence as p(l) = \sum_{o} p(o; γ) \, p(l | o; θ) \, p_{prior}(θ; α) (1).", "The first two terms in (1) correspond to the probabilities under the transition model (the language model trained on the original orthography) and the emission model respectively.", "The third term represents the prior distribution on the emission model parameters, through which we introduce human knowledge into the model.", "Our goal is to learn the parameters θ of the emission distribution, with the transition parameters γ being fixed.",
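The noisy-channel factorization in Eq. (1) above amounts to scoring a candidate original-script decoding by a language model term plus a channel term. Below is a minimal Python sketch under that reading; the stand-in models and candidate set are assumptions for illustration (the paper realizes the models as a WFSA and a WFST and searches over all decodings via the composed lattice instead).

```python
def score(o: str, l: str, lm_logprob, emission_logprob) -> float:
    """log p(o) + log p(l | o) for one candidate decoding o of the Latin l."""
    return lm_logprob(o) + emission_logprob(l, o)

def decode(l: str, candidates: list[str], lm_logprob, emission_logprob) -> str:
    # argmax over an externally supplied candidate set (toy replacement
    # for the shortest-path search over the composed WFST lattice).
    return max(candidates,
               key=lambda o: score(o, l, lm_logprob, emission_logprob))

# Toy usage with stand-in models, purely for illustration:
lm = lambda o: -len(o)                       # favors shorter originals
em = lambda l, o: -2 * abs(len(l) - len(o))  # favors length-matched pairs
print(decode("xopowo", ["хорошо", "хор"], lm, em))  # 'хорошо'
```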
"We parameterize the emission and transition distributions as weighted finite-state transducers (WFSTs).", "Transition WFSA: The n-gram weighted finite-state acceptor (WFSA) T represents a character-level n-gram language model of the language in the native script, producing the native-alphabet character sequence o with probability p(o; γ).", "We use the parameterization of Allauzen et al. (2003), with the states encoding conditioning history, arcs weighted by n-gram probabilities, and failure transitions representing backoffs.", "The role of T is to inform the model of what well-formed text in the original orthography looks like; its parameters are learned from a separate corpus and kept fixed during the rest of the training.", "Emission WFST: The emission WFST S transduces the original script sequence o to a Latin sequence l with probability p(l | o; θ).", "Since there can be multiple paths through S that correspond to the input-output pair (o, l), this probability is summed over all such paths (i.e., it is a marginal over all possible monotonic character alignments): p(l | o; θ) = \sum_{e} p(l, e | o; θ) (2).", "We view each path e as a sequence of edit operations: substitutions of original characters with Latin ones (c_o → c_l), insertions of Latin characters (ε → c_l), and deletions of original characters (c_o → ε).", "Each arc in S corresponds to one of the possible edit operations; an arc representing the edit c_o → c_l is characterized by the input label c_o, the output label c_l, and the weight −log p(c_l | c_o; θ).", "The emission parameters θ are the multinomial conditional probabilities of the edit operations p(c_l | c_o); we learn θ using the algorithm described in Section 3.3.", "To inform the model of which pairs of symbols are close in the phonetic or visual space, we introduce priors on the emission parameters, increasing the probability of an original-alphabet character being substituted by a similar Latin one.", "Rather than attempting to operationalize the notions of phonetic or visual similarity, we choose to read the likely mappings between symbols off human-compiled resources that use the same underlying principle: phonetic keyboard layouts and visually confusable symbol lists.", "Examples of mappings that we encode as priors can be found in Table 1.", "Phonetic similarity: Since we think of informal romanization as a cipher, we aim to capture the phonetic similarity between characters based on association rather than on the actual grapheme-to-phoneme mappings in specific words.", "We approximate it using phonetic keyboard layouts, one-to-one mappings built to bring together similar-sounding characters in different alphabets.", "We take the character pairs from a union of multiple layouts for each language: two for Arabic (http://arabic.omaralzabir.com/ and https://thomasplagwitz.com/2013/01/06/imrans-phonetic-keyboard-for-arabic/) and four for Russian (http://winrus.com/kbd_e.htm).", "The main drawback of using keyboard layouts is that they require every character to have a Latin counterpart, so some mappings will inevitably be arbitrary; we compensate for this effect by averaging over several layouts.", "Visual similarity: The strongest example of visual character similarity would be homoglyphs, symbols from different alphabets represented by the same glyph, such as Cyrillic а and Latin a.", "The fact that homoglyph pairs can be made indistinguishable in certain fonts has been exploited in phishing attacks, e.g. when Latin characters are replaced by virtually identical Cyrillic ones (Gabrilovich and Gontmakher, 2002).",
"This led the Unicode Consortium to publish a list of symbols and symbol combinations similar enough to be potentially confusing to the human eye, referred to as confusables (https://www.unicode.org/Public/security/latest/confusables.txt).", "This list contains not only exact homoglyphs but also strongly homoglyphic pairs such as Cyrillic ю and Latin lO.", "We construct a visual prior for the Russian model from all Cyrillic-Latin symbol pairs in the Unicode confusables list.", "(In our parameterization, we cannot introduce a mapping from one symbol to multiple symbols or vice versa, so we map all possible pairs instead: (ю, lo) becomes (ю, l) and (ю, o).)", "Although this list does not cover more complex visual associations used in informal romanization, such as partial similarity (Arabic Alif with Hamza أ → 2, due to the Hamza resembling an inverted 2) or similarity conditioned on a transformation such as reflection (Russian л → v), it makes a sensible starting point.", "However, this restrictive definition of visual similarity does not allow us to create a visual prior for Arabic: the two scripts are dissimilar enough that the confusables list does not contain any Arabic-Latin character pairs.", "Proposing a more nuanced definition of visual similarity for Arabic and the associated prior is left for future work.", "We incorporate these mappings into the model as Dirichlet priors on the emission parameters: θ ~ Dir(α), where each dimension of the parameter vector corresponds to a character pair (c_o, c_l), and the corresponding element of α is set to the number of times these symbols are mapped to each other in the predefined set of mappings.", "We learn the emission WFST parameters in an unsupervised fashion, observing only the Latin side of the training instances.",
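A sketch of how the informative Dirichlet prior above can be built and used: the concentration for a pair (c_o, c_l) counts how many keyboard layouts or confusable lists map the two symbols to each other, and re-estimation adds these pseudo-counts to the expected counts. The toy mapping tables and the exact smoothing form are assumptions, not the paper's implementation.

```python
from collections import defaultdict

layouts = [{"ш": "w"}, {"ш": "s"}, {"ш": "w"}]  # e.g. union of phonetic layouts
confusables = [("у", "y"), ("ч", "4")]          # visual pairs

alpha = defaultdict(float)
for layout in layouts:
    for c_o, c_l in layout.items():
        alpha[(c_o, c_l)] += 1.0
for c_o, c_l in confusables:
    alpha[(c_o, c_l)] += 1.0

def m_step(expected_counts: dict) -> dict:
    """MAP-style re-estimation: theta[c_l | c_o] proportional to
    E[count] + alpha. One simple normalization choice among several."""
    theta, totals = {}, defaultdict(float)
    for (c_o, c_l), n in expected_counts.items():
        totals[c_o] += n + alpha[(c_o, c_l)]
    for (c_o, c_l), n in expected_counts.items():
        theta[(c_o, c_l)] = (n + alpha[(c_o, c_l)]) / totals[c_o]
    return theta
```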
long sequences, summing over all paths in the corresponding lattices could still take a long time.", "As we know, the character substitutions are not arbitrary: each original alphabet symbols is likely to be mapped to only a few Latin characters, which means that most of the paths through the lattice would have very low probabilities.", "We prune the improbable arcs in the emission WFST while training on batches of shorter sentences.", "Doing this eliminates up to 66% and up to 76% of the emission arcs for Arabic and Russian respectively.", "We discourage excessive use of insertions and deletions by keeping the corresponding probabilities low at the early stages of training: during the first several updates, we freeze the deletion probabilities at a small initial value and disable insertions completely to keep the model locally normalized.", "We also iteratively increase the language model order as learning progresses.", "Once most of the emission WFST arcs have been pruned, we can afford to compose it with a larger language model WFST without the size of the resulting lattice rendering the computation impractical.", "The two steps of the EM algorithm are performed as follows: E-step At the E-step we compute the sufficient statistics for updating , which in our case would be the expected number of traversals of each of the emission WFST arcs.", "For ease of bookkeeping, we compute those expectations using finite-state methods in the expectation semiring (Eisner, 2002).", "Summing over all paths in the lattice is usually performed via shortest distance computation in log semiring; in the expectation semiring, we augment the weight of each arc with a basis vector, where the only non-zero element corresponds to the index of the emission edit operation associated with the arc (i.e. 
"This way, the shortest-distance algorithm yields not only the marginal likelihood but also the vector of sufficient statistics for the input sequence.", "To speed up the shortest-distance computation, we shrink the lattice by limiting the delay of all paths through the emission WFST.", "The delay of a path is defined as the difference between the number of epsilon labels on the input and output sides of the path.", "Figure 3 shows the schema of the emission WFST with limited delay.", "Substitutions are performed without a state change, and each deletion or insertion arc transitions to the next or previous state respectively.", "When the first (last) state is reached, further deletions (insertions) are no longer allowed.", "M-step: The M-step then corresponds to simply re-estimating θ by appropriately normalizing the obtained expected counts.", "We also compare the performance of our model with the same model trained in a supervised way, using the annotated portion of the data that contains parallel o and l sequences.", "In the supervised case we can additionally constrain the lattice with an acceptor of the original orthography sequence: A(o) ∘ T ∘ S ∘ A(l).", "However, the alignment between the symbols in o and l is still latent.", "To optimize this marginal likelihood we still employ the EM algorithm.", "As this constrained lattice is much smaller, we can run the standard EM without the modifications discussed in Section 3.3.1.", "Inference at test time is also performed using finite-state methods and closely resembles the E-step of the unsupervised learning: given a Latin sequence l, we construct the machine T ∘ S ∘ A(l) in the tropical semiring and run the shortest-path algorithm to obtain the most probable path e; the source sequence o is read off the obtained path.", "Here we discuss the data used to train the unsupervised model.", "Unlike Arabizi, which has been explored in prior work due to its popularity in the modern online community, a dataset of informally romanized Russian was not available, so we collect and partially annotate our own dataset from the Russian social network vk.com.", "We use the Arabizi portion of the LDC BOLT Phase 2 SMS/Chat dataset (Bies et al., 2014; Song et al., 2014), a collection of written informal conversations in romanized Egyptian Arabic annotated with their Arabic script representation.", "To prevent the annotators from introducing orthographic variation inherent to dialectal Arabic, compliance with the Conventional Orthography for Dialectal Arabic (CODA; Habash et al., 2012) is ensured.", "However, the effects of some of the normalization choices (e.g. expanding frequent abbreviations) would pose difficulties for our model.", "To obtain a subset of the data better suited to our task, we discard any instances which are not originally romanized (5% of all data), ones where the Arabic annotation contains Latin characters (4%), or ones where emoji/emoticon normalization was performed (12%).", "The information about the splits is provided in Table 2.",
"Here we discuss the data used to train the unsupervised model.", "Unlike Arabizi, which has been explored in prior work due to its popularity in the modern online community, a dataset of informally romanized Russian was not available, so we collect and partially annotate our own dataset from the Russian social network vk.com.", "We use the Arabizi portion of the LDC BOLT Phase 2 SMS/Chat dataset (Bies et al., 2014; Song et al., 2014), a collection of written informal conversations in romanized Egyptian Arabic annotated with their Arabic script representation.", "To prevent the annotators from introducing orthographic variation inherent to dialectal Arabic, compliance with the Conventional Orthography for Dialectal Arabic (CODA; Habash et al., 2012) is ensured.", "However, the effects of some of the normalization choices (e.g., expanding frequent abbreviations) would pose difficulties to our model.", "To obtain a subset of the data better suited for our task, we discard any instances which are not originally romanized (5% of all data), ones where the Arabic annotation contains Latin characters (4%), or where emoji/emoticon normalization was performed (12%).", "The information about the splits is provided in Table 2.", "Most of the data is allocated to the language model training set in order to give the unsupervised model enough signal from the native-script side.", "We choose to train the transition model on the annotations from the same corpus to make the language model specific to both the informal domain and the CODA orthography.", "We collect our own dataset of romanized Russian text from the social network vk.com, adopting an approach similar to the one described by Darwish (2014).", "We take a list of the 50 most frequent Russian lemmas (Lyashevskaya and Sharov, 2009), filtering out those shorter than 3 characters, and produce a set of candidate romanizations for each of them to use as queries to the vk.com API.", "In order to encourage diversity of romanization styles in our dataset, we generate the queries by defining all plausible visual and phonetic mappings for each Cyrillic character and applying all possible combinations of those substitutions to the underlying Russian word, as sketched in the example below.", "We scrape public posts on the user and group pages, retaining only the information about which posts were authored by the same user, and manually go over the collected set to filter out coincidental results.", "Our dataset consists of 1796 wall posts from 1681 users and communities.", "Since the posts are quite long on average (248 characters, with the longest ones up to 15K), we split them into sentences using the NLTK sentence tokenizer, with manual correction when needed.", "The obtained sentences are used as data points, split into training, validation and test according to the numbers in Table 2.", "The average length of an obtained sentence is 65 characters, which is 3 times longer than an average Arabizi sentence; we believe this is due to the different nature of the data (social media posts vs. SMS).", "Sentences collected from the same user are distributed across different splits so that we observe a diverse set of romanization preferences in both training and testing.", "Each sentence in the validation and test sets is annotated by one of the two native speaker annotators, following guidelines similar to those designed for the Arabizi BOLT data (Bies et al., 2014).", "For more details on the annotation guidelines and inter-annotator agreement, see Appendix A.",
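The query-generation step referenced above can be sketched as a Cartesian product over per-character mapping sets. The mapping table below covers only a few Cyrillic letters and is hypothetical; the paper's actual visual and phonetic mapping sets are larger.

```python
from itertools import product

ROMANIZATIONS = {
    "ч": ["ch", "4"],   # phonetic "ch", visually similar digit "4"
    "т": ["t"],
    "о": ["o", "0"],    # visually similar digit zero
}

def candidate_queries(word: str) -> list:
    """Enumerate all plausible romanizations of a Cyrillic word."""
    options = [ROMANIZATIONS.get(c, [c]) for c in word]
    return ["".join(p) for p in product(*options)]

print(candidate_queries("что"))  # ['chto', 'cht0', '4to', '4t0']
```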
"Since we do not have enough annotations to train the Russian language model on the same corpus, we use a separate in-domain dataset.", "We take a portion of the Taiga dataset (Shavrina and Shapovalova, 2017), containing 307K comments scraped from the same social network vk.com, and apply the same preprocessing steps as we did in the collection process.", "Here we discuss the experimental setup used to determine how much information relevant for our task is contained in the character similarity mappings, and how it compares to the amount of information encoded in the human annotations.", "We compare them by evaluating the effect of the informative priors (described in Section 3.2) on the performance of the unsupervised model and comparing it to the performance of the supervised model.", "Methods We compare the performance of our model trained in three different setups: unsupervised with a uniform prior on the emission parameters, unsupervised with informative phonetic and visual priors (Section 3.2), and supervised.", "We additionally compare them to a commercial online decoding system for each language (directly encoding human knowledge about the transliteration process) and a character-level unsupervised neural machine translation architecture (encoding no assumptions about the underlying process at all).", "We train the unsupervised models with the stepwise EM algorithm as described in Section 3.3.1, performing stochastic updates and making only one pass over the entire training set.", "The supervised models are trained on the validation set with five iterations of EM with a six-gram transition model.", "It should be noted that only a subset of the validation data is actually used in the supervised training: if the absolute value of the delay of the emission WFST paths is limited by n, we cannot compose a lattice for any data points where the input and output sequences differ in length by more than n (such points constitute 22% of the Arabic validation data and 33% of the Russian validation data for n = 5 and n = 2, respectively).", "Since all of the Arabic data comes annotated, we can perform the same experiment using the full training set; surprisingly, the performance of the supervised model does not improve (see Table 3).", "The online transliteration decoding systems we use are translit.net for Russian and Yamli (https://www.yamli.com/) for Arabic.", "The Russian decoder is rule-based; the algorithm behind the Arabic decoder is not disclosed.", "We take the unsupervised neural machine translation (UNMT) model of Lample et al. (2018) as the neural baseline, using the implementation from the codebase of He et al. (2020), with one important difference: since the romanization process is known to be strictly character-level, we tokenize the text into characters rather than words.", "Implementation We use the OpenFst library (Allauzen et al., 2007) for the implementation of all the finite-state methods, in conjunction with the OpenGrm NGram library (Roark et al., 2012) for training the transition model specifically.", "We train the character-level n-gram models with Witten-Bell smoothing (Witten and Bell, 1991) of orders from two to six.", "Since the WFSTs encoding full higher-order models become very large (for example, the Russian six-gram model has 3M states and 13M arcs), we shrink all the models except for the bigram one using relative entropy pruning (Stolcke, 1998).", "However, since pruning decreases the quality of the language model, we observe most of the improvement in accuracy while training with the unpruned bigram model, and the subsequent order increases lead to relatively minor gains.", "Hyperparameter settings for training the transition and emission WFSTs are described in Appendix B. We optimize the delay limit for each language separately, obtaining best results with 2 for Russian and 5 for Arabic.", "To approximate the monotonic word-level alignment between the original and Latin sequences, we restrict the operations on the space character to only three: insertion, deletion, and substitution with itself.", "We apply the same to the punctuation marks (with specialized substitutions for certain Arabic symbols, such as the Arabic question mark).", "This substantially reduces the number of arcs in the emission WFST, as punctuation marks make up over half of each of the alphabets.", "Evaluation We use character error rate (CER) as our evaluation metric.", "We compute CER as the ratio of the character-level edit distance between the predicted original-script sequence and the human annotation to the length of the annotation sequence in characters.", "The CER values for the models we compare are presented in Table 3.", "Table 3: Character error rate for different experimental setups.
                               Arabic   Russian
Unsupervised: uniform prior     0.735    0.660
Unsupervised: phonetic prior    0.377    0.222
Unsupervised: visual prior        -      0.372
Unsupervised: combined prior      -      0.212
Supervised                      0.225*   0.140
UNMT                            0.791    0.242
Commercial                      0.206    0.137",
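The CER metric defined above is the standard normalized edit distance; a compact implementation sketch (not the authors' evaluation script) follows.

```python
def cer(pred: str, ref: str) -> float:
    """Character error rate: Levenshtein distance / reference length."""
    d = list(range(len(ref) + 1))
    for i, p in enumerate(pred, 1):
        prev, d[0] = d[0], i
        for j, r in enumerate(ref, 1):
            prev, d[j] = d[j], min(d[j] + 1,          # delete p
                                   d[j - 1] + 1,      # insert r
                                   prev + (p != r))   # substitute p -> r
    return d[len(ref)] / max(len(ref), 1)

print(cer("privet", "prevet"))  # one substitution over six characters ~= 0.167
```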
"One trend we notice is that the error rate is lower for Russian than for Arabic in all the experiments, including the uniform prior setting, which suggests that decoding Arabizi is an inherently harder task.", "Some of the errors of the Arabic commercial system could be explained by the decoder predictions being plausible but not matching the CODA orthography of the reference.", "Effect of priors The unsupervised models without an informative prior perform poorly for both languages, which means that there is not enough signal in the language model alone under the training constraints we enforce.", "Possibly, the algorithm could have converged to a better local optimum if we had not used the online algorithm and had not pruned both the language model and the emission model; however, that experiment would be infeasibly slow.", "Incorporating a phonetic prior reduces the error rate by 0.36 and 0.44 for Arabic and Russian respectively, which provides a substantial improvement while maintaining the efficiency advantage.", "The visual prior for Russian appears to be slightly less helpful, improving CER by 0.29.", "We attribute the better performance of the model with the phonetic prior to the sparsity and restrictiveness of the visually confusable symbol mappings; alternatively, the phonetic substitutions may simply be more popular with users.", "Finally, combining the two priors for Russian leads to a slight additional improvement in accuracy over the phonetic prior only.", "We additionally verify that the phonetic and visual similarity-based substitutions are prominent in informal romanization by inspecting the emission parameters learned by the supervised model with a uniform prior (Table 4).", "We observe that: (a) the highest-probability substitutions can be explained by either phonetic or visual similarity, and (b) the external mappings we use for our priors are indeed appropriate, since the supervised model recovers the same mappings in the annotated data.", "Error analysis Figure 4 shows some of the elements of the confusion matrices for the test predictions of the best-performing unsupervised models in both languages.", "We see that many of the frequent errors are caused by the model failing to disambiguate between two plausible decodings of a Latin character, either mapped to it through different types of similarity (e.g., Latin n corresponds to н /n/ phonetically but to п visually, and Latin h corresponds to х /x/ phonetically but to н visually), or through the same type of similarity; such cases could be ambiguous for humans to decode as well.", "Other errors in Figure 4 illustrate the limitations of our parameterization and the resources we rely on.", "Our model does not allow one-to-many alignments, which leads to digraph interpretation errors such as decoding the digraph sh as two separate characters (с /s/ followed by a character mapped from h) rather than as the single character ш /ʃ/.", "Some artifacts of the resources our priors are based on also pollute the results: for example, the confusion between ь and х in Russian is explained by the Russian soft sign ь, which has no English phonetic equivalent, being arbitrarily mapped to the Latin x in one of the phonetic keyboard layouts.", "Comparison to UNMT The unsupervised neural model trained on Russian performs only marginally worse than the unsupervised WFST model with an informative prior, demonstrating that with a sufficient amount of data the neural architecture is powerful enough to learn the character substitution rules without the need for the inductive bias.", "However, we cannot say the same about Arabic: with a smaller training set (see Table 2), the UNMT model is outperformed by the unsupervised WFST even without an informative prior.", "The main difference in the performance between the two models comes down to the trade-off between structure and power: although the neural architecture captures long-range dependencies better due to having a stronger language model, it does not provide an easy way of enforcing character-level constraints on the decoding process, which the WFST model encodes by design.", "As a result, we observe that while the UNMT model can recover whole words more successfully (for Russian it achieves a 45.8 BLEU score, while the best-performing unsupervised WFST is at 20.4), it also tends to arbitrarily insert or repeat words in the output, which leads to a higher CER.", "This paper tackles the problem of decoding non-standardized informal romanization used in social media into the original orthography without parallel text.", "Figure 4: Fragments of the confusion matrix comparing test-time predictions of the best-performing unsupervised models for Arabic (left) and Russian (right) to human annotations.", "We train a WFST noisy-channel model to decode romanized Egyptian Arabic and Russian to their original scripts with the stepwise EM algorithm combined with curriculum learning, and demonstrate that while the unsupervised model by itself performs poorly, introducing an informative prior that encodes the notion of phonetic or visual character similarity brings its performance substantially closer to that of the supervised model.", "The informative priors used in our experiments are constructed using sets of character mappings compiled for other purposes but using the same underlying principle (phonetic keyboard layouts and the Unicode confusable symbol list).", "While these mappings provide a convenient way to avoid formalizing the complex notions of phonetic and visual similarity, they are restrictive and do not capture all the diverse aspects of similarity that idiosyncratic romanization uses, so designing more suitable priors by operationalizing the concept of character similarity could be a promising direction for future work.", "Another research avenue that could be explored is modeling specific user preferences: since each user likely favors a certain set of character substitutions, allowing user-specific parameters could improve decoding and be useful for authorship attribution.", "This project is funded in part by the NSF under grants 1618044 and 1936155, and by the NEH under grant HAA256044-17.", "The authors thank John Wieting, Shruti Rijhwani, David Mortensen, Nikita Srivatsan, and Mahmoud Al Ismail for helpful discussion, Junxian He for help with the UNMT experiments, Stas Kashepava for data annotation, and the three anonymous reviewers for their valuable feedback." ]
[ "abstain", "abstain", "objective", "method", "objective", "objective", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "result", "objective", "result", "other", "other", "abstain", "abstain", "other", "other", "method", "other", "method", "other", "other", "other", "abstain", "method", "method", "method", "other", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "method", "other", "abstain", "objective", "abstain", "abstain", "other", "other" ]
[ "We present K nowledge Enhanced M ultimodal BART (KM-BART), which is a Transformer-based sequence-to-sequence model capable of reasoning about commonsense knowledge from multimodal inputs of images and texts.", "We adapt the generative BART architecture (Lewis et al., 2020) to a multimodal model with visual and textual inputs.", "We further develop novel pretraining tasks to improve the model performance on the Visual Commonsense Generation (VCG) task.", "In particular, our pretraining task of Knowledge-based Commonsense Generation (KCG) boosts model performance on the VCG task by leveraging commonsense knowledge from a large language model pretrained on external commonsense knowledge graphs.", "To the best of our knowledge, we are the first to propose a dedicated task for improving model performance on the VCG task.", "Experimental results show that our model reaches state-of-the-art performance on the VCG task (Park et al., 2020) by applying these novel pretraining tasks.", "Early work on Vision-Language models has been largely focused on pure understanding tasks (Tan and Bansal, 2019; Lu et al., 2019).", "These models, although improving model performance on understanding tasks such as Visual Question Answering (Antol et al., 2015), are not capable of multimodal generation tasks (You et al., 2016).", "To ease this problem, researchers have proposed various models (Zhou et al., 2020; Li et al., 2020) for generating texts based on visual inputs.", "These models are mainly pretrained on general visual and language understanding tasks such as masked language modeling and masked region modeling, which enable the models to build an The first three authors contribute equally to this work.", "alignment between visual and language features.", "However, only feature alignments are inadequate to enhance the model's ability in conducting complex multimodal commonsense reasoning, which requires the model to understand the underlying relations and effects between objects.", "Commonsense reasoning was traditionally studied on natural language (Rajani et al., 2019; Trinh and Le, 2018), while recent works have paid attention to commonsense reasoning with joint visual and language inputs.", "For instance, Zellers et al. (2019) proposes the task of Visual Commonsense Reasoning (VCR).", "However, the task focuses on understanding instead of generating as it asks the model to answer multiple-choice questions.", "A newly introduced dataset, Visual Commonsense Generation (VCG) (Park et al., 2020), provides a more challenging task by requiring the model to generate commonsense inferences about what might happen before / after , and the present intents of characters (see Table 2 for an example).", "In this work, we propose to tackle the task of VCG by leveraging our K nowledge Enhanced M ultimodal BART (Lewis et al., 2020), which we call KM-BART .", "KM-BART is a Transformer-based model consisting of an encoder and a decoder and is pretrained on carefully designed tasks for VCG.", "Figure 1 presents our model architecture 1 .", "1. We extend the BART model to process multimodal data of images and texts, and enable multimodal reasoning by introducing task-relevant tokens.", "2. To improve the model performance on Visual Commonsense Generation (VCG), we implicitly incorporate commonsense knowledge from external knowledge graphs to our 1 https://github.com/FomalhautB/ KM-BART-ACLKM-BART by designing a novel pretraining task, which we call Knowledge-based Commonsense Generation (KCG).", "3. 
Besides KCG, we further equip our KM-BART with standard pretraining tasks including Masked Language Modeling (MLM), Masked Region Modeling (MRM), as well as Attribution Prediction (AP) and Relation Prediction (RP).", "Experimental results show that all pretraining tasks are effective, and combining these pretraining tasks enable our KM-BART to achieve state-of-the-art performance on the VCG task.", "Visual-Language (VL) tasks such as Visual Question Answering (VQA) (Antol et al., 2015) and Image-Text Matching (Li et al., 2019) require the models to process multimodal inputs and comprehend visual and textual information simultaneously.", "Inspired by successful pretrained language models like BERT (Devlin et al., 2019) and GPT-2 (Rad-ford et al., 2019), numerous multimodal image-text pretraining and representation learning models (Tan and Bansal, 2019; Lu et al., 2019; Chen et al., 2020; Yu et al., 2020) have been proposed.", "These multimodal pretrained models use Transformers as backbone and are denoising autoen-coders trained to predict the alignment of image-text pairs and the semantics of masked words and image regions.", "The models mentioned above typically focus more on understanding tasks.", "To further bridge the gap between visual and textual clues in multimodal data, in addition to cross-modal understanding, a model should also acquire abilities to complete generation tasks, for example, the image-to-text task of Image Captioning (You et al., 2016).", "However, directly transferring a model pretrained on VL understanding tasks to generation tasks is infeasible, as these models are merely Transformer-based encoders and are thus not suitable for generation tasks.", "Zhou et al. (2020) ease this problem by using a Transformer-based network as both an encoder and a decoder, making the model capable of generating texts based on visual and textual inputs.", "While Li et al. 
(2020) propose OSCAR, which improves the generation ability by introducing object tags as an additional clue during pretraining.", "These models achieve state-of-the-art performance in downstream multimodal generation tasks such as Image Captioning (You et al., 2016).", "Commonsense knowledge refers to the necessary level of practical knowledge and reasoning about everyday situations and events common among most people (Sap et al., 2020).", "For example, one should know that water is for drinking and sunshine makes people warm.", "Simple as it looks, enabling artificial intelligence to conduct commonsense reasoning has been difficult for learning-based models (Gunning, 2018).", "Researchers have resorted to knowledge graphs due to their exact graph-structured representation of knowledge to overcome this problem.", "For example, ConceptNet (Speer et al., 2017) is a knowledge graph with nodes representing general concepts and edges indicating relational knowledge between concepts.", "Another commonsense knowledge graph, ATOMIC (Sap et al., 2019), extends nodes to natural language phrases, and edges to relations such as intent , attribution , effect , etc.", "Despite improvements in modeling commonsense knowledge, graph-based methods require heavy human engineering, making it challenging to scale robustly.", "For instance, model performance usually deteriorates dramatically when retrieved contextual knowledge is noisy due to imperfect knowledge matching (Lin et al., 2019).", "Therefore, we implicitly leverage external knowledge using supervision signals inferred by COMET (Bosselut et al., 2019), which is a Transformer-based, generative model pretrained on commonsense knowledge graphs including ConceptNet and Atomic.", "Given a natural language phrase and a relation type, COMET generates natural language commonsense descriptions.", "In summary, on the one hand, existing cross-modal architectures not focusing on commonsense interpretation as their pretraining tasks are designed for multimodal understanding, making them unsuitable for the downstream VCG task.", "On the other hand, Transformer-based generative models such as COMET (Bosselut et al., 2019) cannot generate commonsense inferences from cross-modal inputs.", "Therefore, in this work, we propose KM-BART to conduct the task of Visual Commonsense Generation (VCG).", "Our KM-BART is pretrained on a Decoder Encoder v img scores e <img_feat> e <img> e <cls> e <img_feat> ... e </img> e <s> e A e cat e sits e under ... e <img> e <caption> v 1 v zero v 3 ... e </img> e <event> e </event> e <mlm> e A e <mask> e sits e under ... e </mlm> A cat sits under ... 
"Experimental results show that our KM-BART achieves state-of-the-art performance on the VCG task.", "In this section, we describe our methodology for Visual Commonsense Generation.", "Section 3.1 gives our model architecture.", "Section 3.2 introduces our pretraining tasks as well as our self-training based data filtering technique.", "Figure 1 illustrates the architecture of our KM-BART.", "The backbone of our model is BART (Lewis et al., 2020), which is a Transformer-based sequence-to-sequence autoencoder.", "We modify the original BART to adapt the model to cross-modality inputs of images and texts.", "We add special tokens to adapt the model to different pretraining/evaluation tasks.", "In the following subsections, we give the details of our visual feature extractor, the encoder, and the decoder.", "Following previous work on Vision-Language models (Tan and Bansal, 2019; Lu et al., 2019), we use a convolutional neural network pretrained on the COCO dataset to extract visual embeddings, which are subsequently fed to the Transformer-based cross-modal encoder.", "Specifically, we use the pretrained Mask R-CNN (He et al., 2017) from detectron2 (https://github.com/facebookresearch/detectron2).", "For each image, the pretrained Mask R-CNN proposes the bounding boxes for detected objects.", "The area within a bounding box is a Region of Interest (RoI).", "We leverage the intermediate representations of the RoIs in the Mask R-CNN to obtain fixed-size embeddings for the RoIs $V = \{v_1, \ldots, v_i, \ldots, v_N\}$, where i is the index over RoIs, and N is the number of RoIs for an image.", "The visual embedding of the i-th RoI is $v_i \in \mathbb{R}^d$, where d is the embedding dimension.", "For each of the RoIs, the Mask R-CNN also outputs the class distribution $p(v_i)$, which is later used for Masked Region Modeling.",
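For illustration, a torchvision-based stand-in for the detectron2 feature extraction described above might look as follows. The attribute names (`backbone`, `rpn`, `roi_heads.box_roi_pool`, `box_head`, `box_predictor`) follow torchvision's GeneralizedRCNN layout and may differ across versions and from the paper's released code.

```python
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def roi_embeddings(image: torch.Tensor):
    images, _ = model.transform([image])       # resize / normalize
    feats = model.backbone(images.tensors)     # FPN feature maps
    proposals, _ = model.rpn(images, feats)    # candidate boxes (RoIs)
    pooled = model.roi_heads.box_roi_pool(feats, proposals, images.image_sizes)
    v = model.roi_heads.box_head(pooled)       # fixed-size RoI embeddings v_i
    class_logits, _ = model.roi_heads.box_predictor(v)
    p = class_logits.softmax(-1)               # class distributions p(v_i) for MRM
    return v, p
```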
"Following Lewis et al. (2020), the encoder of our model is based on a multi-layer bidirectional Transformer.", "We introduce special tokens to adapt it to our pretraining and downstream evaluation tasks.", "Specifically, each example starts with a special token indicating the task type of the current example.", "For our pretraining task of Knowledge-Based Commonsense Generation (see Section 3.2.1), we use <before>, <after>, or <intent> as the starting special token.", "For Attribute Prediction and Relation Prediction (Section 3.2.2), we use <region caption>.", "Finally, for Masked Language Modeling and Masked Region Modeling, we use <caption>.", "Furthermore, to inform the model of the different modalities of inputs, we add three sets of different special tokens: for images, we use <img> and </img> to indicate the start and the end of visual embeddings, respectively.", "For texts, we introduce different special tokens to distinguish between two sets of textual inputs: events and captions.", "Events are image descriptions which the model uses for reasoning about future/past events or present intents of characters in the commonsense generation task, while captions are for Masked Language Modeling, where linguistic information plays a more important role.", "Hence, to inform the model of these two types of textual inputs, we use <event> and </event> for events, and <mlm> and </mlm> for captions.", "In the following sections, we denote textual inputs of words and special tokens by $W = \{w_1, \ldots, w_T\}$, where T is the length of the textual inputs.", "For a token w, its embedding is $e \in \mathbb{R}^d$, where d is the dimension of the embeddings.", "The decoder of our model is also a multi-layer Transformer.", "Unlike the encoder, which is bidirectional, the decoder is unidirectional, as it is supposed to be autoregressive when generating texts.", "The decoder does not take the visual embeddings as inputs.", "Instead, we use embeddings of the special token <img_feat> to replace the actual visual embeddings.", "For Masked Region Modeling and Masked Language Modeling, we use <cls> to replace the masked regions or words (see Figure 1).", "The model should predict the masked words and the class distribution of the masked regions during pretraining.",
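To make the input format concrete, here is a small sketch that assembles a textual encoder input from the special tokens above. The `<img_feat>` placeholders stand in for the actual visual embeddings inside the model; the helper function is hypothetical, though the token spellings follow the paper's notation.

```python
from typing import Optional

def build_encoder_input(task: str, n_rois: int, event: Optional[str]) -> str:
    assert task in {"before", "after", "intent", "region caption", "caption"}
    img = "<img> " + " ".join(["<img_feat>"] * n_rois) + " </img>"
    parts = [f"<{task}>", img]
    if event is not None:
        parts.append(f"<event> {event} </event>")
    return " ".join(parts)

print(build_encoder_input("intent", 3, "2 is holding an envelope"))
# <intent> <img> <img_feat> <img_feat> <img_feat> </img> <event> 2 is holding an envelope </event>
```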
"To pretrain our model, we use four image-text datasets: the Conceptual Captions Dataset (Sharma et al., 2018), the SBU Dataset (Ordonez et al., 2011), the Microsoft COCO Dataset (Lin et al., 2014), and Visual Genome (Krishna et al., 2017).", "In the remainder of this section, we use D to denote the individual dataset for each of the pretraining tasks.", "Statistics of the datasets are given in Table 1.", "The above datasets consist of examples of parallel images and texts and are widely used in previous work (Tan and Bansal, 2019; Lu et al., 2019; Zhou et al., 2020; Yu et al., 2020).", "The Knowledge-based Commonsense Generation (KCG) task aims to improve the performance of KM-BART on the VCG task.", "We leverage knowledge induced from COMET (Bosselut et al., 2019), which is a large language model pretrained on external commonsense knowledge graphs.", "Given a natural language phrase and a relation as inputs, COMET generates natural language phrases as commonsense descriptions.", "Relations of COMET include xIntent, xWant, xNeed, xReact and xEffect.", "We only use COMET to generate new commonsense descriptions on the SBU and COCO datasets due to limits in computational power for pretraining.", "For each image-text pair, we use COMET to generate commonsense descriptions from the text using all five relations mentioned above.", "To adapt COMET-generated commonsense knowledge to VCG, we consider the relations xIntent and xWant from COMET as intent, xNeed as before, and xReact and xEffect as after.", "In this way, we generate additional commonsense knowledge for the SBU and COCO datasets.", "The newly generated dataset has more than 3.6 million examples (Table 3).", "However, the generated commonsense knowledge is not always reasonable, as only textual information is used while the visual information is completely ignored.", "To ease this problem, we further filter the dataset by employing a self-training based data filtering strategy.", "Self-Training Based Data Filtering Our strategy aims to filter the generated commonsense knowledge dataset so that the examples in the filtered dataset closely resemble the examples in the VCG dataset.", "To achieve this goal, we first initialize our KM-BART with BART parameters and finetune KM-BART on the VCG dataset for 30 epochs.", "The finetuned KM-BART already has a good performance on the VCG dataset, with a CIDER score of 39.13 (see Table 4).", "We then leverage this finetuned model to evaluate the quality of the commonsense descriptions generated by COMET.", "We feed the corresponding images, texts, and relations as inputs to the finetuned KM-BART and then compute the cross-entropy (CE) loss of the COMET-generated commonsense descriptions.", "Table 2: An example from the VCG dataset. Event: 2 is holding an envelope.
Task    Model          Generated sentences
intent  without event  give 1 some bad news; reassure 1; contemplate what 1 is saying to her
intent  with event     see what the letter said; give mail to 1; open the envelope
intent  ground truth   receive the envelope from 1; see what's inside the envelope
before  without event  walk up to 1; have seen 1 in the distance; be interested in what 1 has to say
before  with event     pick the envelope up; call 1 to meet him; walk to 1
before  ground truth   receive mail; be given an envelope; bring the envelope with her
after   without event  finish telling 1 she has a difficult time; ask 1 what the papers are for; let go of 1
after   with event     open the envelope; hand the envelope to 1; embrace 1
after   ground truth   read the contents of the envelope to 1; hand the envelope to 1; read the love letter", "We observe that commonsense descriptions with a lower CE loss make more sense than those with a higher CE loss.", "Notice that when computing the CE loss of the COMET-generated commonsense descriptions, our KM-BART leverages both the textual inputs and the visual inputs.", "We provide examples of our data filtering strategy in the Supplementary Material.",
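A sketch of this self-training filter follows: each COMET-generated description is scored with the VCG-finetuned KM-BART and kept only if its loss is low. `model` and `tokenizer` are assumed to follow a standard seq2seq interface, and the `visual_inputs` keyword is a hypothetical stand-in for however the implementation feeds RoI features to the encoder.

```python
import torch

CE_THRESHOLD = 3.5  # the cut-off used in the paper

@torch.no_grad()
def keep_example(model, tokenizer, roi_feats, relation, event, description):
    enc = tokenizer(f"<{relation}> <event> {event} </event>", return_tensors="pt")
    labels = tokenizer(description, return_tensors="pt").input_ids
    out = model(input_ids=enc.input_ids,
                visual_inputs=roi_feats,     # hypothetical kwarg for RoI features
                labels=labels)
    return out.loss.item() < CE_THRESHOLD    # mean token-level cross-entropy
```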
"We compute the CE loss for all the commonsense descriptions in the VCG dataset and the new dataset generated by COMET.", "Figure 2 shows the distributions of the CE loss for the two datasets.", "Figure 2: The distribution of the average cross-entropy on 10000 samples in the VCG dataset and our enhanced dataset.", "We observe that commonsense descriptions generated by COMET result in higher CE losses, which is expected, as images are completely ignored when using COMET to generate natural language commonsense descriptions.", "We only keep the examples whose CE loss is below 3.5.", "Table 3 shows the statistics of the generated datasets before and after data filtering.", "By filtering, we keep only 1.46 million examples, roughly accounting for 40% of the original examples.", "Finally, we leverage the newly generated commonsense knowledge dataset by pretraining KM-BART on it.", "We expect that by pretraining, the model reaches a higher performance on the VCG dataset.", "Let $S = \{w_1, \ldots, w_L\}$ be a commonsense description from the newly generated dataset D; the loss function for KCG is: $\mathcal{L}_{KCG}(\theta) = -\mathbb{E}_{(W,V) \sim D} \sum_{l=1}^{L} \log P_\theta(w_l \mid w_{<l}, W, V)$ (1), where L is the length of the generated sequence, l is the index to individual tokens in the target commonsense description S, V and W are the visual and textual inputs, respectively, and $\theta$ represents the model parameters to be optimized.", "The Visual Genome dataset consists of 2.3 million relationships and 2.8 million attributes.", "To utilize these data, we use attribute prediction (AP) and relation prediction (RP) as pretraining tasks, which enable the model to learn intrinsic properties among different objects in an image.", "In the AP task, we feed the output vectors of the decoder for each image feature into an MLP classifier.", "In the RP task, we concatenate the two output vectors of the decoder for each image feature pair and feed them into another MLP classifier.", "We use the cross-entropy loss for both tasks.", "We denote the indices for AP by $1 \leq j \leq A$ and the indices for RP by $1 \leq k \leq R$, where A is the number of AP examples and R is the number of RP examples.", "We denote the label for the j-th AP example by $L_a(v_j)$, and the label for the k-th RP example by $L_r(v_{k_1}, v_{k_2})$, where $v_{k_1}$ and $v_{k_2}$ are the two RoIs of the current RP example.", "The loss function for the AP task is: $\mathcal{L}_{AP}(\theta) = -\mathbb{E}_{(W,V) \sim D} \sum_{j=1}^{A} \log P_\theta(L_a(v_j) \mid W, V)$ (2), and the loss function for the RP task is: $\mathcal{L}_{RP}(\theta) = -\mathbb{E}_{(W,V) \sim D} \sum_{k=1}^{R} \log P_\theta(L_r(v_{k_1}, v_{k_2}) \mid W, V)$ (3).", "3.2.3 Masked Language Modeling Following previous works (Devlin et al., 2019; Liu et al., 2019), we randomly mask the input textual tokens with a probability of 15% in the Masked Language Modeling (MLM) task.", "Within this 15% of the tokens, we use <mask> to replace the masked token with a probability of 80%, use a random token to replace it with a probability of 10%, and keep the masked token unchanged with a probability of 10%.", "We denote the mask indices by $1 \leq m \leq M$, where M is the number of masked tokens.", "We denote the masked tokens by $w_m$ and the remaining tokens that are not masked by $w_{\setminus m}$; the loss function for MLM is defined as: $\mathcal{L}_{MLM}(\theta) = -\mathbb{E}_{(W,V) \sim D} \sum_{m=1}^{M} \log P_\theta(w_m \mid w_{\setminus m}, W, V)$ (4).",
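The 15% / 80-10-10 masking scheme above is the standard BERT recipe; a small illustrative sketch (with a dummy vocabulary, not the paper's data pipeline) follows.

```python
import random

MASK = "<mask>"
VOCAB = ["a", "cat", "sits", "under", "the", "tree"]  # dummy replacement vocabulary

def mask_tokens(tokens):
    """Mask 15% of tokens; of those, 80% become <mask>, 10% a random token,
    10% stay unchanged.  Unmasked positions carry no prediction target."""
    out, targets = [], []
    for tok in tokens:
        if random.random() < 0.15:
            targets.append(tok)                   # the model predicts the original
            r = random.random()
            if r < 0.8:
                out.append(MASK)                  # 80%: replace with <mask>
            elif r < 0.9:
                out.append(random.choice(VOCAB))  # 10%: random token
            else:
                out.append(tok)                   # 10%: keep unchanged
        else:
            out.append(tok)
            targets.append(None)                  # not a prediction target
    return out, targets

print(mask_tokens("a cat sits under the tree".split()))
```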
"3.2.4 Masked Region Modeling In the Masked Region Modeling (MRM) task, we sample image regions and mask the corresponding feature vectors with a probability of 15%.", "Each masked vector is replaced by a vector filled with zeros.", "The model needs to predict the distribution over semantic classes for the masked regions.", "The loss function minimizes the KL divergence between the output distribution and the distribution predicted by the Mask R-CNN used in visual feature extraction.", "We denote the mask indices by $1 \leq n \leq N$, where N is the number of masked regions.", "We let $p(v_n)$ denote the class distribution of the masked region $v_n$ detected by the Mask R-CNN and $q(v_n)$ denote the class distribution output by our model; the loss function for MRM is then: $\mathcal{L}_{MRM}(\theta) = \mathbb{E}_{(W,V) \sim D} \sum_{n=1}^{N} D_{KL}(p(v_n) \,\|\, q(v_n))$ (5).", "3.2.5 Combining Losses To combine all the losses described above, we weight each of the losses by $W_{KCG}, W_{AP}, W_{RP}, W_{MLM}, W_{MRM} \in \mathbb{R}$.", "The weights are chosen to roughly balance every term during the training phase.", "The final loss is: $\mathcal{L} = W_{KCG}\mathcal{L}_{KCG} + W_{AP}\mathcal{L}_{AP} + W_{RP}\mathcal{L}_{RP} + W_{MLM}\mathcal{L}_{MLM} + W_{MRM}\mathcal{L}_{MRM}$ (6).",
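Eq. (6) amounts to a simple weighted sum; a sketch follows, where the weight values are placeholders for illustration (the paper only states that the weights roughly balance the terms).

```python
W = {"kcg": 1.0, "ap": 1.0, "rp": 1.0, "mlm": 1.0, "mrm": 1.0}  # placeholder weights

def total_loss(losses):
    """`losses` maps each task name to its scalar loss (tensor) for the batch."""
    return sum(W[k] * losses[k] for k in W)

# usage sketch with per-task losses already computed for a batch:
print(total_loss({"kcg": 1.2, "ap": 0.4, "rp": 0.5, "mlm": 0.9, "mrm": 0.3}))
```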
"Table 4: Results of different pretraining tasks on the VCG validation set.
Init / Pretraining task(s)       Event  BLEU-2  METEOR  CIDER  Unique  Novel
Random init, w/o pretraining       Y     22.28   14.55  36.49   27.81  29.71
Random init, KCG                   Y     22.16   14.52  37.06   33.01  31.20
Random init, KCG (before filt.)    Y     22.24   14.43  37.08   33.64  31.37
Random init, AP & RP               Y     22.49   14.64  37.18   28.97  30.28
Random init, MLM & MRM             Y     22.44   14.70  37.44   31.16  31.64
Random init, Full Model            Y       -       -      -       -      -
BART init, w/o pretraining         Y     22.86   15.17  39.13   27.41  28.32
BART init, KCG                     Y     23.47   15.02  39.76   27.28  27.97
BART init, KCG (before filt.)      Y     22.90   14.98  39.01   26.59  27.13
BART init, AP & RP                 Y     22.93   14.99  39.18   28.06  28.88
BART init, MLM & MRM               Y     23.13   14.93  38.75   28.68  28.74
BART init, Full Model              Y     23.25   15.01  39.20   35.71  32.85
Random init, w/o pretraining       N     13.54   10.14  14.87   12.19  24.22
Random init, KCG                   N     13.64   10.12  15.34   15.95  25.79
Random init, KCG (before filt.)    N     13.67   10.13  15.22   16.47  24.97
Random init, AP & RP               N     13.83   10.28  15.48   14.60  24.75
Random init, MLM & MRM             N     14.36   10.73  16.72   15.86  26.12
Random init, Full Model            N     14.49   10.86  17.37   16.89  25.69
BART init, w/o pretraining         N     8.108   8.673  6.335   4.850  10.55
BART init, KCG                     N     13.28   10.06  14.17   13.08  25.70
BART init, KCG (before filt.)      N     13.29   10.12  13.93   13.51  25.59
BART init, AP & RP                 N     12.17   9.503  12.49   20.98  29.01
BART init, MLM & MRM               N     13.36   10.22  14.52   15.02  28.36
BART init, Full Model              N       -       -      -       -      -", "We describe our experiments in this section.", "Section 4.1 describes the experimental settings of different pretraining and initialization strategies.", "Section 4.2 gives the evaluation task and metrics.", "We show our results in Section 4.3.", "In Section 4.4, we give example inferences generated by our model.", "We present the human evaluation results in Section 4.5.", "In our experiments, following the base model from Lewis et al. (2020), we fix the model architecture to a 6-layer encoder and a 6-layer decoder.", "To understand how each pretraining task helps the model performance on the downstream task of VCG, we ablate on the pretraining tasks.", "We use the following experimental settings: (1) without any pretraining; (2) only with Knowledge-based Commonsense Generation; (3) only with Attribute Prediction and Relation Prediction; (4) only with Masked Language Modeling and Masked Region Modeling; (5) with all the pretraining tasks combined.", "For the setting with only Knowledge-based Commonsense Generation, we further compare the model performance before and after data filtering (see Section 3.2.1).", "For each of the above settings, we initialize the model either from random or from BART weights.", "Besides, we are most interested in the model performance under two settings (see the second column of Table 4): (1) only using images as inputs; (2) using both images and event descriptions as inputs.", "Note that when only using images as inputs for evaluation, we also do not use textual inputs during pretraining/finetuning.", "We evaluate our model on the recently proposed Visual Commonsense Generation (VCG) dataset (Park et al., 2020).", "Given an image and a description of the event in the image, the task aims to predict the events which might happen before/after, and the present intents of the characters in the given image.", "The dataset consists of 1174K training examples and 146K validation examples.", "Some examples in the dataset share the same images or events, but with different inferences for the events before/after or the intents at present.", "Table 2 gives an example of the dataset.", "We report our model performance on the validation set, as the test set is not available yet.", "Besides event descriptions, the VCG dataset also provides Place and Person information for each image.", "Note that although Park et al. (2020) also leverage the Place and Person information for training and evaluation, we argue that such information is not generally available in normal settings, where only images and event descriptions are given.", "Hence, we do not use the Place and Person information in our KM-BART.", "As an additional reference, we nevertheless show in Table 5 the best-performing models from Park et al. (2020), which also use Place and Person information.", "We use three automatic evaluation metrics, including BLEU-2 (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014), and CIDER (Vedantam et al., 2015).", "Following Park et al. (2020), we also report Unique as the number of inference sentences unique in the generated sentences divided by the total number of sentences, and Novel as the number of generated sentences not in the training data divided by the total number of sentences.", "We first ablate on different pretraining tasks to understand the effect of each task.", "We then combine all the pretraining tasks together to train our full model.", "Table 5: Results on the VCG validation set with nucleus sampling.
Model                Modalities                 Event  BLEU-2  METEOR  CIDER  Unique  Novel
Park et al. (2020)a  Image+Event+Place+Person     N     10.21   10.66  11.86   33.90  49.84
Park et al. (2020)b  Image                        N      6.79    7.13   5.63   26.38  46.80
Ours                 Image                        N      9.04    8.33   9.12   50.75  52.92
Park et al. (2020)c  Image+Event+Place+Person     Y     13.50   11.55  18.27   44.49  49.03
Park et al. (2020)d  Image+Event                  Y     12.52   10.73  16.49   42.83  47.40
Ours                 Image+Event                  Y     14.21   11.19  21.23   57.64  58.22", "As a last step, we pick the best-performing models to compare against the previous state-of-the-art system (Park et al., 2020).", "Table 4 shows the effect of each pretraining task on our KM-BART on the VCG dataset.", "We can see that all our pretraining tasks help improve the model performance.", "Most importantly, we observe that although filtering on the commonsense generation pretraining task reduces the dataset size by more than 60%, pretraining with KCG still reaches comparable or better performance than pretraining with KCG (before filtering).", "This demonstrates that our self-training based filtering technique is helpful, as it helps the model reach similar or even better performance with less training data.", "The advantage is most evident when we initialize from BART parameters and use both images and event descriptions as inputs.", "Under this setting, pretraining with KCG outperforms pretraining with KCG (before filtering) in terms of all the evaluation metrics.", "For using both images and event descriptions as inputs, the model performs better when initialized from pretrained BART parameters, as pretrained BART can better leverage the information in the event descriptions.", "Hence, to obtain our full KM-BART model for using images and events as inputs, we adopt the setting of initializing from BART parameters.", "Experimental results show that our full model reaches high performance on BLEU-2, METEOR and CIDER, and that the full model generates the most unique and novel inferences.", "For using only images as inputs, models initialized from random parameters outperform those initialized from BART parameters.", "We argue that initializing from BART parameters results in optimization disadvantages, where the model has to switch from pure textual inputs to pure visual inputs.", "This observation becomes evident as the model performs the worst when no pretraining is used, which indicates that the model has to entirely rely on finetuning on the VCG dataset to adapt to visual inputs.", "Therefore, for using only images as inputs, we obtain our full KM-BART model by initializing from random parameters.", "Our full model reaches the best performance on BLEU-2, METEOR and CIDER, and is the second best in terms of Unique.", "In Table 5, we compare our full model to the previous state-of-the-art (Park et al., 2020).", "We observe that although our full model taking as inputs images and event descriptions does not use Place and Person information, it still outperforms the previous state-of-the-art (Park et al. (2020)c).", "For using only images as inputs, our model also performs better than previous results (Park et al. (2020)b).", "Furthermore, our model reaches comparable performance to Park et al. (2020)a in terms of BLEU-2, METEOR and CIDER, with much higher performance on Uniqueness and Novelty, even though our model uses much less information during training compared to Park et al. (2020)a.", "In Table 2, we show example inferences and compare the results of our model predictions to the ground truths.", "The generated sentences from the model without event descriptions as inputs can already capture the most important information of the commonsense.", "We also observe that adding event descriptions to the inputs helps the model generate more details.", "We give more examples of our model in the Appendix.", "Note that model performance in Table 5 is not directly comparable to that of Table 4, as we use different decoding strategies to generate a different number of inference sentences per example in these two tables.", "We conduct a human evaluation to further understand how humans perceive the inferences generated by our KM-BART.", "We employ a comparison approach for a better assessment between our KM-BART and the model from Park et al. (2020).", "To be specific, we randomly sample 30 examples from the VCG validation set.", "For each example, we use our KM-BART or the baseline model to generate 5 sets of inferences, each of which consists of the task types before, after, and intent.", "We use two settings for our human evaluation: (1) with event: event descriptions are given as input during inference time; (2) without event: event descriptions are not given during inference time.", "Under each of the settings, we compare our KM-BART model with the model from Park et al. (2020).", "We use the same 30 examples for each model under the two settings.", "For each example in a task type (before, after, or intent), we generate 5 inferences for one model of each setting.", "In total, we generate 450 inferences for each model of each setting during the human evaluation.", "For the same example, we use our KM-BART and the model from Park et al. (2020) to generate an inference under one of the three task types; the workers then choose the more reasonable inference from the two generated inferences.", "We hire three workers from Amazon Mechanical Turk (https://www.mturk.com/) to evaluate each inference.", "We take the majority vote of the three workers as the final evaluation for an inference.", "Among all the inferences, we use the percentage of cases in which one model is better than the other as the score of that model.", "For example, in Table 6, the score of our model (Ours) is 61.3 for the task type before when event descriptions are missing.", "This indicates that our model is better than the baseline model for the task type before in 61.3% of the cases.", "We also take the average over the three task types as the final score (see Total in Table 6).", "From Table 6, we can observe that our model outperforms Park et al. (2020) under both of the settings.", "To be specific, when event descriptions are not given, among all the inferences, our model is better than Park et al. (2020) in 66.7% of the cases.", "Furthermore, our model has a lead of at least 22.6% over Park et al. (2020) in each individual task.", "For example, our model generates better inferences in 68.7% of the cases in the task type after, while the model from Park et al. (2020) is only better than our model in 31.3% of the cases.", "We can obtain similar results when looking at the task types before and intent.", "When event descriptions are given, our model is still better than Park et al. (2020) in 55.1% of all the cases.", "For each individual task, the advantage of our model is smaller when event descriptions are given than when event descriptions are not given, showing that our model can better capture information from the images.", "In this paper, we propose Knowledge Enhanced Multimodal BART (KM-BART), which is a Transformer-based model capable of reasoning about and generating commonsense descriptions from cross-modality inputs of images and texts.", "We propose the pretraining task of Knowledge-Based Commonsense Generation, which improves the reasoning ability of KM-BART by leveraging a large language model pretrained on external commonsense knowledge graphs.", "We use the self-training technique to filter the automatically generated commonsense descriptions.", "Experimental results on the VCG task show that our KM-BART pretrained on the pretraining tasks reaches state-of-the-art performance.", "Further human evaluation demonstrates that our KM-BART can generate commonsense inferences of high quality.", "For future work, we plan to further expand our pretraining dataset for Knowledge-Based Commonsense Generation by including the Conceptual Captions Dataset (Sharma et al., 2018).", "Furthermore, while we argue that Place and Person information is not generally available in practical scenarios, we still plan to add Place and Person information to our model in the future." ]
[ "method", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "other", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "objective", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "objective", "objective", "method", "result", "objective", "abstain", "method" ]
[ "Building huge and highly capable language models has been a trend in the past years.", "Despite their great performance, they incur high computational cost.", "A common solution is to apply model compression or choose light-weight architectures, which often need a separate fixed-size model for each desirable computational budget, and may lose performance in case of heavy compression.", "This paper proposes an effective dynamic inference approach, called E-LANG, which distributes the inference between large accurate Super-models and light-weight Swift models.", "To this end, a decision making module routes the inputs to Super or Swift models based on the energy characteristics of the representations in the latent space.", "This method is easily adoptable and architecture agnostic.", "As such, it can be applied to black-box pre-trained models without a need for architectural manipulations, reassembling of modules, or re-training.", "Unlike existing methods that are only applicable to encoder-only backbones and classification tasks, our method also works for encoder-decoder structures and sequence-to-sequence tasks such as translation.", "The E-LANG performance is verified through a set of experiments with T5 and BERT backbones on GLUE, SuperGLUE, and WMT.", "In particular, we outperform T5-11B with an average computations speed-up of 3.3 on GLUE and 2.9 on SuperGLUE.", "We also achieve BERT-based SOTA on GLUE with 3.2 less computations.", "Code and demo are available here.", "With the introduction of influential language models such as BERT (Devlin et al., 2019), a trend in natural language processing (NLP) research has been to develop high capacity models and push their performance to new levels.", "Consequently, state-of-the-art (SOTA) results were achieved on various benchmarks using these models; GPT-3 (Brown et al., 2020), XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019), T5 (Raffel et al., 2020), ELECTRA (Clark et al., 2020), and DeBERTa (He et al., 2021) to name a few.", "A potential down-side, however, is that the number of parameters or float-ing point operations (FLOPs) for these models can get extremely large.", "For example, Gshard (Lep-ikhin et al., 2021) comes with 600B parameters with an enormous amount of computation.", "This in turn results in a higher inference latency, which is not desirable for latency-sensitive applications.", "A common solution to speed-up the large language models is to apply model compression (Gupta et al., 2020).", "Although generally successful, compression does come with a trade-off on accuracy, and may lose performance if compression is heavy.", "In addition, these methods usually compress a model to a fixed smaller size, where a separate model is required for each possible computational budget.", "An alternative approach explored in the literature is to leverage dynamic inferencing in a way that examples may be routed to different (potentially lower cost) paths throughout the network.", "For example, a temporal early-exit model (Shen et al., 2017; Yu et al., 2018) terminates the procedure of reading the input sequence when sufficient evidence has been found for accurate predictions.", "Instance-wise early-exiting (Xin et al., ACL 2020) is another technique, which allows a sample to adaptively choose from multiple available exit nodes if some conditions are met.", "Consequently, earlier exists require less computation and lead to a lower latency.", "Adjusting the size of the model at the inference time by choosing adaptive width and depth 
is also another approach employed for dynamic inference (Kim and Cho, 2021; Hou et al., 2020).", "There is a variety of adaptive/dynamic inference approaches proposed, however, a general down-side for many of these methods is that often times they require a careful architecture design, manipulation of network modules, or even re-training.", "In this paper, we propose a simple but rather effective approach of dynamically distributing the inference between the original large model (called the Super model) and a light-weight (e.g., compressed) model referred to as the Swift model.", "To this end, we design an energy-based decision making module that routes examples to the appropriate model based on the negative free energy of the latent space representations, such that the Swift model attains a high accuracy on the examples sent to it.", "The remaining samples are then forwarded to the Super model that is supposed to have a good performance on all examples.", "Since the Swift model can make highly accurate predictions over the majority of the samples, E-LANG significantly reduces the overall computational cost, while maintains the high accuracy of the Super model.", "Although simple, this strategy achieves SOTA results on multiple structures (e.g., T5 and BERT) and benchmarks (e.g., GLUE and SuperGLUE).", "Due to its desirable practical characteristics, this method is a strong candidate for the practical application of Super models.", "The main contributions of the paper are as follows: Combining Super models with high accuracy and latency and Swift models with lower accuracy and latency, to achieve high accuracy and low latency .", "In other words, by employing our method, we can achieve the high levels of accuracy provided by Super models, but at a lower computational cost.", "Our method is easily adoptable, architecture agnostic, and orthogonal to many other existing methods.", "It can be applied to black-box pre-trained models without a need for architectural manipulations, careful reassembling of modules, or re-training.", "An energy-based routing mechanism for directing examples to the Super or Swift.", "This provides a dynamic trade-off between the accuracy and computational cost that outperforms the previous works in both fixed-size and dynamic inference (with zero overhead for real-time adjustment of speed/accuracy).", "As such, E-LANG acts like a knob for adjusting the accuracy-latency trade-off in real-time during model serving.", "To the best of our knowledge, our method is the first generic approach to apply dynamic inference on both encoder-only and encoder-decoder architectures (e.g., T5) and also can extend the usage beyond classification tasks, to sequence-to-sequence tasks such as translation.", "As mentioned, compression is a widely used strategy to speed-up the large language models (Gupta et al., 2020; Gupta and Agrawal, 2022).", "This involves incorporating techniques such as quantization of weights and activations (Bai et al., 2021; Shen et al., 2020; Kim et al., 2021; Zhang et al., 2020; Jin et al., 2021), knowledge distillation (KD) (Hinton et al., 2015; Jiao et al., 2020; Sanh et al., 2019), pruning/sharing (Gordon et al., 2020; Chen et al., 2020), multi-device distribution (Banitalebi-Dehkordi et al., 2021), or a combination of these techniques (Cheng et al., 2017; Polino et al., 2018).", "Among all the compression techniques, creating a fixed-size small version of large models along with distillation has been popular in the recent years.", "Sanh et al. 
", "Sanh et al. (2019) introduced DistilBERT, a smaller version of BERT trained with distillation for general purposes.", "Another compact variant of BERT, MobileBERT (Sun et al., 2020), used inverted bottleneck structures and progressive knowledge transfer.", "TinyBERT (Jiao et al., 2020) also presented a novel two-stage transformer distillation for both pre-training and task-specific fine-tuning.", "In (Iandola et al., 2020), the usage of grouped convolutions was studied to design SqueezeBERT.", "ELM (Jiao et al., 2021), a layer mapping search framework, was also proposed for improving downstream BERT distillation.", "A recent method, GhostBERT (Huang et al., 2021), employed softmax-normalized 1D convolutions as ghost modules to generate more features with cheap operations.", "Although compression techniques are in general effective, they come with a trade-off on accuracy, and may lose performance in the case of high-ratio compression.", "In addition, an individual fixed-size model is required for each possible computational budget.", "As stated in the introduction, the alternative solution is dynamic inference, which can be achieved with either early-exit or length/depth-adaptive models.", "One of the first temporal early-exit strategies was proposed by ReasoNet (Shen et al., 2017), which stops its reading procedure when sufficient evidence has been found for answering a question.", "Similarly, in (Yu et al., 2018), an early stopping method applicable to classification tasks was presented.", "DeeBERT (Xin et al., 2020) also proposed an instance-wise multi-exit method that uses the entropy of the output probability distribution to speed up BERT inference.", "As a length-adaptive method, Kim and Cho (2021) introduced a dynamic inference framework with one-shot training of transformers for both sequence- and token-level classification.", "Also, in (Hou et al., 2020), an architecture named DynaBERT was proposed for adaptively adjusting the computation by choosing sub-networks of different widths and depths.", "Both Length-Adaptive and DynaBERT utilized knowledge distillation and data augmentation to improve their performance.", "Although early-exit and adaptive methods have made significant progress and work well in practice, they often require architectural manipulation and re-training.", "In addition, they are only applicable to encoder-only backbones and classification tasks.", "In contrast, our method works with out-of-the-box pre-trained models without a need for re-training, and is also applicable to encoder-decoder structures and sequence-to-sequence tasks.", "We propose a new energy-based joint inference method called E-LANG, where a large/accurate language model (Super) is jointly employed with a small/fast one (Swift) to achieve efficient inference without sacrificing accuracy.", "To this end, inspired by the method in (Akbari et al., 2021), a routing mechanism empowered by energy-based models (EBM) is introduced to dynamically distribute the input samples between the Super and Swift models.", "Similar to the out-of-distribution (OOD) detection problem, our goal is to identify the OOD samples that are hard for the Swift to handle and forward them to the Super model.", "On the other hand, we have the in-distribution data for which the Swift can make highly reliable and accurate predictions.", "In other words, the routing mechanism needs to detect whether or not the input data fits the Swift's distribution (i.e., the one the Swift has been trained with).
with).", "Inspired by the success of EBMs in dealing with OOD detection problems (Lee et al., 2019), the energy characteristics of data samples for an efficient and effective routing are investigated in our work.", "The overall framework of E-LANG is shown in Figure", "1. 3.1 Energy-Based Models The goal of EBM is to build an energy function denoted by E ( x ) : RD R that maps an input data x RD to a non-probabilistic energy value y R .", "To turn a collection of arbitrary energies for all possible outputs (denoted by Y ) into a normalized probability distribution, Gibbs distribution can be used as follows (LeCun et al., 2006): p ( y | x ) = e E ( x ,y ) (cid:82) y (cid:48) Y e E ( x ,y (cid:48) ) , (1) where the negative log of the denominator expresses the Helmholtz free energy (LeCun et al., 2006) defined as F ( x ) = log (cid:0) (cid:82) y (cid:48) Y e E ( x ,y (cid:48) ) (cid:1) .", "In machine learning, there is a deep relationship between the EBMs and discriminative models, which can be seen by connecting the Gibbs distribution in Equation (1) and the categorical distribution derived for a discriminative model.", "A discriminative classifier is defined as a function for mapping the input x to C real-valued logits (i.e., for C number of class labels): f ( x ) : RD RC .", "In order to derive a categorical distribution over C possible outputs, the softmax function is utilized: p ( y | x ) = e f y ( x ) (cid:80) Ci e f i ( x ) , (2) where f y ( x ) denotes the logit (probability) of the y th class label.", "between the Gibbs and categorical distributions defined in (1) and (2), the energy function for a given input ( x , y ) can be defined as E ( x , y ) = f y ( x ) .", "The free energy function F ( x ; f ) can then be obtained by taking the negative log of the categorical distribution denominator as: F ( x ; f ) = log C (cid:88) i e f i ( x ) .", "Our goal is to detect the easy samples suitable for the Swift, which are indeed the ones with high likelihood in the density function.", "The energy-based density function for Swift is then defined as: p ( x ) = e F ( x ; f ) (cid:82) x e F ( x ; f ) , (4) where the denominator is the normalized densities, which can be intractable to compute or estimate.", "The log ( (cid:82) x e F ( x ; f ) ) term has no effect on the distribution of the overall energy values because it is constant for all x .", "As a result, F ( x ; f ) , i.e., the negative free energy, has a linear alignment with the log likelihood function, which makes it a well-suited solution to the easy vs. 
", "Lower (free) energy values thus indicate higher likelihood and represent easier (better-fitting) samples for the Swift model.", "More precisely, for a threshold δ on the density function such that p(x) < δ, a corresponding threshold t on the negative free energy can be calculated according to (5) as -F(x; f) < t = log δ + log(∫_x e^{-F(x; f)}).", "In practice, for a given input, the energy function is applied to the outputs of the Swift model at inference time to calculate the energy score.", "Then, if the negative free energy is smaller than the threshold, the input is identified as a bad sample for the Swift and is sent to the Super model.", "Given the energy threshold t, the Swift classifier f(x), and the Super classifier g(x): R^D → R^C, the joint inference function J(x; f, g, t) → [1, C] for a classification task with C classes can then be expressed as: J(x; f, g, t) = f(x) if -F(x; f) ≥ t, and g(x) otherwise. (6)", "The proposed energy-based joint inference solution can be directly applied to encoder-only models such as BERT that are designed for text classification tasks.", "To this end, the energy scores corresponding to the BERT-based Swift model are obtained using Equation (3), and the joint inference is performed based on Equation (6).", "On the other hand, for encoder-decoder (auto-encoder) architectures such as T5, which are usually considered generative models, some modifications are required.", "Encoder-decoder models are basically designed for sequence-to-sequence (e.g., text-to-text) problems such as translation or summarization.", "Although such models can also be employed for classification tasks, they still treat the task as a text generation (sequence-to-sequence) problem, where the target labels and the output predictions are treated as a sequence or a piece of text.", "In Section 3.1, it was discussed that there is an inherent connection between discriminative classifiers and EBMs.", "In order to benefit from this characteristic for encoder-decoder architectures, we add an extra classification head (i.e., a single linear layer) to the Swift model.", "As encoders are commonly considered better feature extractors than decoders for training a classifier, we place the extra head after the Swift encoder.", "While freezing the pre-trained encoder (denoted by f_E), the extra energy head (denoted by h) is trained as a regular classification head with C class labels.", "Note that the decoder is not required for training the head.", "The corresponding free energy function is then defined as follows: F(x; f_E, h) = -log Σ_{i=1}^{C} e^{h_i(f_E(x))}, (7) where f_E(x) denotes the output of the encoder's last hidden state.", "These features are then fed to the extra head h to obtain the logits of the i-th class required for computing the energy scores.", "In this approach, as the decoder part of the Swift model is not needed for calculating the energy scores, less computation is involved and the joint inference is performed more efficiently.
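A minimal sketch of the routing rule in Eq. (6), assuming hypothetical `swift` and `super_model` callables that return classifier logits and a final prediction, respectively (this is not the authors' released API):

```python
import torch

@torch.no_grad()
def joint_inference(x, swift, super_model, t):
    logits = swift(x)                          # Swift logits f(x)
    if torch.logsumexp(logits, dim=-1) >= t:   # -F(x; f) >= t: easy sample
        return int(logits.argmax(dim=-1))      # Swift's own prediction
    return super_model(x)                      # hard sample goes to the Super
```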
", "For text-to-text (or sequence-to-sequence) problems such as translation, the output is a sequence of M word-pieces from a vocabulary/dictionary of size N.", "To still utilize the relationship between discriminative models and EBMs in designing and training the extra energy head, we can treat the text-to-text models as M multi-class classifiers.", "In this case, the number of class labels, i.e., C in (7), is equal to N.", "The final energy score is then calculated as the average of the M energy values as follows: F(x; f_E, h) = -(1/M) Σ_{m=1}^{M} log Σ_{i=1}^{C} e^{h_{m,i}(f_E(x))}, (8) where h_{m,i}(·) denotes the logit corresponding to the m-th word in the sequence and the i-th class label.", "Denoting the Swift's decoder by f_D, the joint inference function J(x; f, g, h, t) based on the energy scores in either Equation (7) or (8) is expressed as: J = f_D(f_E(x)) if -F(x; f_E, h) ≥ t, and g(x) otherwise. (9)", "In addition to energy, softmax and entropy (Xin et al., 2020) scores can also be used for analyzing the Swift model's performance in the routing mechanism.", "In this sub-section, we study their mathematical connection with the energy score and their potential to solve our problem.", "The softmax score for a classifier is expressed as: max_y p(y|x) = max_y e^{f_y(x)} / Σ_{i=1}^{C} e^{f_i(x)} = e^{f_max(x)} / Σ_{i=1}^{C} e^{f_i(x)}. (10)", "By taking the logarithm of both sides, we see the connection between the log of the softmax score and the free energy formulated in Equation (3): log max_y p(y|x) = f_max(x) - log Σ_{i=1}^{C} e^{f_i(x)} = f_max(x) + F(x; f), (11) where all logits are shifted by their maximum f_max(x).", "Plugging the energy term from (5) into (11) yields: log max_y p(y|x) = -log p(x) + f_max(x) - log(∫_x e^{-F(x; f)}). (12)", "It is observed that for samples with a high likelihood of being in the Swift's distribution, the free energy goes lower, but the max logit tends to go higher.", "Due to this shifting, unlike the energy score, the softmax score is not well aligned with the probability density p(x).", "As a result, the softmax score is less reliable for our routing module to analyze the performance of the Swift.", "The entropy score is a measure of the randomness in the processed information, and is calculated as: H(x; f) = -Σ_{i=1}^{C} f_i(x) log f_i(x), (13) where f_i(x) here denotes the (softmax) probability corresponding to the i-th class label.", "Let U be the internal energy, i.e., the expectation of the energy function (Oh et al., 2020), defined by: U(x; f) = Σ_{i=1}^{C} E(x, i) f_i(x). (14)", "According to Oh et al. (2020), the entropy can be defined in terms of the internal and free energy functions as H(x; f) = U(x; f) - F(x; f), where all logits are shifted by the internal energy U.", "Substituting the free energy from (5) yields: H(x; f) = log p(x) + U(x; f) + log(∫_x e^{-F(x; f)}), (15) which shows that, due to the shifting caused by the internal energy, the entropy is not reliably aligned with the probability density p(x).", "Thus, unlike the energy score, it is less suitable as a routing mechanism.
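For concreteness, the three candidate routing scores discussed above can be computed side by side from the same logits; a small PyTorch sketch (function name is ours):

```python
import torch
import torch.nn.functional as F

def routing_scores(logits: torch.Tensor):
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    softmax_score = probs.max(dim=-1).values            # Eq. (10)
    entropy_score = -(probs * log_probs).sum(dim=-1)    # Eq. (13)
    energy_score = torch.logsumexp(logits, dim=-1)      # -F(x; f), Eq. (3)
    return softmax_score, entropy_score, energy_score
```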
", "In this section, the performance of E-LANG on different architectures (T5 and BERT) and benchmarks (GLUE (Wang et al., 2019b), SuperGLUE (Wang et al., 2019a), and WMT (Bojar et al., 2016)) is evaluated and compared with the Super models and previous works.", "In Table 1, the T5-based results on the GLUE, SuperGLUE, and WMT benchmarks are reported.", "For all the tasks, we use T5-11B (with 87x10^11 FLOPs) and T5-large (with 4.25x10^11 FLOPs) as our Super and Swift models, respectively.", "The average GPU-based running time and accuracy of both models compared with E-LANG are also summarized in the table.", "Note that the T5 models used in this experiment have been separately fine-tuned on each of the downstream tasks given in Table 1.", "The extra energy head for each of these tasks was also separately trained and used based on the task-specific number of classes, i.e., C in Equation (7).", "The average per-sample FLOPs is computed as (N_su F_su + N_sw (F^E_sw + F^D_sw + F_h)) / (N_su + N_sw), where N_su and N_sw are respectively the number of samples processed by the Super (with F_su FLOPs) and the Swift (with F^E_sw, F^D_sw, and F_h FLOPs for the encoder, decoder, and energy head).", "Note that F_h is equal to 0.00001x10^11 FLOPs, which is a very insignificant overhead in our framework.", "As presented in Table 1, E-LANG can reach the Super model's accuracy on all GLUE tasks with an average 3.3X FLOPs and 1.8X running-time speed-up.", "For some tasks such as QNLI, MRPC, and COLA, we even outperform the Super model, which leads to a higher average accuracy on GLUE (89.7% vs. 89.5% for the Super model).", "For the SuperGLUE benchmark, with an average FLOPs and running-time speed-up of 2.9X and 2.0X, our method achieves the same accuracy as the Super model on MultiRC and CB, and better accuracy on RTE and WiC.", "On BoolQ and COPA, although 99% and 97% of the Super's accuracy are respectively obtained, it is with on average 1.7X and 1.4X less FLOPs and latency.", "In order to analyze the generality of E-LANG to NLP problems other than text classification (Section 3.2.1), we also apply our method to two text-to-text tasks: SuperGLUE's WSC and WMT's English-to-Romanian (En-Ro) translation.", "As given in the table, our method achieves the Super model's accuracy on both WSC and En-Ro with 4.2X and 1.4X less FLOPs, respectively.
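A small sketch of the average-FLOPs accounting above; it charges Swift-routed samples the encoder, decoder, and energy-head cost and Super-routed samples only F_su, which is our reading of the per-model attribution in the text (an assumption, since the Swift scoring overhead of rejected samples could also be counted):

```python
def average_flops(n_su, n_sw, f_su, f_e_sw, f_d_sw, f_h):
    # Swift-routed samples pay encoder + decoder + energy-head FLOPs; the
    # Swift scoring overhead of Super-routed samples is neglected here.
    total = n_su * f_su + n_sw * (f_e_sw + f_d_sw + f_h)
    return total / (n_su + n_sw)
```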
", "Figure 2 shows the accuracy vs. FLOPs trade-off curves on the GLUE benchmarks.", "The curves related to all tasks are given in the supplementary materials.", "The trade-off points on the curves are dynamically achieved at inference time by selecting different thresholds, i.e., t in Equations (6) and (9).", "Larger values of t result in routing more input data to the Super model, which consequently provides more accurate but slower inference.", "As the Swift is able to make accurate predictions for the majority of the input data, dynamic inference with a small enough t can reach the Super model's accuracy at a much lower computational cost and latency.", "Figure 3 illustrates the distribution of the energy scores across the input samples in the GLUE tasks.", "For each task, the distributions of the samples processed by the Super and the Swift models are plotted.", "As shown, the samples routed to the Super model tend to have lower (negative free) energy scores; these are indeed the out-of-distribution samples for the Swift.", "On the other hand, overall, higher scores are observed for the Swift distribution, that is, for the samples handled by the Swift only.", "For some tasks such as MRPC and QNLI, the Swift is shown to be highly capable of handling the majority of the input samples.", "This is also supported by the results in Table 1 and Figure 2, where 91% (for MRPC) and 75% (for QNLI) of the samples are accurately processed by the Swift.", "In contrast, for other datasets including RTE and MNLI, with a Swift ratio of less than 50%, most of the samples are hard for the Swift and are transferred to the Super model.", "Based on our experiments, the best results for our joint inference framework are achieved when the crossing point of the two distributions (highlighted in green in the figures) is chosen as the threshold t in Equation (9).", "In Sections 3.3.1 and 3.3.2, the possibility of using softmax and entropy scores instead of the energy score was theoretically analyzed.", "To support that analysis and also experimentally evaluate the performance of different routing mechanisms, an ablation study on GLUE is performed, which is presented in Table 2.", "In this study, we report the joint inference results based on softmax, entropy, and random scores (i.e., randomly distributing the samples between the Super and the Swift).", "Our experiments show that, compared to the random score, softmax and entropy can result in satisfactory performance in routing the samples.", "However, as also discussed in Sections 3.3.1 and 3.3.2, the energy score is still a better mechanism, with about 14% less FLOPs.", "Another potential mechanism is perplexity (Chen et al., 1998), but since it provides the same information as entropy, we did not run an extra experiment on it.", "The results with different Swift models, including T5-small (with 0.33x10^11 FLOPs) and T5-base (with 1.24x10^11 FLOPs), are also given in Table 2.", "Using these models as Swifts can lead to good performance on some tasks, but not all of them.", "For example, on SST2, joint inference with the T5-small and T5-base Swifts can respectively reach the Super's accuracy with 1.9X and 2.X less computation.", "In general, although these models are smaller and require less FLOPs, our results in Table 2 indicate that they perform worse than T5-large in our joint inference structure.", "In Figure 2, the trade-off curves for the different Swift models are shown for GLUE and SuperGLUE.", "Moreover, to show the effectiveness of the extra energy head on the Swift encoder, the E-LANG results based on the last linear layer of the Swift decoder are also given and compared in Table 2.
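The crossing-point threshold selection described above can be approximated from held-out score histograms; a hedged NumPy sketch (the binning heuristic is ours, not the paper's exact procedure):

```python
import numpy as np

def crossing_point_threshold(scores_super, scores_swift, bins=100):
    # Histogram both score populations on a shared grid and return the
    # first bin edge where the Swift density overtakes the Super density.
    lo = min(scores_super.min(), scores_swift.min())
    hi = max(scores_super.max(), scores_swift.max())
    edges = np.linspace(lo, hi, bins + 1)
    h_super, _ = np.histogram(scores_super, bins=edges, density=True)
    h_swift, _ = np.histogram(scores_swift, bins=edges, density=True)
    cross = int(np.argmax(h_swift > h_super))
    return 0.5 * (edges[cross] + edges[cross + 1])
```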
As reported, E-LANG empowered by the energy head on the Swift encoder outperforms the variant with the decoder's head in both FLOPs (36.8% less) and accuracy (0.7% better).", "As explained in Section 3.2.1, this shows the deep connection between the encoder's features, discriminative models, and the proposed routing mechanism via the energy head.", "We observed that E-LANG can achieve high performance even when applied to individually pre-trained Super and Swift models.", "However, more improvement can still be obtained by performing KD from the Super to the Swift model, especially during fine-tuning for downstream tasks.", "To study this, we apply the KD technique of (Sanh et al., 2019) to the Super and Swift models on some GLUE tasks.", "As summarized in Table 3, the Super model's accuracy on QNLI, SST2, and COLA is attained by distillation-based E-LANG with respectively 29.2%, 48.5%, and 14.3% less FLOPs than E-LANG without distillation.", "The results show the effectiveness of E-LANG combined with other compression techniques such as distillation.", "The trade-off curves for this experiment are provided in the supplementary materials.", "In this section, the proposed energy-based joint inference method is applied to the BERT architecture (Devlin et al., 2019) and compared with BERT-based SOTA in both fixed-size and dynamic inference.", "[Table 2: average FLOPs (x10^11) / accuracy (%) per GLUE task for the Super (11B) model and for joint inference with Random, Softmax, Entropy, and Energy routing (including T5-small, T5-base, decoder-head, and encoder-head Swift variants); the Energy (Encoder) setting gives the best average of 34.5 / 89.7, vs. 87.0 / 89.5 for the Super.]", "The majority of previous methods employ knowledge distillation and data augmentation techniques to train their student models.", "For a fair comparison, we follow the same practice and use the transformer distillation and augmentation strategies of TinyBERT (Jiao et al., 2020) to train and prepare our Swift model (i.e., BERT-Tiny with 1.2x10^9 FLOPs).", "Moreover, similar to the other works, we use BERT-Base (with 21.8x10^9 FLOPs) as our Super (i.e., teacher) model.
", "In Table 4, the comparison results with the baseline BERT-Base and SOTA on the GLUE benchmark are presented in terms of accuracy, FLOPs, and latency.", "Compared to the Super model, E-LANG delivers better accuracy on SST2 and RTE with 3.5X and 2.0X FLOPs speed-up, and the same accuracy on QNLI, MRPC, and QQP with 2.4X, 2.7X, and 7.0X FLOPs speed-up, respectively.", "On MNLI and COLA, 99.8% and 97.3% of the Super model's accuracy are achieved, but with an average FLOPs speed-up of 2.3X.", "On average, E-LANG outperforms the Super model with 0.1% higher accuracy, 3.2X less FLOPs, and 1.6X less latency.", "Compared with SOTA, our method achieves the best performance on all GLUE tasks except MRPC, for which SqueezeBERT outperforms all methods due to having a more accurate teacher (Iandola et al., 2020).", "There are some works, such as ELECTRA (Clark et al., 2020) and MobileBERT (Sun et al., 2020), that require less FLOPs than our method, but they only reach 95% of the baseline's accuracy.", "Among the other methods, GhostBERT (Huang et al., 2021) and DynaBERT (Hou et al., 2020) give the closest performance to the baseline, and even match ours on some tasks such as QNLI.", "However, on average, they still need about 30% more FLOPs on GLUE compared to E-LANG.", "The E-LANG accuracy vs. FLOPs trade-off curves compared to SOTA on some of the GLUE tasks are shown in Figure 4.", "The trade-off curves for all the tasks are reported in the supplementary materials.", "Among the SOTA methods presented in Table 4 and Figure 4, only DeeBERT (Xin et al., 2020), Length-Adaptive (Kim and Cho, 2021), and DynaBERT (Hou et al., 2020) fall into the category of dynamic inference, where a single model can operate at different trade-off points between accuracy and computational cost.", "The other approaches propose fixed-size smaller versions of BERT-Base, which require re-training for every trade-off point.", "To investigate the orthogonality of E-LANG with other methods, we integrate our energy-based joint inference strategy with DynaBERT, the SOTA in BERT-based adaptive inference.", "In other words, we analyze whether E-LANG can be added on top of other efficient methods to benefit both from their designs and from our approach.", "In this experiment, the DynaBERT configurations with the highest accuracy (width=0.75, depth=1.0) and the lowest FLOPs (width=0.5, depth=0.25) are respectively employed as the Super and Swift models in our framework.", "The corresponding joint inference results on MNLI, SST2, and QQP are reported in Table 5.
", "As observed, we match the DynaBERT Super's accuracy on MNLI and SST2 with 1.7X and 3.1X less FLOPs.", "For QQP, our method combined with DynaBERT even outperforms DynaBERT by 0.1% with a 2.6X FLOPs speed-up.", "In this paper, we introduced E-LANG, an energy-based joint inference approach that integrates Super and Swift language models to achieve efficient inference without sacrificing accuracy.", "Our method works with both encoder-only (e.g., BERT) and encoder-decoder (e.g., T5) architectures, and is applicable to both text classification and sequence-to-sequence problems.", "The proposed joint inference strategy was analyzed both theoretically and experimentally through an extensive set of experiments and ablation studies.", "Our results showed that E-LANG outperforms SOTA in both fixed-size and dynamic inference over different benchmarks such as GLUE and SuperGLUE.", "One future direction for this work is to apply E-LANG to multiple Super and Swift models of different sizes." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "method" ]
[ "Question-answering plays an important role in e-commerce as it allows potential customers to actively seek crucial information about products or services to help their purchase decision making.", "Inspired by the recent success of machine reading comprehension (MRC) on formal documents, this paper explores the potential of turning customer reviews into a large source of knowledge that can be exploited to answer user questions.", "We call this problem Review Reading Comprehension (RRC).", "To the best of our knowledge, no existing work has been done on RRC.", "In this work, we first build an RRC dataset called ReviewRC based on a popular benchmark for aspect-based sentiment analysis.", "Since ReviewRC has limited training examples for RRC (and also for aspect-based sentiment analysis), we then explore a novel post-training approach on the popular language model BERT to enhance the performance of fine-tuning of BERT for RRC.", "To show the generality of the approach, the proposed post-training is also applied to some other review-based tasks such as aspect extraction and aspect sentiment classification in aspect-based sentiment analysis.", "Experimental results demonstrate that the proposed post-training is highly effective 1 .", "For online commerce, question-answering (QA) serves either as a standalone application of customer service or as a crucial component of a dialogue system that answers user questions.", "Many intelligent personal assistants (such as Amazon Alexa and Google Assistant) support online shopping by allowing the user to speak directly to the assistants.", "One major hindrance for this mode of shopping is that such systems have limited capability to answer user questions about products (or 1 The datasets and code are available at https://www. cs.uic.edu/hxu/ . services), which are vital for customer decision making.", "As such, an intelligent agent that can automatically answer customers' questions is very important for the success of online businesses.", "Given the ever-changing environment of products and services, it is very hard, if not impossible, to pre-compile an up-to-date and reliable knowledge base to cover a wide assortment of questions that customers may ask, such as in factoid-based KB-QA (Xu et al., 2016; Fader et al., 2014; Kwok et al., 2001; Yin et al., 2015).", "As a compromise, many online businesses leverage community question-answering (CQA) (McAuley and Yang, 2016) to crowdsource answers from existing customers.", "However, the problem with this approach is that many questions are not answered, and if they are answered, the answers are delayed, which is not suitable for interactive QA.", "In this paper, we explore the potential of using product reviews as a large source of user experiences that can be exploited to obtain answers to user questions.", "Although there are existing studies that have used information retrieval (IR) techniques (McAuley and Yang, 2016; Yu and Lam, 2018) to find a whole review as the response to a user question, giving the whole review to the user is undesirable as it is quite time-consuming for the user to read it.", "Inspired by the success of Machine Reading Comphrenesions (MRC) (Rajpurkar et al., 2016, 2018), we propose a novel task called Review Reading Comprehension (RRC) as following.", "Problem Definition : Given a question q = ( q 1 , . . . , q m ) from a customer (or user) about a product and a review d = ( d 1 , . . . 
", "A sample laptop review is shown in Table 1.", "Table 1 — Questions: Q1: Does it have an internal hard drive? Q2: How large is the internal hard drive? Q3: Is the capacity of the internal hard drive OK? Review (answer spans marked A1-A3): Excellent value and a must buy for someone looking for a Macbook. You can't get any better than this price and it come with [A1: an internal disk drive]. All the newer MacBooks do not. Plus you get [A2: 500GB] which is also a [A3: great] feature. Also, the resale value on this will keep. I highly recommend you get one before they are gone.", "We can see that customers may not only ask factoid questions, such as the specs of some aspects of the laptop in the first and second questions, but also subjective or opinion questions about some aspects (the capacity of the hard drive), as in the third question.", "RRC poses some domain challenges compared to traditional MRC on Wikipedia, such as the need for rich product knowledge, informal text, and fine-grained opinions (there is almost no subjective content in Wikipedia articles).", "Research also shows that yes/no questions are very frequent for products with complicated specifications (McAuley and Yang, 2016; Xu et al., 2018b).", "To the best of our knowledge, no existing work has been done on RRC.", "This work first builds an RRC dataset called ReviewRC, using reviews from SemEval 2016 Task 5 (http://alt.qcri.org/semeval2016/task5/), which is a popular dataset for aspect-based sentiment analysis (ABSA) (Hu and Liu, 2004) in the domains of laptop and restaurant.", "We detail ReviewRC in Sec. 5.", "Given the wide spectrum of domains (types of products or services) in online businesses and the prohibitive cost of annotation, ReviewRC can only be considered to have a limited number of annotated examples for supervised training, which still leaves the domain challenges partially unresolved.", "This work adopts BERT (Devlin et al., 2018) as the base model as it achieves state-of-the-art performance on MRC (Rajpurkar et al., 2016, 2018).", "Although BERT aims to learn contextualized representations across a wide range of NLP tasks (to be task-agnostic), leveraging BERT alone still leaves the domain challenges unresolved (as BERT is trained on Wikipedia articles and has almost no understanding of opinion text), and it also introduces another challenge of task-awareness (the RRC task), called the task challenge.", "This challenge arises when the task-agnostic BERT meets the limited number of fine-tuning examples in ReviewRC (see Sec. 5) for RRC, which is insufficient to fine-tune BERT to ensure full task-awareness of the system (the end tasks in the original BERT paper typically use tens of thousands of examples to ensure that the system is task-aware).
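To make the RRC setting concrete, here is an illustrative ReviewRC example in SQuAD 1.1 style, built from the review of Table 1; the field names follow SQuAD, and the answer offset is computed rather than hard-coded:

```python
# Illustrative only: the question/answer pair comes from Table 1 above.
context = ("Excellent value and a must buy for someone looking for a Macbook. "
           "You can't get any better than this price and it come with an "
           "internal disk drive. All the newer MacBooks do not. Plus you get "
           "500GB which is also a great feature.")
answer = "500GB"
example = {
    "question": "How large is the internal hard drive?",
    "context": context,
    "answers": [{"text": answer, "answer_start": context.index(answer)}],
}
```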
", "To address all the above challenges, we propose a novel joint post-training technique that takes BERT's pre-trained weights as the initialization for basic language understanding (due to limited computational resources, it is impractical for us to pre-train BERT directly on reviews from scratch (Devlin et al., 2018)) and adapts BERT with both domain knowledge and task (MRC) knowledge before fine-tuning on the annotated data of the domain end task (RRC).", "This technique leverages knowledge from two sources: unsupervised domain reviews and supervised (yet out-of-domain) MRC data, where the former enhances domain-awareness and the latter strengthens MRC task-awareness (to simplify the writing, we refer to MRC as a general-purpose RC task on formal, non-review text and RRC as an end task specifically focused on reviews).", "As a general-purpose approach, we show that the proposed method can also benefit ABSA tasks such as aspect extraction (AE) and aspect sentiment classification (ASC).", "The main contributions of this paper are as follows.", "(1) It proposes the new problem of review reading comprehension (RRC).", "(2) To solve this new problem, an annotated dataset for RRC is created.", "(3) It proposes a general-purpose post-training approach to improve RRC, AE, and ASC.", "Experimental results demonstrate that the proposed approach is effective.", "Many datasets have been created for MRC from formally written and objective texts, e.g., Wikipedia (WikiReading (Hewlett et al., 2016), SQuAD (Rajpurkar et al., 2016, 2018), WikiHop (Welbl et al., 2018), DRCD (Shao et al., 2018), QuAC (Choi et al., 2018), HotpotQA (Yang et al., 2018)), news and other articles (CNN/Daily Mail (Hermann et al., 2015), NewsQA (Trischler et al., 2016), RACE (Lai et al., 2017)), fictional stories (MCTest (Richardson et al., 2013), CBT (Hill et al., 2015), NarrativeQA (Kocisky et al., 2018)), and general Web documents (MS MARCO (Nguyen et al., 2016), TriviaQA (Joshi et al., 2017), SearchQA (Dunn et al., 2017)).", "Also, CoQA (Reddy et al., 2018) is built from multiple sources, such as Wikipedia, Reddit, News, Mid/High School Exams, Literature, etc.", "To the best of our knowledge, MRC has not been used on reviews, which are primarily subjective.", "As such, we created a review-based MRC dataset called ReviewRC.", "Answers in ReviewRC are extractive (similar to SQuAD (Rajpurkar et al., 2016, 2018)) rather than abstractive (or generative) (as in MS MARCO (Nguyen et al., 2016) and CoQA (Reddy et al., 2018)).", "This is crucial because online businesses are typically cost-sensitive, and extractive answers written by humans avoid an AI agent generating incorrect answers beyond the contents of the reviews.", "Community QA (CQA) is widely adopted by online businesses (McAuley and Yang, 2016) to help users.", "However, since it solely relies on humans to give answers, it often takes a long time to get a question answered, or the question is not answered at all, as we discussed in the introduction.", "Although there exist studies that align reviews to questions as an information retrieval task (McAuley and Yang, 2016; Yu and Lam, 2018), giving a whole review to the user to read is time-consuming and not suitable for customer service settings that require interactive responses.
responses.", "Knowledge bases (KBs) (such as Freebase (Dong et al., 2015; Xu et al., 2016; Yao and Van Durme, 2014) or DBpedia (Lopez et al., 2010; Unger et al., 2012)) have been used for question answering (Yu and Lam, 2018).", "However, the ever-changing nature of online businesses, where new products and services appear constantly, makes it prohibitive to build a high-quality KB to cover all new products and services.", "Reviews also serve as a rich resource for sentiment analysis (Pang et al., 2002; Hu and Liu, 2004; Liu, 2012, 2015).", "Although document-level (review) sentiment classification may be considered as a solved problem (given ratings are largely available), aspect-based sentiment analysis (ABSA) is still an open challenge, where alleviating the cost of human annotation is also a major issue.", "ABSA aims to turn unstructured reviews into structured fine-grained aspects (such as the battery of a laptop) and their associated opinions (e.g., good battery is positive about the aspect battery).", "Two important tasks in ABSA are aspect extraction (AE) and aspect sentiment classification (ASC) (Hu and Liu, 2004), where the former aims to extract aspects (e.g., battery) and the latter targets to identify the polarity for a given aspect (e.g., positive for battery ).", "Recently, supervised deep learning models dominate both tasks (Wang et al., 2016, 2017; Xu et al., 2018a; Tang et al., 2016; He et al., 2018) and many of these models use handcrafted features, lexicons, and complicated neural network architectures to remedy the insufficient training examples from both tasks.", "Although these approaches may achieve better performances by manually injecting human knowledge into the model, human baby-sat models may not be intelligent enough 6 and automated representation learning from review corpora is always preferred (Xu et al., 2018a; He et al., 2018).", "We push forward this trend with the recent advance in pre-trained language models from deep learning (Peters et al., 2018; Howard and Ruder, 2018; Devlin et al., 2018; Radford et al., 2018a,b).", "Although it is practical to train domain word embeddings from scratch on large-scale review corpora (Xu et al., 2018a), it is impractical to train language models from scratch with limited computational resources.", "As such, we show that it is practical to adapt language models pre-trained from formal texts to domain reviews.", "In this section, we briefly review BERT and derive its fine-tuning formulation on three (3) review-based end tasks.", "BERT is one of the key innovations in the recent progress of contextualized representation learning (Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018a; Devlin et al., 2018).", "The idea behind the progress is that even though the word embedding (Mikolov et al., 2013; Pennington et al., 2014) layer (in a typical neural network for NLP) is trained from large-scale corpora, training a wide variety of neural architectures that encode contextual representations only from the limited supervised data on end tasks is insufficient.", "Unlike ELMo (Peters et al., 2018) and ULMFiT 6 http://www.incompleteideas.net/ IncIdeas/BitterLesson.html Figure 1: Overview of BERT settings for review reading comprehension (RRC), aspect extraction (AE) and aspect sentiment classification (ASC).", "(Howard and Ruder, 2018) that are intended to provide additional features for a particular architecture that bears human's understanding of the end task, BERT adopts a fine-tuning approach that requires almost 
no specific architecture for each end task.", "This is desired as an intelligent agent should minimize the use of prior human knowledge in the model design.", "Instead, it should learn such knowledge from data.", "BERT has two parameter intensive settings: BERTBASE : 12 layers, 768 hidden dimensions and 12 attention heads (in transformer) with the total number of parameters, 110M; BERTLARGE : 24 layers, 1024 hidden dimensions and 16 attention heads (in transformer) with the total number of parameters, 340M.", "We only extend BERT with one extra task-specific layer and fine-tune BERT on each end task.", "We focus on three (3) review-based tasks: review reading comprehension (RRC), aspect extraction (AE) and aspect sentiment classification (ASC).", "The inputs/outputs settings are depicted in Figure 1 and detailed in the following subsections.", "Following the success of SQuAD (Rajpurkar et al., 2016) and BERT's SQuAD implementation, we design review reading comprehension as follows.", "Given a question q = ( q 1 , . . . , q m ) asking for an answer from a review d = ( d 1 , . . . , d n ) , we formulate the input as a sequence x = ( [CLS] , q 1 , . . . , q m , [SEP] , d 1 , . . . , d n , [SEP] ) , where [CLS] is a dummy token not used for RRC and [SEP] is intended to separate q and d .", "Let BERT ( ) be the pre-trained (or post-trained as in the next section) BERT model.", "We first obtain the hidden representation as h = BERT ( x ) R r h | x | , where | x | is the length of the input sequence and r h is the size of the hidden dimension.", "Then the hidden representation is passed to two separate dense layers followed by softmax functions: l 1 = softmax ( W 1 h + b 1 ) and l 2 = softmax ( W 2 h + b 2 ) , where W 1 , W 2 R r h and b 1 , b 2 R .", "The softmax is applied along the dimension of the sequence.", "The output is a span across the positions in d (after the [SEP] token of the input), indicated by two pointers (indexes) s and e computed from l 1 and l 2 : s = arg max Idx [SEP] <s< | x | ( l 1 ) and e = arg max s e< | x | ( l 2 ) , where Idx [SEP] is the position of token [SEP] (so the pointers will never point to tokens from the question).", "As such, the final answer will always be a valid text span from the review as a = ( d s , . . . , d e ) .", "where I ( s ) and I ( e ) are one-hot vectors representing the ground truths of pointers.", "RRC may suffer from the prohibitive cost of annotating large-scale training data covering a wide range of domains.", "And BERT severely lacks two kinds of prior knowledge: (1) large-scale domain knowledge (e.g., about a specific product category), and (2) task-awareness knowledge (MRC/RRC in this case).", "We detail the technique of jointly incorporating these two types of knowledge in Sec.", "4. 3.3 Aspect Extraction As a core task in ABSA, aspect extraction (AE) aims to find aspects that reviewers have expressed opinions on (Hu and Liu, 2004).", "In supervised settings, it is typically modeled as a sequence labeling task, where each token from a sentence is labeled as one of { Begin , Inside , Outside } .", "A continuous chunk of tokens that are labeled as one B and followed by zero or more I s forms an aspect.", "The input sentence with m words is constructed as x = ( [CLS] , x 1 , . . . 
", "3.3 Aspect Extraction: As a core task in ABSA, aspect extraction (AE) aims to find aspects that reviewers have expressed opinions on (Hu and Liu, 2004).", "In supervised settings, it is typically modeled as a sequence labeling task, where each token in a sentence is labeled as one of {Begin, Inside, Outside}.", "A continuous chunk of tokens labeled as one B followed by zero or more Is forms an aspect.", "The input sentence with m words is constructed as x = ([CLS], x_1, ..., x_m, [SEP]).", "After computing h = BERT(x), we apply a dense layer and a softmax at each position of the sequence: l_3 = softmax(W_3 h + b_3), where W_3 ∈ R^{3 x r_h} and b_3 ∈ R^3 (3 is the total number of labels (BIO)).", "The softmax is applied along the dimension of labels at each position, so l_3 ∈ [0, 1]^{3 x |x|}.", "The labels are predicted by taking the argmax at each position of l_3, and the loss function is the averaged cross entropy across all positions of a sequence.", "AE is a task that requires intensive domain knowledge (e.g., knowing that screen is a part of a laptop).", "A previous study (Xu et al., 2018a) has shown that incorporating domain word embeddings greatly improves the performance.", "Adapting BERT's general language models to domain reviews is crucial for AE, as shown in Sec. 5.", "3.4 Aspect Sentiment Classification: As a task subsequent to AE, aspect sentiment classification (ASC) aims to classify the sentiment polarity (positive, negative, or neutral) expressed on an aspect extracted from a review sentence.", "There are two inputs to ASC: an aspect and a review sentence mentioning that aspect.", "Consequently, ASC is close to RRC in that the question is just an aspect and the review is just a review sentence, but ASC only needs to output a polarity class instead of a textual span.", "Let x = ([CLS], q_1, ..., q_m, [SEP], d_1, ..., d_n, [SEP]), where q_1, ..., q_m is now an aspect (with m tokens) and d_1, ..., d_n is a review sentence containing that aspect.", "After h = BERT(x), we leverage the representation of [CLS], h_[CLS], which is the aspect-aware representation of the whole input.", "The distribution of polarity is predicted as l_4 = softmax(W_4 h_[CLS] + b_4), where W_4 ∈ R^{3 x r_h} and b_4 ∈ R^3 (3 is the number of polarities).", "The softmax is applied along the dimension of labels on [CLS], so l_4 ∈ [0, 1]^3.", "The training loss is the cross entropy on the polarities.
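A matching sketch of the AE and ASC heads just described, again on top of h = BERT(x); the class names are ours:

```python
import torch.nn as nn

class AspectTagger(nn.Module):
    # AE head: per-position logits over the 3 BIO labels.
    def __init__(self, r_h: int):
        super().__init__()
        self.w3 = nn.Linear(r_h, 3)

    def forward(self, hidden):          # [batch, |x|, r_h] -> [batch, |x|, 3]
        return self.w3(hidden)

class PolarityClassifier(nn.Module):
    # ASC head: 3 polarity logits from the [CLS] representation h_[CLS].
    def __init__(self, r_h: int):
        super().__init__()
        self.w4 = nn.Linear(r_h, 3)

    def forward(self, hidden):          # [CLS] is at position 0
        return self.w4(hidden[:, 0, :])
```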
", "As a summary of these tasks, insufficient supervised training data significantly limits the performance gain across all 3 review-based tasks.", "Although BERT's pre-trained weights strongly boost the performance of many other NLP tasks on formal texts, we observe in Sec. 5 that BERT's weights alone only result in limited gains, or worse performance, compared with existing baselines.", "In the next section, we introduce the post-training step to boost the performance of all 3 tasks.", "As discussed in the introduction, fine-tuning BERT directly on an end task that has limited tuning data faces both the domain challenges and the task-awareness challenge.", "To enhance the performance of RRC (and also AE and ASC), we need to reduce the bias introduced by non-review knowledge (e.g., from Wikipedia corpora) and fuse domain knowledge (DK) (from unsupervised domain data) and task knowledge (from a supervised MRC task with out-of-domain data).", "Given that MRC is a general task with answers covering almost all document contents, a large-scale MRC supervised corpus may also benefit AE and ASC.", "Ultimately, we aim for a general-purpose post-training strategy that can exploit the above two kinds of knowledge for end tasks.", "To post-train on domain knowledge, we leverage the two novel pre-training objectives from BERT: masked language model (MLM) and next sentence prediction (NSP) (the BERT paper refers to a 'sentence' as a piece of text with one to many natural language sentences).", "The former predicts randomly masked words and the latter detects whether the two sides of the input are from the same document or not.", "A training example is formulated as ([CLS], x_{1:j}, [SEP], x_{j+1:n}, [SEP]), where x_{1:n} is a document (with randomly masked words) split into two sides x_{1:j} and x_{j+1:n}, and [SEP] separates the two.", "MLM is crucial for injecting review domain knowledge and for alleviating the bias of the knowledge from Wikipedia.", "For example, in the Wikipedia domain, BERT may learn to guess the [MASK] in 'The [MASK] is bright as the sun'.", "But in a laptop domain, it could be 'screen'.", "Further, if the [MASK]ed word is an opinion word, as in 'The touch screen is [MASK]', this objective challenges BERT to learn the representations of fine-grained opinion words like 'great' or 'terrible' for [MASK].", "The objective of NSP further encourages BERT to learn contextual representations beyond the word level.", "In the context of reviews, NSP formulates a task of artificial review prediction, where a negative example is an original review and a positive example is a synthesized fake review formed by combining two different reviews.", "This task exploits the rich relationships between the two sides of the input, such as whether the two sides have the same rating or not (when two reviews with different ratings are combined as a positive example), or whether the two sides target the same product or not (when two reviews of different products are merged as a positive example).", "In summary, these two objectives encourage the model to learn a myriad of fine-grained features for potential end tasks.", "Letting the loss function of MLM be L_MLM and the loss function of next-text-piece prediction be L_NSP, the total loss of domain knowledge post-training is L_DK = L_MLM + L_NSP.
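A toy sketch of constructing one artificial-review-prediction pair as described above; the 50/50 sampling and character-level halving are illustrative assumptions, not the paper's exact pre-processing:

```python
import random

def make_dk_pair(reviews):
    # Returns (left, right, label): label 0 for an original review split in
    # half, label 1 for a synthesized 'fake' review merging two reviews.
    a = random.choice(reviews)
    if random.random() < 0.5:
        return a[:len(a) // 2], a[len(a) // 2:], 0
    b = random.choice(reviews)
    return a[:len(a) // 2], b[len(b) // 2:], 1
```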
", "To post-train BERT on task-aware knowledge, we use SQuAD (1.1), a popular large-scale MRC dataset.", "Although BERT gains great success on SQuAD, this success is based on SQuAD's huge number of training examples (100,000+).", "This amount is large enough to ameliorate the flaws of BERT, which has almost no questions on the left side and no textual span predictions based on both the question and the document on the right side.", "However, a small amount of fine-tuning examples is not sufficient to turn BERT to be more task-aware, as shown in Sec. 5.", "We let the loss on SQuAD be L_MRC, which is in a similar setting to the loss L_RRC for RRC.", "As a result, the joint loss of post-training is defined as L = L_DK + L_MRC.", "One major issue of post-training on such a loss is the prohibitive cost in GPU memory usage.", "Instead of updating parameters over a whole batch, we divide a batch into multiple sub-batches and accumulate gradients on those sub-batches before updating parameters.", "This allows a smaller sub-batch to be consumed in each iteration.", "Algorithm 1 (Post-training Algorithm; Input: D_DK, one batch of DK data; D_MRC, one batch of MRC data; u, the number of sub-batches) describes one training step: it takes one batch of domain knowledge (DK) data D_DK and one batch of MRC training data D_MRC and updates the parameters of BERT.", "In line 1, it first initializes the gradients of all parameters to 0 to prepare the gradient computation.", "Then, in lines 2 and 3, each batch of training data is split into u sub-batches.", "Lines 4-7 spread the calculation of gradients over u iterations, where the sub-batches of each iteration are expected to fit into GPU memory.", "In line 5, it computes the partial joint loss L_partial of the two sub-batches D_{DK,i} and D_{MRC,i} of the i-th iteration through a forward pass.", "Note that the summation of the two sub-batches' losses is divided by u, which compensates for the scale change introduced by the gradient accumulation in line 6.", "Line 6 accumulates the gradients produced by backpropagation from the partial joint loss.", "To this end, accumulating the gradients u times is equivalent to computing the gradients on the whole batch once.", "But the sub-batches and their intermediate hidden representations of the i-th forward pass can be discarded to save memory space.", "Only the gradients are kept throughout all iterations and are used to update the parameters (based on the chosen optimizer) in line 8.", "We detail the hyper-parameter settings of this algorithm in Sec. 5.3.
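A PyTorch-style sketch of Algorithm 1, assuming hypothetical `dk_loss` and `mrc_loss` closures that compute L_DK and L_MRC on a sub-batch; the comments map to the algorithm's line numbers:

```python
def split(batch, u):
    # Evenly partition a list-like batch into u sub-batches (lines 2-3).
    k = len(batch) // u
    return [batch[i * k:(i + 1) * k] for i in range(u)]

def post_training_step(model, optimizer, d_dk, d_mrc, u, dk_loss, mrc_loss):
    optimizer.zero_grad()                           # line 1: zero all gradients
    for dk_i, mrc_i in zip(split(d_dk, u), split(d_mrc, u)):  # lines 4-7
        # Line 5: partial joint loss of the i-th sub-batches; dividing by u
        # compensates for the scale change from accumulation (line 6).
        loss = (dk_loss(model, dk_i) + mrc_loss(model, mrc_i)) / u
        loss.backward()                             # line 6: accumulate grads
    optimizer.step()                                # line 8: parameter update
```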
5.3.", "(RQs) in the experiment: RQ1: what is the performance gain of post-training for each review-based task, with respect to the state-of-the-art performance?", "RQ2: what is the performance of BERT's pre-trained weights on three review-based tasks without any domain and task adaptation?", "RQ3: upon ablation studies of separate domain knowledge post-training and task-awareness post-training, what is their respective contribution to the whole post-training performance gain?", "As there are no existing datasets for RRC and to be consistent with existing research on sentiment", "analysis, we adopt the laptop and restaurant reviews of SemEval 2016 Task 5 as the source to create datasets for RRC.", "We do not use SemEval 2014 Task 4 or SemEval 2015 Task 12 because these datasets do not come with the review(document)-level XML tags to recover whole reviews from review sentences.", "We keep the split of training and testing of the SemEval 2016 Task 5 datasets and annotate multiple QAs for each review following the way of constructing QAs for the SQuAD 1.1 datasets (Rajpurkar et al., 2016).", "To make sure our questions are close to real-world questions, 2 annotators are first exposed to 400 QAs from CQA (under the laptop category in Amazon.com or popular restaurants in Yelp.com) to get familiar with real questions.", "Then they are asked to read reviews and independently label textual spans and ask corresponding questions when they feel the textual spans contain valuable information that customers may care about.", "The textual spans are labeled to be as concise as possible but still human-readable.", "Note that the annotations for sentiment analysis tasks are not exposed to annotators to avoid biased annotation on RRC.", "Since it is unlikely that the two annotators can label the same QAs (the same questions with the same answer spans), they further mutually check each other's annotations and disagreements are discussed until agreements are reached.", "Annotators are encouraged to label as many questions as possible from testing reviews to get more test examples.", "A training review is encouraged to have 2 questions (training examples) on average to have good coverage of reviews.", "The annotated data is in the format of SQuAD 1.1 (Rajpurkar et al., 2016) to ensure compatibility with existing implementations of MRC models.", "The statistics of the RRC dataset (ReviewRC) are shown in Table", "2. Since SemEval datasets do not come with a validation set, we further split 20% of reviews from the training set for validation.", "Statistics of datasets for AE and ASC are given in Table", "3. 
", "For AE, we choose SemEval 2014 Task 4 for laptop and SemEval 2016 Task 5 for restaurant, to be consistent with (Xu et al., 2018a) and other previous works.", "For ASC, we use SemEval 2014 Task 4 for both laptop and restaurant, as existing research frequently uses this version.", "We use 150 examples from the training set of each of these datasets for validation.", "For domain knowledge post-training, we use Amazon laptop reviews (He and McAuley, 2016) and Yelp Dataset Challenge reviews (https://www.yelp.com/dataset/challenge).", "For laptop, we filtered out reviewed products that appear in the validation/test reviews to avoid training bias for the test data (Yelp reviews do not have this issue, as the source reviews of SemEval are not from Yelp).", "Since the number of laptop reviews is small, we choose a duplicate factor of 5 (each review generates about 5 training examples) during BERT data pre-processing.", "This gives us 1,151,863 post-training examples for laptop domain knowledge.", "For the restaurant domain, we use Yelp reviews from the restaurant categories that the SemEval reviews also belong to (Xu et al., 2018a).", "We choose 700K reviews to ensure the corpus is large enough to generate training examples (with a duplicate factor of 1) to cover all the post-training steps that we can afford (discussed in Section 5.3); we expect that using more reviews would give even better results, but we limit the number of reviews based on our computational power.", "This gives us 2,677,025 post-training examples for restaurant domain knowledge learning.", "For MRC task-awareness post-training, we leverage SQuAD 1.1 (Rajpurkar et al., 2016), which comes with 87,599 training examples from 442 Wikipedia articles.", "We adopt BERT_BASE (uncased) as the basis for all experiments (we expect BERT_LARGE to have better performance but leave that to future work due to limited computational power).", "Since post-training may take a large footprint on GPU memory (as BERT pre-training does), we leverage FP16 computation to reduce the size of both the model and the hidden representations of the data.", "We set a static loss scale of 2 in FP16, which avoids any over/under-flow of floating point computation.", "The maximum sequence length of post-training is set to 320, with a batch size of 16 for each type of knowledge.", "The number of sub-batches u is set to 2, which is enough to fit each sub-batch iteration into a GPU memory of 11 GB.", "We use the Adam optimizer and set the learning rate to 3e-5.", "We train 70,000 steps for the laptop domain and 140,000 steps for the restaurant domain.
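The post-training hyper-parameters above, collected into a single illustrative config dict (the key names are ours, not the released code's):

```python
# Values taken from the settings stated in this section.
post_training_config = {
    "base_model": "bert-base-uncased",
    "fp16": True,
    "static_loss_scale": 2,
    "max_seq_length": 320,
    "batch_size_per_knowledge": 16,
    "num_sub_batches": 2,          # u in Algorithm 1
    "optimizer": "adam",
    "learning_rate": 3e-5,
    "steps": {"laptop": 70000, "restaurant": 140000},
}
```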
"As BERT outperforms existing open-source MRC baselines by a large margin, we do not intend to exhaust existing implementations but focus on the variants of BERT introduced in this paper.", "DrQA is a baseline built from the document reader of DrQA (Chen et al., 2017) (https://github.com/facebookresearch/DrQA).", "We adopt this baseline because its simple implementation aids reproducibility.", "We run the document reader with random initialization and train it directly on ReviewRC.", "We use the default hyper-parameter settings for this baseline, except the number of epochs, which is set to 60 for better convergence.", "DrQA+MRC is derived from the above baseline using the official pre-trained weights on SQuAD.", "We fine-tune the document reader on ReviewRC.", "We expand the vocabulary of the embedding layer of the pre-trained model on ReviewRC, since reviews may contain words that are rare in Wikipedia, and keep the other hyper-parameters at their defaults.", "We report only the best-performing baselines (to the best of our knowledge) for brevity.", "DE-CNN (Xu et al., 2018a) achieves the state of the art for AE by leveraging domain embeddings.", "MGAN (Li et al., 2018) achieves the state of the art for ASC on SemEval 2014 Task 4.", "Lastly, to answer RQ1, RQ2, and RQ3, we have the following BERT variants.", "BERT directly uses the vanilla pre-trained weights and fine-tunes on all 3 end tasks.", "We use this baseline to answer RQ2 and show that BERT's pre-trained weights alone have limited performance gains on review-based tasks.", "BERT-DK post-trains BERT's weights only on domain knowledge (reviews) and fine-tunes on the 3 end tasks.", "We use BERT-DK and the following BERT-MRC to answer RQ3.", "BERT-MRC post-trains BERT's weights on SQuAD 1.1 and then fine-tunes on the 3 end tasks.", "BERT-PT (the proposed method) post-trains BERT's weights using the joint post-training algorithm in Section 4 and then fine-tunes on the 3 end tasks.", "To be consistent with existing research on MRC, we use the same evaluation script as SQuAD 1.1 (Rajpurkar et al., 2016) for RRC, which reports Exact Match (EM) and F1 scores.", "EM requires the answers to have an exact string match with the human-annotated answer spans.", "The F1 score is the average of the F1 scores of individual answers, which is typically higher than EM and is the major metric.", "Each individual F1 score is the harmonic mean of precision and recall, computed from the number of overlapping words between the predicted answer and the human-annotated answers.", "For AE, we use the standard evaluation scripts that come with the SemEval datasets and report the F1 score.", "For ASC, we compute both accuracy and Macro-F1 over the 3 polarity classes, where Macro-F1 is the major metric, as the imbalanced classes introduce biases on accuracy.", "To be consistent with existing research (Tang et al., 2016), examples belonging to the conflict polarity are dropped due to their very small number.", "We set the maximum number of epochs to 4 for the BERT variants, though most runs converge within just 2 epochs.", "Results are reported as averages over 9 runs (9 different random seeds for random batch generation); we note that the 5 runs adopted by existing research still show too high a variance for a fair comparison.", "5.6 Result Analysis", "The results of RRC, AE and ASC are shown in Tables 4, 5 and 6, respectively.", "To answer RQ1, we observed that the proposed joint post-training (BERT-PT) achieves the best performance over all tasks in all domains, which shows the benefit of having both types of knowledge.", "To answer RQ2, we found to our surprise that the vanilla pre-trained weights of BERT do not work well for review-based tasks, although they achieve state-of-the-art results on many other NLP tasks (Devlin et al., 2018).", "This justifies the need to adapt BERT to review-based tasks.", "To answer RQ3, we noticed that the roles of domain knowledge and task knowledge vary for different tasks and domains.", "For RRC, we found that the performance gain of BERT-PT mostly comes from task-awareness (MRC) post-training (as indicated by BERT-MRC).", "The domain knowledge helps more for restaurant than for laptop.", "We suspect the reason is that certain types of knowledge (such as specifications) of laptops are already present in Wikipedia, whereas Wikipedia has little knowledge about restaurants.", "We further investigated the examples improved by BERT-MRC and found that the boundaries of spans (especially short spans) were greatly improved.",
"For AE, we found that the large performance boost comes mostly from domain-knowledge post-training, which indicates that contextualized representations of domain knowledge are very important for AE.", "BERT-MRC shows almost no improvement on restaurant, which suggests that Wikipedia may contain little knowledge about restaurant aspects.", "We suspect that the improvements on laptop come from the fact that many answer spans in SQuAD are noun terms, which bear a closer relationship to laptop aspects.", "For ASC, we observed that large-scale annotated MRC data is very useful.", "We suspect the reason is that ASC can be interpreted as a special MRC problem, where all questions are about the polarity of a given aspect.", "MRC training data may help BERT to understand the input format of ASC, given their similar input formulations.", "Again, domain-knowledge post-training also helps ASC.", "We further investigated the errors made by BERT-PT over the 3 tasks.", "The errors on RRC mainly come from boundaries of spans that are not concise enough and from incorrectly located spans that have certain nearby words related to the question.", "We believe that precisely understanding a user's experience is challenging with only domain post-training, given the limited help from the RRC data and no help from the Wikipedia data.", "For AE, errors mostly come from annotation inconsistency and the boundaries of aspects (e.g., apple OS is predicted as OS).", "The restaurant domain suffers from rare aspects such as the names of dishes.", "ASC tends to have more errors, as the decision boundary between the negative and neutral examples is unclear (e.g., even annotators may not be sure whether the reviewer shows no opinion or a slightly negative opinion when mentioning an aspect).", "Also, BERT-PT has trouble dealing with a single sentence containing two opposite opinions (The screen is good but not for windows.).", "We believe that such training examples are rare.", "We proposed a new task called review reading comprehension (RRC) and investigated the possibility of turning reviews into a valuable resource for answering user questions.", "We adopted BERT as our base model and proposed a joint post-training approach to enhancing both the domain and task knowledge.", "We further explored the use of this approach in two other review-based tasks: aspect extraction and aspect sentiment classification.", "Experimental results show that the post-training approach before fine-tuning is effective.", "Bing Liu's work was partially supported by the National Science Foundation (NSF IIS 1838770) and by a research gift from Huawei." ]
[ "abstain", "objective", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "objective", "other", "other", "method", "method", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "other", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "other" ]
[ "Early exit mechanism aims to accelerate the inference speed of large-scale pre-trained language models.", "The essential idea is to exit early without passing through all the inference layers at the inference stage.", "To make accurate predictions for downstream tasks, the hierarchical linguistic information embedded in all layers should be jointly considered.", "However, much of the research up to now has been limited to use local representations of the exit layer.", "Such treatment inevitably loses information of the unused past layers as well as the high-level features embedded in future layers, leading to sub-optimal performance.", "To address this issue, we propose a novel Past-Future method to make comprehensive predictions from a global perspective.", "We first take into consideration all the linguistic information embedded in the past layers and further engage the future information which is originally inaccessible for predictions.", "Extensive experiments demonstrate that our method outperforms previous early exit methods by a large margin, yielding better and robust performance 1 .", "Pre-trained language models (PLMs), e.g., BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019) and XLNet (Yang et al., 2019), have obtained remarkable success in a wide range of NLP tasks.", "Despite their impressive performance, PLMs are usually associated with large memory requirement and high computational cost.", "Such drawbacks slow down the inference and further encumber the application of PLMs in the scenarios where inference time and computation budget are restricted.", "To address this issue, a growing number of studies focusing on improving model efficiency have Equal contribution 1 The code is available at https://github.com/ lancopku/Early-Exit emerged recently.", "Particularly, Kaya et al. 
"In light of this observation, an increasing amount of work seeks various early exit methods, the basic idea of which is to exit early without passing through the entire model during inference.", "Concretely, for NLP tasks, they couple branch classifiers with each layer of the pre-trained language model and stop forward propagation at an intermediate layer.", "The current branch classifier then makes a prediction based on the representation of the token that serves as the aggregated sequence representation for classification tasks; this representation is referred to as the state of the layer in this work.", "However, existing work on early exit has two major drawbacks.", "First, existing work (Xin et al., 2020; Zhou et al., 2020) uses only local states in the early exit framework.", "It inevitably loses valuable features that are captured by the passed layers but ignored for prediction, leading to less reliable prediction results.", "Moreover, these methods abandon the potentially useful features captured by the future layers that have not been passed, which may hurt performance on instances requiring the high-level features embedded in the deep layers.", "Consequently, their performance dramatically declines when inference exits earlier for a higher speed-up ratio.", "These two major drawbacks hinder the progress of early exit research and motivate us to develop a new mechanism that uses the hierarchical linguistic information embedded in all layers (Jawahar et al., 2019) from a global perspective.", "However, up to now, a global early exit mechanism has remained an under-explored and challenging problem.", "We extend the existing methods to their corresponding global versions and find that naive global strategies only result in poor performance.", "Meanwhile, the future states are originally inaccessible in the early exit framework, which also remains a bottleneck for a global prediction considering both past and future states.", "In this paper, we focus on the aforementioned problems and first put into practice a global Past-Future early exit mechanism.", "The term global is two-fold: (1) instead of using one or several local state(s) for prediction as in previous work, all the available past states are effectively incorporated in our method; (2) furthermore, to grasp the features embedded in the deep layers, the originally inaccessible future states are approximated by imitation learning and are also engaged for prediction.", "The comparison of the previous method and our method is illustrated in Figure 1.",
"By combining both past and future states, our model is able to make more accurate predictions for downstream tasks.", "Extensive experiments reveal that the proposal significantly outperforms previous early exit methods.", "Particularly, it surpasses the previous methods by a large margin when the speed-up ratio is relatively high.", "In addition, extensive experiments with different pre-trained language models as backbones demonstrate consistent improvement over the baseline methods, which verifies the generality of our method.", "To summarize, our contributions are as follows: We propose a set of global strategies which effectively incorporate all available states and achieve better performance compared to the existing naive global strategies.", "Our early exit method is the first to utilize the future states, which are originally inaccessible at the inference stage, enabling more comprehensive global predictions.", "Experiments show that our proposal achieves better performance compared to the previous state-of-the-art early exit methods.", "Large-scale pre-trained language models (Devlin et al., 2019; Liu et al., 2019) based on the Transformer (Vaswani et al., 2017) architecture demonstrate superior performance in various NLP tasks.", "However, the impressive performance rests on massive parameters, leading to large memory requirements and computational cost during inference.", "To overcome this bottleneck, a growing number of studies work on improving the efficiency of over-parameterized pre-trained language models.", "Knowledge distillation (Hinton et al., 2015; Turc et al., 2019; Jiao et al., 2019; Li et al., 2020a) compacts the model architecture to obtain a smaller model that remains static across all instances at the inference stage.", "Sanh et al. (2019) focus on reducing the number of layers, since their investigation reveals that variations in the hidden size have a smaller impact on computational efficiency.", "Sun et al. (2019) learn from multiple intermediate layers of the teacher model for incremental knowledge extraction, instead of learning only from the last hidden representations.", "Further, Wang et al. (2020) design elaborate techniques to drive the student model to mimic the self-attention module of the teacher model.", "Xu et al. (2020) compress the model by progressive module replacing, showing a new perspective on model compression.", "However, these static model compression methods treat instances requiring different computational costs without distinction.", "Moreover, they have to distill a model from scratch to meet each varying speed-up ratio requirement.", "To meet different constraints for acceleration, another line of work studies instance-adaptive methods that adjust the number of executed layers for different instances.", "Li et al. (2020b) select models of different sizes depending on the difficulty of the input instance.", "Besides, early exit is a practical method to adaptively accelerate inference and was first proposed for computer vision tasks (Kaya et al., 2019; Teerapittayanon et al., 2016).", "Elbayad et al. (2020); Xin et al. (2020); Schwartz et al. (2020) follow the essential idea and leverage the method in NLP tasks.", "To prevent errors from one single classifier, Zhou et al. (2020) make the model stop inference when a cross-layer consistent prediction is achieved.",
"However, research on the subject has mostly been restricted to using only the local states around the exit layer.", "We first introduce the strategies to incorporate multiple states and the imitation learning method for generating approximations of future states.", "Then we introduce the merging gate that adaptively fuses past and future states.", "At last, we show the training process and the exit condition during inference.", "Existing work (Xin et al., 2020) focuses on making the exit decision based on a single branch classifier.", "The consequent unreliable results motivate the recent advance (Zhou et al., 2020) that uses consecutive states to improve accuracy and robustness.", "However, the model prediction is still limited to several local states.", "In contrast, we investigate how to incorporate all the past states from a global perspective.", "The existing strategy using consecutive consistent prediction labels can be easily extended to a global version that counts the majority of the predicted labels, which we regard as a voting strategy.", "Another alternative is the commonly-used ensemble strategy that averages the output probabilities for prediction.", "Besides these naive solutions, we explore the following strategies to integrate multiple states into a single one.", "Max-Pooling: The max-pooling operation is performed over all available states, yielding the integrated state.", "Avg-Pooling: The average-pooling operation is performed over all available states, yielding the integrated state.", "Attn-Pooling: Attentive pooling takes the weighted sum of all available states as the integrated state.", "The attention weights are computed with the last state as the query.", "Concatenation: All available states are concatenated and then fed into a linear transformation layer to obtain the compressed state.", "Sequential Neural Network: All available states are sequentially fed into an LSTM, and the hidden output of the last time-step is regarded as the integrated state.", "Formally, the state of the $i$-th layer is denoted as $s_i$.", "When forward propagation proceeds to the $i$-th intermediate layer, all the past states $s_{1:i}$ are incorporated into a global past state $s_p$: $s_p = G(s_{1:i})$ (1), where $G(\cdot)$ refers to one of the state incorporation strategies.",
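A minimal sketch of the parameter-free variants of the incorporation function $G(\cdot)$ in Eq. (1) follows; the Concatenation and LSTM strategies additionally need learned modules sized to the number of layers, so they are omitted here.

```python
import torch

def incorporate_states(states: torch.Tensor, strategy: str = "avg") -> torch.Tensor:
    """Fold the past states s_1..s_i (shape: (i, hidden)) into one global
    past state s_p, as in Eq. (1)."""
    if strategy == "max":    # Max-Pooling over the layer dimension
        return states.max(dim=0).values
    if strategy == "avg":    # Avg-Pooling over the layer dimension
        return states.mean(dim=0)
    if strategy == "attn":   # Attn-Pooling with the last state as the query
        weights = torch.softmax(states @ states[-1], dim=0)
        return weights @ states
    raise ValueError(f"unknown strategy: {strategy}")

states = torch.randn(6, 768)              # states s_1..s_6 of a 768-dim model
s_p = incorporate_states(states, "attn")  # global past state, shape (768,)
```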
"Existing work on early exit stops inference at an intermediate layer and ignores the underlying valuable features captured by the future layers.", "Such treatment is partly rationalized by the recent claim (Kaya et al., 2019) that shallow layers are adequate to make a correct prediction.", "However, Jawahar et al. (2019) reveal that pre-trained language models capture a hierarchy of linguistic information from the lower to the upper layers, e.g., the lower layers learn surface or syntactic features while the upper layers capture high-level information such as semantic features.", "We hypothesize that some instances not only rely on syntactic features but also require semantic features.", "It is actually undesirable to only consider the features captured by shallow layers.", "Therefore, we propose to take advantage of both past and future states.", "Normally, we can directly fetch the past states, while using future information is intractable, since the future states are inaccessible before passing through the future layers.", "To bridge this gap, we propose a simple method to approximate the future states in light of imitation learning (Ross et al., 2011; Nguyen, 2016; Ho and Ermon, 2016).", "We couple each layer with an imitation learner.", "During training, the imitation learner is encouraged to mimic the representation of the real state of that layer.", "Through this layer-wise imitation, we can obtain approximations of the future states with minimum cost.", "(Figure 2: The illustration of the future imitation learning.)", "The illustration of the future imitation learning during inference is shown in Figure 2.", "To be precise, we intend to obtain a state approximation of the $j$-th layer if the forward pass exits at the intermediate $i$-th layer, for any $j > i$.", "During training, we pass through the entire $n$-layer model, but we simulate the situation that the forward pass ends at the $i$-th layer for any $i < n$.", "The $j$-th learner corresponding to the $j$-th layer takes $s_i$ as input and outputs an approximation $s^i_j$ of the real state $s_j$.", "Then $s_j$ serves as a teacher to guide the $j$-th imitation learner.", "We adopt cosine similarity as the distance measure and penalize the discrepancy between the real state $s_j$ and the learned state $s^i_j$.", "Let $L^i_{cos}$ denote the imitation loss for the situation that the forward pass exits at the $i$-th layer; it is computed as the average of the similarity losses for all $j > i$.", "Since the exit layer $i$ can be any number between 2 and $n$ during inference, we go through all possible values of $i$ and average the corresponding $L^i_{cos}$, yielding the overall loss $L_{cos}$: $s^i_j = \mathrm{Learner}_j(s_i)$ (2); $l^{i,j}_{cos}(s_j, s^i_j) = 1 - \frac{s^i_j \cdot s_j}{\|s^i_j\|\,\|s_j\|}$ (3); $L^i_{cos} = \frac{1}{n-i} \sum_{j=i+1}^{n} l^{i,j}_{cos}(s_j, s^i_j)$ (4); $L_{cos} = \frac{1}{n-1} \sum_{i=2}^{n} L^i_{cos}$ (5), where $\|\cdot\|$ denotes the L2 norm.", "Each imitation learner is a simple feed-forward layer with learnable parameters $W_i$ and $b_i$.", "During training, forward propagation is computed over all layers, and all imitation learners are encouraged to generate representations close to the real states.", "During inference, forward propagation proceeds to the $i$-th intermediate layer, and the subsequent imitation learners take the $i$-th real state as input to generate approximations of the future states.", "The approximations are then incorporated into a comprehensive future state $s_f$ with one of the global strategies introduced before: $s_f = G(s^i_{i+1:n})$ (6), where $s^i_{i+1:n}$ denotes the approximations of the states from the $(i+1)$-th layer to the $n$-th layer.",
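The imitation objective of Eqs. (2)-(5) can be sketched as follows; linear layers stand in for the feed-forward imitation learners, and detaching the teacher state is our assumption rather than a detail given in the text.

```python
import torch
import torch.nn.functional as F

n, hidden = 12, 768
# One feed-forward imitation learner per layer (Learner_j in Eq. (2)).
learners = torch.nn.ModuleList(torch.nn.Linear(hidden, hidden) for _ in range(n))

def imitation_loss(real_states, exit_layer):
    """L^i_cos of Eq. (4): average cosine discrepancy (Eq. (3)) between each
    future real state s_j (j > i) and its approximation Learner_j(s_i)."""
    i = exit_layer
    s_i = real_states[i - 1]
    losses = []
    for j in range(i + 1, n + 1):
        s_hat = learners[j - 1](s_i)       # Eq. (2)
        s_j = real_states[j - 1].detach()  # teacher signal (detach assumed)
        losses.append(1 - F.cosine_similarity(s_hat, s_j, dim=-1))  # Eq. (3)
    return torch.stack(losses).mean()

real_states = [torch.randn(hidden) for _ in range(n)]
# Eq. (5): average over simulated exit layers; layer n is skipped here since
# it has no future layers left to imitate.
L_cos = torch.stack([imitation_loss(real_states, i) for i in range(2, n)]).mean()
```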
"We then explore how to adaptively merge the past information and the future information.", "Intuitively, the past state $s_p$ and the future state $s_f$ are of different importance, since the authentic past states are more reliable than our imitated future states.", "In addition, different instances depend differently on the high-level features learned by future layers.", "Therefore, it is indispensable to develop an adaptive method to automatically combine the past state $s_p$ and the future state $s_f$.", "In our work, we design an adaptive merging gate to automatically fuse the past state $s_p$ and the future state $s_f$.", "As the forward propagation proceeds to the $i$-th layer, we compute the reliability $\alpha$ of the past state $s_p$, and the final merged representation is a trade-off between the two states: $\alpha = \mathrm{sigmoid}(\mathrm{FFN}(s_p))$ (7); $z_i = \alpha s_p + (1-\alpha) s_f$ (8), where $z_i$ is the merged final state and $\mathrm{FFN}(\cdot)$ is a linear feed-forward layer of the merging gate.", "During training, each layer can generate the approximated future states and obtain a merged final state, which is used for prediction.", "The model is then updated with the layer-wise cross-entropy loss against the ground-truth label $y$.", "The merging gate adaptively learns to adjust the balance under the supervision signal given by the ground-truth labels.", "However, with the layer-wise optimization objectives, the shallow layers would be updated more frequently, since they receive more updating signals from higher layers.", "To address this issue, we heuristically re-weight the cross-entropy loss of each layer depending on its depth $i$, obtaining its weight $w_i$: $w_i = \frac{i}{\sum_{j=1}^{n} j}$ (9); $p_i = \mathrm{softmax}(z_i)$ (10); $L^i_{ce} = -\sum_{l \in \mathrm{labels}} y(l) \log(p_i(l))$ (11); $L_{ce} = \sum_{i=1}^{n} w_i L^i_{ce}$ (12).", "The overall loss is computed by combining the cross-entropy loss $L_{ce}$ and the imitation loss $L_{cos}$.",
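A sketch of the merging gate (Eqs. (7)-(8)) and the depth-weighted layer-wise cross-entropy (Eqs. (9)-(12)); reading the gate output as one scalar per example is an assumption consistent with the sigmoid-over-FFN formulation.

```python
import torch
import torch.nn.functional as F

class MergingGate(torch.nn.Module):
    """Eqs. (7)-(8): trade off the authentic past state against the imitated
    future state with a learned reliability weight alpha."""
    def __init__(self, hidden: int):
        super().__init__()
        self.ffn = torch.nn.Linear(hidden, 1)

    def forward(self, s_p, s_f):
        alpha = torch.sigmoid(self.ffn(s_p))    # Eq. (7)
        return alpha * s_p + (1 - alpha) * s_f  # Eq. (8)

def layer_weighted_ce(layer_logits, target):
    """Eqs. (9)-(12): deeper layers receive larger weights w_i = i / sum_j j,
    so shallow layers are not over-updated by the layer-wise objectives."""
    n = len(layer_logits)
    denom = n * (n + 1) / 2  # sum of 1..n, the denominator of Eq. (9)
    return sum((i / denom) * F.cross_entropy(logits, target)
               for i, logits in enumerate(layer_logits, start=1))

gate = MergingGate(hidden=768)
z = gate(torch.randn(4, 768), torch.randn(4, 768))  # merged states z_i
loss = layer_weighted_ce([torch.randn(4, 3) for _ in range(12)],
                         torch.tensor([0, 2, 1, 0]))  # 3-way labels
```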
"Here we introduce the fine-tuning technique and the exit condition at the inference stage.", "Fine-tuning: The representations learned by shallow layers have a big impact on performance in the early exit framework, since the prediction largely depends on the states of shallow layers.", "Most existing work updates all of the model layers at each step during fine-tuning to adapt to the data of downstream tasks.", "However, we argue that such an aggressive updating strategy may undermine the well-generalized features learned in the pre-training stage.", "In our work, we try to balance the requirements of maintaining the features learned in pre-training and adapting to the data at the fine-tuning stage.", "Specifically, the parameters of a layer are frozen with a probability $p$, which linearly decreases from 1 to 0 from the first layer to the $L$-th layer.", "Inference: Following Xin et al. (2020), we quantify the prediction confidence $e$ with the entropy of the output distribution $p_i$ of the $i$-th layer: $e(p_i) = \mathrm{Entropy}(p_i)$ (14).", "Inference stops once the confidence $e(p_i)$ falls below a predefined threshold.", "This hyper-parameter is adjusted according to the required speed-up ratio.", "If the exit condition is never reached, our model degrades to the common case of inference in which the complete forward propagation is carried out.", "Experimental Settings: Following previous work, we evaluate our method on six classification datasets from the GLUE benchmark (Wang et al., 2019): SST-2, MRPC, QNLI, RTE, QQP, and MNLI.", "We perform a grid search over learning rates {1e-5, 2e-5, 3e-5, 5e-5}, batch sizes {16, 32, 128}, and the number of frozen layers during fine-tuning {0, 1, 2, 3}.", "The maximum sequence length is fixed to 128.", "We employ a linear-decay learning rate scheduler and the AdamW optimizer.", "In addition, we use the concatenation strategy to incorporate all available states, owing to its best performance on the GLUE dev set.", "Speed Measurement: Since the measurement of runtime might not be stable, following Xin et al. (2020) and Zhou et al. (2020), we manually adjust the exit threshold and calculate the speed-up ratio by comparing the number of actually executed layers in forward propagation with the number of layers in the complete model.", "For an $n$-layer model, the speed-up ratio is: $\text{speed-up ratio} = \frac{\sum_{i=1}^{n} n \cdot m_i}{\sum_{i=1}^{n} i \cdot m_i}$ (15), where $m_i$ is the number of examples that exit at the $i$-th layer of the model.",
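The entropy-based exit check (Eq. (14)) and the speed-up ratio (Eq. (15)) reduce to a few lines; the small epsilon inside the logarithm is our numerical-stability addition.

```python
import torch

def should_exit(logits: torch.Tensor, threshold: float) -> torch.Tensor:
    """Eq. (14): exit once the entropy of the layer's output distribution
    falls below the predefined threshold."""
    p = torch.softmax(logits, dim=-1)
    entropy = -(p * torch.log(p + 1e-12)).sum(dim=-1)
    return entropy < threshold

def speed_up_ratio(exit_counts, n):
    """Eq. (15): exit_counts[i-1] is m_i, the number of examples exiting at
    layer i of an n-layer model."""
    return (sum(n * m for m in exit_counts)
            / sum(i * m for i, m in enumerate(exit_counts, start=1)))

# Example: in a 12-layer model, 60 examples exit at layer 4 and 40 run all
# 12 layers, giving 1200 / 720, i.e. roughly a 1.67x speed-up.
print(speed_up_ratio([0, 0, 0, 60, 0, 0, 0, 0, 0, 0, 0, 40], n=12))
```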
"The proposed method is practical for a range of existing pre-trained language models.", "Without loss of generality, we conduct experiments with several well-known PLMs as backbones, namely BERT, RoBERTa, and ALBERT (Lan et al., 2019).", "Both BERT and RoBERTa suffer from the problem of over-parameterization.", "ALBERT largely alleviates this problem and is very efficient in terms of model size; the results on it verify the effectiveness of our method on such parameter-efficient models.", "We mainly compare our method with other methods targeting a reduction in model depth, including the recent early exit methods and the method that directly reduces the model depth to m layers, denoted as (AL)BERT-mL.", "4.3 Overall Comparison", "We compare our model's performance with the baseline methods when different backbone models are adopted and show the results in Table 1 and Table 2.", "Both PABEE (Zhou et al., 2020) and DeeBERT (Xin et al., 2020) accelerate inference with a highest speed-up ratio of 2×.", "To be consistent, we adjust the exit threshold to obtain a 2× speed-up ratio and report the results in Table 1.", "(Table 1: Model performance on the GLUE test set with different PLMs as backbone.)", "As shown, our method maintains results comparable to the original models on most datasets.", "We also notice that directly reducing layers performs well and serves as a strong baseline.", "Nevertheless, our proposal significantly outperforms this method as well as the other two early exit methods.", "We then adopt a more aggressive 3.00× speed-up ratio to verify the effectiveness of our method.", "According to Table 2, the performance of PABEE and DeeBERT deteriorates badly.", "In contrast, our model exhibits more robust and stable performance, showing its superiority over previous early exit methods.", "Particularly, ALBERT is already very efficient in model size owing to its layer-sharing mechanism.", "The results shown at the bottom of Table 2 suggest that our model can obtain good results with minimal performance loss on such a parameter-efficient model.", "The success of our proposal can be attributed to the global perspective taken for prediction.", "DeeBERT makes predictions with the help of the state of a single branch classifier, leading to less reliable results.", "Although PABEE employs cross-layer prediction to prevent errors from one single classifier, it ignores much of the available information in past states as well as the high-level semantic features captured by future layers.", "Different from those methods, our method jointly takes into consideration the hierarchical linguistic information embedded in all layers and is thus able to produce more accurate results.", "To further verify the robustness and efficiency of our method, we visualize the performance-efficiency trade-off curves in Figure 3 on a representative subset of the GLUE dev set.", "The backbone model is BERT.", "Please refer to Appendix A for the results of RoBERTa and ALBERT.", "As can be seen from Figure 3, the performance of previous state-of-the-art early exit methods drops dramatically when the speed-up ratio increases, which limits their practicality for higher acceleration requirements.",
"By comparison, our method demonstrates more tolerance to higher speed-up ratios.", "It significantly improves performance compared to the previous best-performing early exit models under the same speed-up ratio, especially when the speed-up ratio is high, indicating that it can be applied in a wider range of acceleration scenarios.", "The results of different global strategies on a representative subset of the GLUE dev set are shown in Table 3.", "The naive global strategies, including voting and ensemble, perform poorly, which demonstrates that existing global strategies can only achieve sub-optimal performance.", "In contrast, we design simple yet effective global strategies to incorporate past states, which bring significant improvements compared to the baselines.", "In addition, we empirically find that the concatenation strategy works best from an overall point of view.", "We assume that such a strategy allows interaction among different states, yielding better performance.", "In addition, the effect of the merging gate can be found in Appendix B.", "4.5.2 Analysis of Future Information", "To assess whether and how future information contributes to the prediction, we first evaluate the Global Future version of our early exit method, in which all the approximations of future states are incorporated through the concatenation strategy.", "The effect of future information is backed by the results shown in Table 4.", "We observe that the Global Future mechanism brings improvements on most datasets at both the 2× and 3× speed-up ratios, which confirms that the approximations of future states help enhance the model's prediction ability.", "Beyond that, the future states can be especially advantageous for models with a higher speed-up ratio.", "Recall that approximations of future states supply the missing high-level semantic information, and that an exit at a shallow layer loses more semantic information than an exit at a deep layer.", "Therefore, the benefit of future information is more significant for exits at shallow layers, which is validated by the larger improvement gap at the 3× speed-up ratio.", "We also investigate the effect of future information on exit time.", "Figure 4 demonstrates the distribution of exit layers with and without future information.", "When future information is engaged, we observe that the proportion of exits at shallow layers increases.", "This observation conforms with our intuition: with the approximations of future states supplementing the prediction, the merged state at a shallow layer is able to make a confident and correct prediction.", "Thus the exit time is earlier compared to situations without future states, resulting in a higher speed-up ratio.", "(Table 4: Effect of the approximated future states.)", "To be more specific, for MRPC, the speed-up ratios with and without future states are 1.69 and 1.99, and for MNLI they are 1.92 and 2.04, respectively.",
"Meanwhile, we observe a performance boost when future states are involved.", "This confirms our assumption that the high-level semantic features embedded in future states help improve performance in the early exit framework.", "As an alternative method to accelerate inference, knowledge distillation also exhibits promising performance on NLP tasks.", "We provide a comparison with typical knowledge distillation methods in Table 5.", "The existing model TinyBERT (Jiao et al., 2019) exerts multiple elaborate strategies to achieve state-of-the-art results, including an expensive general distillation process and a vast amount of augmented data for fine-tuning.", "We remove these two techniques to exclude the effect of extra training data.", "Under the same settings, we observe that our method outperforms the distillation methods at the same speed-up ratio.", "The distillation methods are more efficient in saving memory usage, but the downside is that such static methods suffer from a high computational cost to adapt to different speed-up ratios.", "A new student model has to be trained from scratch if the speed-up requirement changes.", "By contrast, dynamic methods are more flexible in meeting different acceleration requirements.", "Concretely, simple instances are processed by passing through fewer layers, while complex instances may require more layers.", "Moreover, the speed-up ratio can be easily adjusted depending on the acceleration requests.", "Nevertheless, early exit and distillation accelerate inference from different perspectives, and these two kinds of techniques can be integrated to further compress the model size and accelerate the inference time.", "We propose a novel Past-Future early exit method from a global perspective.", "Unlike previous work using only local states for prediction, our model employs all available past states for prediction and proposes a novel approach to engage the future states, which are originally inaccessible for prediction.", "Experiments illustrate that our method achieves significant improvement over baseline methods with different models as backbones, suggesting the superiority of our early exit method.", "We thank all the anonymous reviewers for their constructive comments.", "This work is partly supported by National Key R&D Program of China No. 2019YFC1521200 and Beijing Academy of Artificial Intelligence (BAAI).", "Xu Sun is the corresponding author." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "method", "method", "abstain", "abstain", "objective", "objective", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "other", "other", "other" ]
[ "Even to a simple and short news headline, readers react in a multitude of ways: cognitively (e.g. inferring the writer's intent), emotionally (e.g. feeling distrust), and behaviorally (e.g. sharing the news with their friends).", "Such reactions are instantaneous and yet complex, as they rely on factors that go beyond interpreting factual content of news.", "We propose Misinfo Reaction Frames (MRF), a pragmatic formalism for modeling how readers might react to a news headline.", "In contrast to categorical schema, our free-text dimensions provide a more nuanced way of understanding intent beyond being benign or malicious.", "We also introduce a Misinfo Reaction Frames corpus, a crowdsourced dataset of reactions to over 25k news headlines focusing on global crises: the Covid-19 pandemic, climate change, and cancer.", "Empirical results confirm that it is indeed possible for neural models to predict the prominent patterns of readers' reactions to previously unseen news headlines.", "Additionally, our user study shows that displaying machine-generated MRF implications alongside news headlines to readers can increase their trust in real news while decreasing their trust in misinformation.", "Our work demonstrates the feasibility and importance of pragmatic inferences on news headlines to help enhance AI-guided misinformation detection and mitigation.", "Many objects, persons, and experiences in the world are framed in terms of their potential role in supporting, harming, or enhancing people's lives or interests.", "We can know that this is so if we know how to interpret expressions in which such things are evaluated... Charles J. Fillmore (1976) Epidemics and cases of disease in the 21st century are \"staged\" Headline Reader \"they are lying to us.\"", "Effectively predicting how a headline may influence a reader requires knowledge of how readers perceive the intent behind real and fake news.", "While most prior NLP research on misinformation has focused on fact-checking, preventing spread of misinformation goes beyond determining veracity (Schuster et al., 2020; Ren et al., 2021).", "For example, in Figure 1, mistrust in the government may lead readers to share pandemic conspiracy headlines like Epidemics and cases of disease in the 21st century are staged even if they suspect it is misinformation.", "The widespread circulation of misinformation can have serious neg-3108 (cid:90) News Headline (cid:98) Writer's Intent (cid:98) Reader Reaction Spread Real News?", "ative repercussions on readers it can reinforce sociopolitical divisions like anti-Asian hate (Vid-gen et al., 2020; Abilov et al., 2021), worsen public health risks (Ghenai and Mejova, 2018), and undermine efforts to educate the public about global crises (Ding et al., 2011).", "We introduce Misinfo Reaction Frames (MRF), a pragmatic formalism to reason about the effect of news headlines on readers.", "Inspired by Frame semantics (Fillmore, 1976), our frames distill the pragmatic implications of a news headline in a structured manner.", "We capture free-text explanations of readers reactions and perceived author intent, as well as categorical estimates of veracity and likelihood of spread (Table 2).", "We use our new formalism to collect the MRF corpus, a dataset of 202.3k news headline/annotated dimension pairs (69.8k unique implications for 25.1k news headlines) from Covid-19, climate and cancer news.", "We train reaction inference models to predict MRF dimensions from headlines.", "As shown by Table 1, reaction inference 
"As shown by Table 1, reaction inference models can correctly label the veracity of headlines (85% F1) and infer commonsense knowledge, e.g., that a cat being arrested for disobeying curfew implies lockdowns are enforced.", "However, models struggle with more nuanced implications, e.g., that a cat arrested for disobeying curfew implies government incompetence.", "We test the generalization of reaction frame inference on a new cancer domain and achieve 86% F1 by finetuning our MRF model on 574 annotated examples.", "To showcase the usefulness of the MRF framework in user-facing interventions, we investigate the effect of MRF explanations on reader trust in headlines.", "Notably, in a user study our results show that machine-generated MRF inferences affect readers' trust in headlines, and for the best model there is a statistically significant correlation (Pearson's r=0.24, p=0.018) with labels of trustworthiness (5.3).", "Our framework and corpus highlight the need for reasoning about the pragmatic implications of news headlines with respect to reader reactions to help combat the spread of misinformation.", "We publicly release the MRF corpus and trained models to enable further work (https://github.com/skgabriel/mrf-modeling).", "We explore promising future directions (and limitations) in Section 6.", "Motivation for Our Formalism: In contrast to prior work on misinformation detection (Ott et al., 2011; Rubin et al., 2016; Rashkin et al., 2017; Wang, 2017; Hou et al., 2019; Volkova et al., 2017; Jiang and Wilson, 2018), which mostly focuses on linguistic or social media-derived features, we focus on the potential impact of a news headline by modeling readers' reactions.", "This approach aims to better understand how misinformation can be countered, as it has been shown that interventions from AI agents are better at influencing readers than strangers (Kulkarni and Chi, 2013).", "In order to model impact, we build upon prior work that aims to describe the rich interactions involved in human communication, including semantic frames (Fillmore, 1976), the encoder-decoder theory of media (Hall, 1973), which proposes that before an event is communicated, a narrative discourse encoding the objectives of the writer is generated, Grice's conversational maxims (Grice, 1975), and the rational speech act model (Goodman and Frank, 2016), in which pragmatic interpretation is framed as a probabilistic reasoning problem.", "By describing these interactions with free-text implications invoked by a news headline, we also follow prior work on pragmatic frames of connotation and social biases (Speer and Havasi, 2012; Rashkin et al., 2018; Sap et al., 2019, 2020; Forbes et al., 2020).",
"While approaches like rational speech acts model both a pragmatic speaker and listener, we take a reader-centric approach to interpreting the intent of a news headline, given that the writer's intent is challenging to recover in the dynamic environment of social media news sharing (Starbird et al., 2019).", "By bridging communication theory, data annotation schema and predictive modeling, we define a concrete framework for understanding the impact of a news headline on a reader.", "Defining the Frame Structure: Table 1 shows real and misinformation news examples from our dataset, with headlines obtained from the sources described in Section 3.1.", "We pair these headline examples with generated reaction frame annotations from the MRF corpus.", "Each reaction frame contains the dimensions in Table 2.", "We elicit annotations based on a news headline, which summarizes the main message of an article.", "We explain this further in Section 3.1.", "An example headline is Covid-19 may strike more cats than believed.", "To simplify the task for annotators and ground implications in real-world concerns, we define these implications as relating to one of 7 common themes (e.g. technology or government entities) appearing in Covid and climate news; we use a subset of the data (approx. 200 examples per news topic) to manually identify themes.", "We list all the themes in Table 3, with some themes being shared between topics.", "To construct a corpus for studying reader reactions to news headlines, we obtain 69,885 news implications (see Section 3.1) by eliciting annotations for 25,164 news headlines (11,757 Covid-related articles, 12,733 climate headlines and 674 cancer headlines).", "There are two stages of corpus collection: (1) news data collection and (2) crowdsourced annotation.", "A number of definitions have been proposed for labeling news articles based on reliability.", "To scope our task, we focus on false news that may be unintentionally spread (misinformation).", "This differs from disinformation, which assumes a malicious intent or desire to manipulate (Fallis, 2014).", "We examine reliable and unreliable headlines extracted from two domains with widespread misinformation: Covid-19 (Hossain et al., 2020) and climate change (Lett, 2017).", "We additionally test on cancer news (Cui et al., 2020) to measure out-of-domain performance.", "Climate Change Dataset: We retrieve both trustworthy and misinformation headlines related to climate change from NELA-GT-2018-2020 (Gruppi et al., 2020; Norregaard et al., 2019), a dataset of news articles from 519 sources.", "Each source in this dataset is labeled with a 3-way trustworthiness score (reliable / sometimes reliable / unreliable).", "We discard articles from sometimes reliable sources, since the most appropriate label under a binary labeling scheme is unclear.", "To identify headlines related to climate change, we use keyword filtering: we kept any article headline that contained at least one of environment, climate, greenhouse gas, or carbon tax, and removed noisy matches with manual cleaning.", "We also use claims from the SciDCC dataset (Mishra and Mittal, 2021), which consists of 11k real news articles from ScienceDaily (https://www.sciencedaily.com/), and Climate-FEVER (Diggelmann et al., 2020), which consists of more than 1,500 true and false climate claims.",
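The climate keyword filter just described reduces to a simple membership test, as in the sketch below; the manual cleaning pass that removes noisy matches is not reproduced.

```python
CLIMATE_KEYWORDS = ("environment", "climate", "greenhouse gas", "carbon tax")

def is_climate_headline(headline: str) -> bool:
    """Keep any headline containing at least one climate keyword."""
    text = headline.lower()
    return any(kw in text for kw in CLIMATE_KEYWORDS)

headlines = [
    "New carbon tax proposal announced",  # kept
    "Local team wins championship",       # filtered out
]
print([h for h in headlines if is_climate_headline(h)])
```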
"Note that themes are not disjoint and a news article may capture aspects of multiple themes.", "(Table 3: Themes present in articles by each news topic; the themes are Climate Statistics, Natural Disasters, Entertainment, Ideology, Disease Transmission, Disease Statistics, Health Treatments, Protective Gear, Government Entities, Society, and Technology.)", "Covid-19 Dataset: For trustworthy news regarding Covid-19, we use the CoAID dataset (Cui and Lee, 2020) and a Covid-19 related subset of NELA-GT-2020 (Gruppi et al., 2020).", "CoAID contains 3,565 news headlines from reliable sources.", "These headlines contain Covid-19 specific keywords and are scraped from nine trustworthy outlets (e.g. the World Health Organization).", "For unreliable news (misinformation), we use The CoronaVirusFacts/DatosCoronaVirus Alliance Database, a dataset of over 10,000 mostly false claims related to Covid-19, and the ESOC Covid-19 Misinformation Dataset, which consists of over 200 additional URLs for (mis/dis)information examples.", "These claims originate from social media posts, manipulated media, and news articles that have been manually reviewed and summarized by fact-checkers.", "The data also includes some claims for which there is not enough information to infer a label.", "Cancer Dataset: We construct an evaluation set for testing out-of-domain performance using cancer real and misinformation headlines from Cui et al. (2020).", "In this section we outline the structured annotation interface used to collect the dataset.", "Statistics for the full dataset are provided in Table 4.",
"Annotation Task Interface: We use the Amazon Mechanical Turk (MTurk) crowdsourcing platform (https://www.mturk.com/).", "We provide Figure 2 in the Appendix to show the layout of our annotation task.", "For ease of readability during annotation, we present a headline summarizing the article to annotators, rather than the full text of the article (these news events are either article headlines or claims).", "Annotators then rate veracity and likelihood of spread based on the headline, as well as providing free-text responses for writer intent, reader perception and reader action.", "We structure the annotation framework around the themes described in Section 2.", "Quality Control: We use a three-stage annotation process to ensure quality control.", "In the initial pilot, we select a pool of pre-qualified workers by restricting to workers located in the US who have had at least 99% of their human intelligence tasks (HITs) approved and have had at least 5,000 HITs approved.", "We approved workers who consistently submitted high-quality annotations for the second stage of our data annotation, in which we assessed the ability of workers to discern between misinformation and real news.", "We removed workers whose accuracy at predicting the label (real/misinfo) of news headlines fell below 70%.", "Our final pool consists of 80 workers who submitted at least three annotations during the pilot tasks.", "We achieve pairwise agreement of 79% on the label predicted by annotators during stage 3, which is comparable to prior work on Covid misinformation (Hossain et al., 2020).", "To account for chance agreement, we also measure Cohen's Kappa = .51, which is considered moderate agreement.", "Additional quality control measures were taken as part of our extensive annotation post-processing.", "For details, see Appendix A.2.", "Annotator Demographics: We provided an optional demographic survey to MTurk workers during annotation.", "Of the 63 annotators who reported ethnicity, 82.54% identified as White, 9.52% as Black/African-American, 6.35% as Asian/Pacific Islander, and 1.59% as Hispanic/Latino.", "For self-identified gender, 59% were male and 41% were female.", "Annotators were generally well-educated, with 74% reporting having a professional degree, college-level degree or higher.", "Most annotators were between the ages of 25 and 54 (88%).", "We also asked annotators for their preferred news sources.", "The New York Times, CNN, Twitter, Washington Post, NPR, Reddit, Reuters, BBC, YouTube and Facebook were reported as the 10 most common news sources.", "We test the ability of large-scale language models to predict Misinfo Reaction Frames.", "For free-text inferences (e.g. writer intent, reader perception), we use generative language models, specifically T5 encoder-decoder (Raffel et al., 2020) and GPT-2 decoder-only models (Radford et al., 2019).", "For categorical inferences (e.g. the gold label), we use either generative models or BERT-based discriminative models (Devlin et al., 2019).", "We compare neural models to a simple retrieval baseline (BERT-NN), where we use the gold implications aligned with the most similar headline from the training set (similarity is measured between headlines embedded with MiniLM, a distilled transformer model (Wang et al., 2020), using the Sentence-BERT package (Reimers and Gurevych, 2019)).", "4.1 Controlled Generation",
"For generative models, we use the following input sequence: $x = h_1 \ldots h_T \| s_d \| s_t$, where $h$ is a headline of length $T$ tokens, $s_t \in \{[covid], [climate]\}$ is a special topic control token, and $s_d$ is a special dimension control token representing one of the six reaction frame dimensions.", "Here $\|$ represents concatenation.", "The output is a short sequence representing the predicted inference (e.g. to protest for reader action, misinfo for the gold label).", "For GPT-2 models we also append the gold output inference $y = g_1 \ldots g_N$ during training, where $N$ is the length of the inference.", "Inference: We predict each token of the output inference starting from the topic token $s_t$ until the [eos] special token is generated.", "In the case of data with unknown topic labels, this allows us to jointly predict the topic label and the output inference.", "We decode using beam search, since generations produced by beam search are known to be less diverse but more accurate.", "For discriminative models, we use an input sequence that wraps the headline in [CLS] and [SEP], the model-specific special tokens.", "The output is a categorical inference.", "All our models are optimized using cross-entropy loss, where generally for a sequence of tokens $t$, $\mathcal{L} = -\sum_i \log P_\theta(t_i \mid t_{<i})$.", "Here $P_\theta$ is the probability given a particular language model $\theta$.", "Since GPT-2 does not explicitly distinguish between the input and output (target) sequence during training, we take the loss with respect to the full sequence.", "For T5 we take the loss with respect only to the output.", "To improve the generalization of MRF models, we use an additional masked fine-tuning step.", "We first train a language model on a set of Covid-19 training examples $D_{covid}$ and climate training examples $D_{climate}$.", "Then we use the TextRank algorithm (Mihalcea and Tarau, 2004) to find salient keyphrases in $D_{covid}$ and $D_{climate}$, which we term $k_{covid}$ and $k_{climate}$ respectively.", "We determine domain-specific keyphrases by removing the shared keyphrases, $\hat{k}_{covid} = k_{covid} \setminus (k_{covid} \cap k_{climate})$ and $\hat{k}_{climate} = k_{climate} \setminus (k_{covid} \cap k_{climate})$, and only keep the top 100 keyphrases for each domain.", "We mask out these keyphrases in the training examples from $D_{covid}$ and $D_{climate}$ by replacing them with a <mask> token.", "Then we continue training by fine-tuning on the masked examples.", "A similar approach has been shown to improve generalization and reduce shortcutting of reasoning in models for event detection (Liu et al., 2020).",
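A rough sketch of the keyphrase-masking preprocessing for masked fine-tuning follows; TextRank extraction itself is not reproduced, and the example keyphrases are hypothetical rather than the actual top-100 lists.

```python
import re

def mask_domain_keyphrases(text, domain_keyphrases):
    """Replace domain-specific keyphrases (TextRank phrases unique to one
    domain) with a <mask> token before the second round of fine-tuning.
    Longer phrases are replaced first to avoid partial overlaps."""
    for phrase in sorted(domain_keyphrases, key=len, reverse=True):
        text = re.sub(re.escape(phrase), "<mask>", text, flags=re.IGNORECASE)
    return text

# Hypothetical Covid-specific keyphrases; the real list would be the top 100
# TextRank phrases appearing only in the Covid portion of the corpus.
covid_keyphrases = {"covid-19", "coronavirus", "lockdown"}
print(mask_domain_keyphrases(
    "Coronavirus lockdown extended amid rising Covid-19 cases",
    covid_keyphrases))
# -> "<mask> <mask> extended amid rising <mask> cases"
```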
, "These metrics include the BLEU-4 n-gram overlap metric (Papineni et al., 2002) and BERTScore (Zhang et al., 2020), a model-based metric for measuring semantic similarity between generated inferences and references.", "For classification we report macro-averaged precision, recall and F1 scores, computed using scikit-learn (https://scikit-learn.org/stable/index.html); for measuring likelihood of spread, predicted and averaged values are rounded to the nearest integer.", "We use publicly available implementations for all metrics (nltk, https://www.nltk.org/, for BLEU).", "For human evaluation, we assess generated inferences using the same pool of qualified workers who annotated the original data.", "We randomly sample model-generated writer's intent implications from T5 models and GPT-2 large over 196 headlines where generated implications were unique for each model type (98 misinfo and 98 real headlines in the dev. set).", "We elicit 3 unique judgements per headline.", "Implications are presented to annotators in a templated format.", "[Table 5 (dev. set): BLEU-4/BERTScore per dimension; BERT-NN: writer intent 31.45/86.29, reader perception 35.69/91.04, reader action 45.47/84.76; T5-base: writer intent 51.48/88.03, reader perception 31.98/92.87, reader action 53.55/85.27.]", "Overall Quality We ask the annotators to assess the overall quality of generated implications on a 1-5 Likert scale (i.e. whether they are coherent and relevant to the headline without directly copying).", "Influence on Trust We measure whether generated implications impact readers' perception of news reliability by asking annotators whether a generated implication makes them perceive the news headline as more (+) or less (-) trustworthy.", "Perceived Sociopolitical Acceptability We ask annotators to rate their perception of the beliefs invoked by an implication in terms of whether they represent a majority (mainstream) or minority (fringe) viewpoint; we refer to minority viewpoint broadly in terms of less frequently adopted or extreme social beliefs, rather than in terms of viewpoints held by historically marginalized groups.", "A/B Testing For A/B testing, annotators are initially shown the headline with the generated implication hidden.", "We ask annotators to rate trustworthiness of headlines on a 1-5 Likert scale, with 1 being clearly misinformation and 5 being clearly real news.", "After providing this rating, we reveal the generated implication to annotators and have them rate the headline again on the same scale.", "Annotators were not told whether or not implications were machine-generated, and we advised annotators to judge each headline on its own merits.", "Results We found that the T5-large model was rated as having slightly higher quality generations than the other model variants (Table 6).", "Most model generations were rated as being socially acceptable.", "However, in as many as 25.34% of judgements, generations were found to be not socially acceptable.", "Interestingly, all models were rated capable of influencing readers to trust or distrust headlines, but effectiveness is dependent on the quality of the generated implication.", "In particular for T5-base, we found a consistent correlation between the actual label and shifts in trustworthiness scores before and after annotators see the generated writer's intent: while for most models the trend is a decrease in trust for both real news and misinformation, for the T5-base model there is a statistically significant correlation of Pearson's r = .24 showing that shifts in trust align with gold labels.", "Annotators reported that writer intents made real news appear more trustworthy and misinformation less trustworthy."
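For concreteness, the following sketch computes the two automatic generation metrics with the publicly available implementations mentioned above (nltk for BLEU-4, the bert-score package for BERTScore). The candidate and reference strings are invented examples.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from bert_score import score as bert_score

reference = ["to scare readers away from vaccines"]
candidate = "to make readers afraid of vaccines"

# BLEU-4 with uniform n-gram weights; smoothing avoids zero scores on
# short sequences like these inferences.
bleu4 = sentence_bleu(
    [reference[0].split()], candidate.split(),
    weights=(0.25, 0.25, 0.25, 0.25),
    smoothing_function=SmoothingFunction().method1,
)
# BERTScore returns precision/recall/F1 tensors, one entry per candidate.
precision, recall, f1 = bert_score([candidate], reference, lang="en")
print(f"BLEU-4: {bleu4:.3f}, BERTScore F1: {f1.item():.3f}")
```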
, "5.4 Detecting Misinformation To test if we can detect misinformation using propagandistic content like loaded or provocative language (e.g. 'Covid-19 vaccines may be the worst threat we face'), we use a pre-trained BERT propaganda detector (Da San Martino et al., 2019), which we denote here as Prop-BERT; the model predicts if any of 18 known propaganda techniques are used to describe a news event (see the paper for the full list).", "For our zero-shot setting, we classify a news event as real if it is not associated with any propaganda techniques and misinformation otherwise (a sketch of this rule appears at the end of this passage).", "As shown by Table 7, F1 results are considerably lower than task-specific models.", "This is likely due to the fact that both real news and misinformation use propaganda techniques.", "Neural misinformation detection models are able to outperform humans at identifying misinformation, achieving a max F1 of 85.26 compared to human performance F1 of 75.21 (we count disagreements as being labeled misinformation here; discarding disagreements leads to F1 of 74.97), but this is still a nontrivial task for large-scale models.", "When we use Covid-BERT (Muller et al., 2020), a variant of BERT pretrained on 160M Covid-related tweets, we see an improvement of 5.46% over BERT without domain-specific pretraining (Table 7).", "This indicates greater access to domain-specific data helps in misinformation detection, even if the veracity of that data is unknown.", "Performance on Out-of-Domain Data We test the ability of reaction frames to generalize using 100 cancer-related real and misinformation health news headlines (Cui et al., 2020); see Table 7.", "For the misinformation detection task, we evaluate gold F1 using the Prop-BERT zero-shot model, MRF-finetuned BERT-large, Covid-BERT, T5-large and GPT-2 large models.", "We observe that after one epoch of re-training, masked fine-tuning substantially boosts unsupervised performance of generative MRF models (GPT-2 large + masked and T5-large + masked), making them more robust than BERT variants.", "We compare this performance against the T5-large and GPT-2 large model finetuned on only 574 cancer examples (GPT-2 large + sup and T5-large + sup), and observe that this leads to a performance increase of up to 43.49%, achieving similar F1 performance to our domains with full data supervision.", "Our framework presents new opportunities for studying perceived intent and impact of misinformation.", "We can estimate content virality.", "Given the user-annotated labels for likelihood of reading or sharing, we can estimate whether the information in the associated article is likely to propagate.", "We can identify common themes in misinformation headlines.", "Using annotated writer intents, we can determine common themes and perceived intentions in misinformation headlines across domains (e.g. mistrust of vaccination across medical domains).", "Given the performance of predictive models highlighted by Tables 5 and 6, we can also extend this analysis to unseen headlines."
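The zero-shot decision rule described for Prop-BERT above amounts to a one-line classification. The sketch below assumes a hypothetical detect_techniques wrapper around the propaganda detector; it is not an actual API of that model.

```python
from typing import Callable, List

def zero_shot_label(headline: str,
                    detect_techniques: Callable[[str], List[str]]) -> str:
    """Label a headline 'real' only if no propaganda technique fires."""
    techniques = detect_techniques(headline)  # e.g. ["loaded_language"]
    return "real" if len(techniques) == 0 else "misinfo"

# Toy stand-in detector, for illustration only.
fake_detector = lambda h: ["loaded_language"] if "worst threat" in h else []
print(zero_shot_label("Covid-19 vaccines may be the worst threat we face",
                      fake_detector))  # -> misinfo
```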
, "We can categorize headlines by severity of likely outcomes.", "False headlines that explicitly incite violence, or otherwise encourage actions that lead to psychological or physical harm (e.g. not vaccinating) may be deemed more malicious than false headlines with more benign consequences (e.g. some examples of satire).", "Future work may explore categorizing severity of headlines based on potential harms resulting from implications.", "We can determine which headlines may fool readers.", "We can use these labels to determine which types of misinformation headlines appear most like real news to generally knowledgeable readers.", "These may also help in designing misinformation countering systems and better adversarial examples to improve robustness of misinformation detection models.", "We can generate counter-narratives to misinformation.", "Our results indicate it is possible to generate effective explanations for the intent of headlines that discourage trust in misinformation (Section 5.3); see Appendix A.5 for examples.", "We encourage future work that further improves performance of these models (e.g. through integration of domain knowledge).", "Limitations Given these future directions, we also consider key limitations which must be addressed if we move beyond viewing Misinfo Reaction Frames as a proof-of-concept and use the dataset as part of a large-scale system for evaluating or countering misinformation.", "Since we focus on news headlines, the context is limited.", "The intent of a headline may be different from the actual intent of the corresponding article, especially in the case of clickbait.", "We find headlines to be suitable as online readers often share headlines without clicking on them (Gabielkov et al., 2016); however, future work may explore extending reaction frames to full news articles.", "There is also annotator and model bias.", "Readers involved in our data curation and human evaluation studies are generally knowledgeable, as proved by their ability to discern misinformation from real news.", "We see this bias as a potential strength as it allows us to find ways to counter misinformation in cases where readers are well-informed but still believe false information.", "However, annotators may have undesirable political or social biases.", "In such cases, gender bias may lead an annotator to assume that a politician mentioned in a headline is male or to dismiss inequality concerns raised by a scientist belonging to a minority group as playing the race card.", "These biases can also appear in pre-training data, leading to model bias.", "Subjectivity in annotation is a point of discussion in many pragmatic-oriented tasks, e.g. 
social norm prediction (Jiang et al., 2021) and toxicity detection (Halevy et al., 2021; Sap et al., 2021).", "We encourage conscious efforts to recruit diverse pools of annotators so multiple perspectives are considered, and future work on modeling reaction frames can consider learning algorithms that mitigate harmful effects of biases, depending on use case (Khalifa et al., 2021; Gordon et al., 2022).", "Lastly, we only consider English-language news and annotate with workers based in the US.", "It may be that news headlines would be interpreted differently in other languages and cultures.", "We introduced Misinfo Reaction Frames, a pragmatic formalism for understanding reader perception of news reliability.", "We show that machine-generated reaction frames can change perceptions of readers, and while large-scale language models are able to discern between real news and misinformation, there is still room for future work.", "Generated reaction frames can potentially be used in a number of downstream applications, including better understanding of event causality, empathetic response generation and as counter-narratives.", "Removing these examples from data curation or trying to control for annotator neutrality does not erase the causes that lead to the existence of these biases.", "The fact that harmful biases can manifest in the viewpoints of informed readers speaks to the pervasiveness of certain stereotypes.", "There is a risk of frame-based machine-generated reader interpretations being misused to produce more persuasive misinformation.", "However, understanding the way in which readers perceive and react to news is critical in determining what kinds of misinformation pose the greatest threat and how to counteract its effects.", "Furthermore, while transformer models have contributed to much of the recent algorithmic progress in NLP research and are the most powerful computational models available to us, work has highlighted shortcomings in their performance on domain-specific text (Moradi et al., 2021) and noted that these models can easily detect their own machine-generated misinformation (Zellers et al., 2019).", "Therefore, we do not see this potential dual-use case as an imminent threat, but urge implementation of systemic changes that would discourage such an outcome in the future, e.g. 
regulation that would lead to required safety and fairness measures before large-scale systems are deployed in the wild (European Commission, 2021).", "We emphasize that annotations may reflect perceptions and beliefs of annotators, rather than universal truths (Britt et al., 2019).", "Especially considering demographic homogeneity of online crowd-source workers, we urge caution in generalizing beliefs or taking beliefs held in certain social/cultural contexts to be factual knowledge.", "We obtained an Institutional Review Board (IRB) exemption for annotation work, and ensured annotators were fairly paid given time estimations.", "Broader impact The rapid dissemination of information online has led to an increasing problem of falsified or misleading news spread on social media like Twitter, Reddit and Facebook (Vosoughi et al., 2018; Geeng et al., 2020).", "We specifically designed the Misinfo Reaction Frames formalism to allow us to identify and predict high-impact misinformation that is more likely to spread.", "This can allow for future research on factors that make misinformation particularly dangerous, as well as systems that are more effective at mitigating spread.", "The authors thank members of the DARPA SemaFor program, UW NLP, the UW CSE 599 social computing class and Amy X. Zhang for helpful discussions, as well as the anonymous reviewers and Akari Asai for comments on the draft.", "This research is supported in part by NSF (IIS-1714566), NSF (2041894), DARPA MCS program through NIWC Pacific (N66001-19-2-4031), DARPA SemaFor program, and Allen Institute for AI." ]
[ "abstain", "abstain", "objective", "objective", "method", "abstain", "result", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "objective", "result", "result", "other", "objective", "other", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other" ]
[ "Recent approaches to question generation have used modifications to a Seq2Seq architecture inspired by advances in machine translation.", "Models are trained using teacher forcing to optimise only the one-step-ahead prediction.", "However, at test time, the model is asked to generate a whole sequence, causing errors to propagate through the generation process (exposure bias).", "A number of authors have suggested that optimising for rewards less tightly coupled to the training data might counter this mismatch.", "We therefore optimise directly for various objectives beyond simply replicating the ground truth questions, including a novel approach using an adversarial discriminator that seeks to generate questions that are indistinguishable from real examples.", "We confirm that training with policy gradient methods leads to increases in the metrics used as rewards.", "We perform a human evaluation, and show that although these metrics have previously been assumed to be good proxies for question quality, they are poorly aligned with human judgement and the model simply learns to exploit the weaknesses of the reward source.", "Posing questions about a document in natural language is a crucial aspect of the effort to automatically process natural language data, enabling machines to ask clarification questions (Saeidi et al., 2018), become more robust to queries (Yu et al., 2018), and to act as automatic tutors (Heilman and Smith, 2010).", "Recent approaches to question generation have used Seq2Seq (Sutskever et al., 2014) models with attention (Bahdanau et al., 2014) and a form of copy mechanism (Vinyals et al., 2015; Gulcehre et al., 2016).", "Such models are trained to generate a plausible question, conditioned on an input document and answer span within that document (Zhou et al., 2018; Du et al., 2017; Du and Cardie, 2018; Yuan et al., 2017).", "There are currently no dedicated question generation datasets, and authors have used the context-question-answer triples available in SQuAD (Rajpurkar et al., 2016).", "Only a single question is available for each context-answer pair, and models are trained using teacher forcing (Williams and Zipser, 1989).", "This lack of diverse training data combined with the one-step-ahead training procedure exacerbates the problem of exposure bias (Ranzato et al., 2015).", "The model does not learn how to distribute probability mass over sequences that are valid but different to the ground truth; during inference, the model must predict the whole sequence, and may not be robust to mistakes during decoding.", "Recent work has investigated training the models directly on a performance-based objective, either by optimising for BLEU score (Kumar et al., 2018a) or other quality metrics (Yuan et al., 2017).", "By decoupling the training procedure from the ground truth data, the model is able to explore the space of possible questions and learn to recover from suboptimal predictions during decoding.", "While the metrics used seem to be intuitively good choices, there is an assumption that they are good proxies for question quality which has not yet been confirmed.", "Our contributions are as follows.", "We perform fine tuning using a range of rewards, including a novel adversarial objective that directly estimates the probability that a question was generated or came from the ground truth data.", "We show that although fine tuning leads to increases in reward scores, the resulting models perform worse when evaluated by human workers."
, "We also demonstrate that the generated questions exploit weaknesses in the reward models.", "[Table 1 example context: 'although united methodist practices and interpretation of beliefs have evolved over time, these practices and beliefs can be traced to the writings of the church's founders, especially john wesley and charles wesley (anglicans), but also ...']", "Many of the advances in natural language generation have been led by machine translation (MT) (Sutskever et al., 2014; Bahdanau et al., 2014; Gulcehre et al., 2016).", "Previous work on question generation has made extensive use of MT techniques.", "Du et al. (2017) use a Seq2Seq based model to generate questions conditioned on context-answer pairs, and build on this work by preprocessing the context to resolve coreferences and adding a pointer network (Du and Cardie, 2018).", "Similarly, Zhou et al. (2018) use a part-of-speech tagger to augment the embedding vectors.", "Both authors perform a human evaluation of their models, and show significant improvement over their baseline.", "Kumar et al. (2018a) use a similar model, but apply it to the task of generating questions without conditioning on a specific answer span.", "Song et al. (2018) use a modified context encoder based on multi-perspective context matching (Wang et al., 2016).", "Kumar et al. (2018b) propose a framework for fine tuning using policy gradients and perform a human evaluation showing promising results.", "However, they use as rewards various similarity metrics that are still coupled to the ground truth.", "Yuan et al. (2017) describe a Seq2Seq model with attention and a pointer network, with an additional encoding layer for the answer.", "They also describe a method for further tuning their model using policy gradients, with rewards given by an external language model and question answering (QA) system.", "Unfortunately they do not perform any human evaluation to determine whether this tuning led to improved question quality.", "For the related task of summarisation, Paulus et al. (2017) propose a framework for fine tuning a summarisation model using reinforcement learning, with the ROUGE similarity metric used as the reward.", "The task is to generate a natural language question, conditioned on a document and the location of an answer within that document.", "For example, given the input document 'this paper investigates rewards for question generation' and answer 'question generation', the model should produce a question such as 'what is investigated in the paper?'.", "3.1 Model description We use the model architecture described by Yuan et al. (2017).", "Briefly, this is a Seq2Seq model (Sutskever et al., 2014) with attention (Bahdanau et al., 2014) and copy mechanism (Vinyals et al., 2015; Gulcehre et al., 2016).", "Yuan et al. (2017) also add an additional answer encoder layer, and initialise the decoder with a hidden state constructed from the final state of the encoder.", "Beam search (Graves, 2012) is used to sample from the model at inference time.", "We train the model using maximum likelihood before fine tuning."
, "Our implementation achieves a BLEU-4 score (Papineni et al., 2002) of 13.5 on the test set used by Du et al. (2017), before fine tuning.", "Generated questions should be formed of language that is both fluent and relevant to the context and answer.", "Following Yuan et al. (2017), we perform fine tuning on a trained model, using rewards given either by the negative perplexity under a LSTM language model, or the F1 score attained by a question answering (QA) system, or a weighted combination of both.", "The language model is a standard recurrent neural network formed of a single LSTM layer.", "For the QA system, we use QANet (Yu et al., 2018) as implemented by Kim (2018).", "Additionally, we propose a novel approach by learning the reward directly from the training data, using a discriminator detailed in Appendix A.", "We generate questions for each context-answer pair in the training set using a generator trained by maximum likelihood, and train the discriminator to predict whether an input question was generated by our model, or originated from the training data.", "Keeping the discriminator fixed, we then fine-tune the generator, using as reward the probability estimated by the discriminator that a generated question was in fact real.", "In other words, the generator is rewarded for successfully fooling the discriminator.", "We also experiment with interleaving updates to the discriminator within the fine tuning phase, allowing the discriminator to become adversarial and adapt alongside the generator.", "The rewards described above are used to update the model parameters via the REINFORCE policy gradient algorithm (Williams, 1992); a simplified sketch of this update appears at the end of this passage.", "We teacher force the decoder with the generated sequence to reproduce the activations calculated during beam search, to enable backpropagation.", "All rewards are normalised with a simple form of PopArt (Hasselt et al., 2016), with the running mean μ_R and standard deviation σ_R updated online during training.", "We continue to apply a maximum likelihood training objective during this fine tuning.", "We report the negative log-likelihood (NLL) of the test set under the different models, as well as the corpus level BLEU-4 score (Papineni et al., 2002) of the generated questions compared to the ground truth.", "We also report the rewards achieved on the test set.", "[Figure 1: (a) QA scores plotted against human relevance scores for all rated questions; (b) LM scores plotted against human fluency scores for all rated questions.]", "For the human evaluation, we follow the standard approach in evaluating machine translation systems (Koehn and Monz, 2006), as used for question generation by Du and Cardie (2018).", "We ask three workers to rate 300 generated questions between 1 (poor) and 5 (good) on two separate criteria: the fluency of the language used, and the relevance of the question to the context document and answer.", "Table 2 shows the changes in automatic metrics for models fine tuned on various combinations of rewards, compared to the model without tuning.", "In all cases, the BLEU score reduces, as the training objective is no longer closely coupled to the training data.", "In general, models achieve better scores on the metrics on which they were fine tuned.", "Jointly training on a QA and LM reward results in better LM scores than training on only a LM reward; the LM score did not increase smoothly when used as the sole objective, and we believe the additional QA reward acts as a form of regularisation."
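As a rough sketch of the fine-tuning update described in this section, the snippet below normalises a scalar reward with online mean and standard deviation estimates (in the spirit of PopArt) and applies a REINFORCE loss to the log-probabilities of a sampled question. Tensor shapes and the reward value are illustrative assumptions.

```python
import torch

class RunningNorm:
    """Online estimate of reward mean/variance for normalisation."""
    def __init__(self, beta: float = 0.99):
        self.mean, self.var, self.beta = 0.0, 1.0, beta
    def update(self, r: float) -> float:
        self.mean = self.beta * self.mean + (1 - self.beta) * r
        self.var = self.beta * self.var + (1 - self.beta) * (r - self.mean) ** 2
        return (r - self.mean) / (self.var ** 0.5 + 1e-8)

norm = RunningNorm()
log_probs = torch.randn(12, requires_grad=True)  # log p of sampled tokens
reward = 0.73                                    # e.g. QA F1 or an LM score
advantage = norm.update(reward)
loss = -(advantage * log_probs.sum())            # REINFORCE objective
loss.backward()                                  # gradients for the generator
```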
, "We conclude that fine tuning using policy gradients can be used to attain higher rewards, as expected.", "Table 3 shows the human evaluation scores for a subset of the fine tuned models.", "The model fine tuned on a QA and LM objective is rated as significantly worse by human annotators, despite achieving higher scores in the automatic metrics.", "In other words, the training objective given by these reward sources does not correspond to true question quality, despite them being intuitively good choices.", "The model fine tuned with the adversarial objective also received lower ratings, with the discriminator model unable to learn a useful reward source.", "Although the training process was stable and robust to different initialisations, and the outputs do not appear to be significantly worse, we conclude that the discriminator was unable to learn a sufficiently useful distinction between generated and real questions, and the additional fine tuning procedure simply added unwanted noise to the model predictions.", "Table 1 shows an example where fine tuning has not only failed to improve the quality of generated questions, but has caused the model to exploit the reward source.", "The model fine tuned on a LM reward has degenerated into producing a loop of words that is evidently deemed probable, while the model trained on a QA reward has learned that it can simply point at the location of the answer.", "This observation is supported by the metrics; the model fine tuned on a QA reward has suffered a catastrophic worsening in LM score of +226.", "Figure 1 shows the automatic scores against human ratings for all rated questions.", "The correlation coefficient between human relevance and automatic QA scores was 0.439, and between fluency and LM score was only 0.355.", "While the automatic scores are good indicators of whether a question will achieve the lowest human rating or not, they do not differentiate clearly between the higher ratings: training a model on these objectives will not necessarily learn to generate better questions.", "A good question will likely attain a high QA and LM score, but the inverse is not true; a sequence may exploit the weaknesses of the metrics and achieve a high score despite being unintelligible to a human.", "We conclude that fine tuning a question generation model on these rewards does not lead to better quality questions.", "In this paper, we investigate the use of external reward sources for fine tuning question generation models to counteract the lack of task-specific training data.", "We show that although fine tuning can be used to attain higher rewards, this does not equate to better quality questions when rated by humans.", "Using QA and LM rewards as a training objective causes the generator to expose the weaknesses in these models, which in turn suggests a possible use of this approach for generating adversarial training examples for QA models.", "The QA and LM scores are well correlated with human ratings at the lower end of the scale, suggesting they could successfully be used as part of a reranking or filtering system.", "We plan to research overgenerating questions and using the reward signals to rerank the outputs, thereby including the inductive bias the rewards represent without allowing the model to exploit them." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "objective", "abstain", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "method" ]
[ "Natural language inference requires reasoning about contradictions, negations, and their commonsense implications.", "Given a simple premise (e.g., I'm mad at you), humans can reason about the varying shades of contradictory statements ranging from straightforward negations (I'm not mad at you) to commonsense contradictions (I'm happy).", "Moreover, these negated or contradictory statements shift the commonsense implications of the original premise in nontrivial ways.", "For example, while I'm mad implies I'm unhappy about something, negating the premise (i.e., I'm not mad) does not necessarily negate the corresponding commonsense implications.", "In this paper, we present the first comprehensive study focusing on commonsense implications of negated statements and contradictions.", "We introduce ANION, a new commonsense knowledge graph with 624K if-then rules focusing on negated and contradictory events (data and code available at https://github.com/liweijiang/anion).", "We then present joint generative and discriminative inference models for this new resource, providing novel empirical insights on how logical negations and commonsense contradictions reshape the commonsense implications of their original premises.", "Humans reason about underlying causes and effects of events described in text.", "For example, in Figure 1, the event X wears a mask is associated with many causal inferences such as X is seen as responsible, or Others get protected.", "[Figure 1: Commonsense inferences for the event X wears a mask, its logical negation (X doesn't wear a mask) and commonsense contradiction events (X takes his mask off, X loses his mask), and their associated inferences.]", "Hypothesizing and reasoning about commonsense inferences is used for understanding complex situations encountered in everyday life (Sap et al., 2019; Bisk et al., 2020; Bhagavatula et al., 2020; Sakaguchi et al., 2020).", "This ability eludes AI systems, and has motivated the design of a wealth of commonsense knowledge resources, such as Cyc (Lenat, 1995), ConceptNet (Speer et al., 2017), and ATOMIC (Sap et al., 2020; Hwang et al., 2020), to provide structured reasoning capabilities to AI systems (Lin et al., 2019; Feng et al., 2020).", "However, reasoning about negated observations remains a challenge (Hossain et al., 2020).", "While negation is often considered a poorer form of meaning than affirmation (Ackrill, 1963; Horn and Wansing, 2020), negated statements can still imply expressive commonsense inferences; following Horn and Wansing (2020), we classify declarative expressions as affirmations or negations/contradictions based on whether they affirm or deny an action or object.", "In Figure 1, the negated event X doesn't wear a mask is connected to rich commonsense inferences, despite describing the absence of action.", "However, negated observations are rarely found in commonsense knowledge resources.", "For example, negated examples make up only 3% of examples in the ConceptNet knowledge graph (Li et al., 2016).", "This scarcity poses downstream issues for 
systems that must understand negated situations.", "Commonsense knowledge models (Bosselut et al., 2019; Hwang et al., 2020) trained on resources of largely affirmative instances struggle particularly with negation examples.", "Their ability to hypothesize inferences for negated events is 35% lower than for affirmative events (Section 4.2).", "Furthermore, since negated statements are asymmetrically mentioned in text compared to affirmative statements (Jowett et al., 1892; Horn and Wansing, 2020), large-scale pretraining does not implicitly learn negation scoping (Kim et al., 2019).", "As a result, when presented with negated concepts, pretrained neural language models (PLMs) often exhibit the same associations as affirmative statements (Kassner et al., 2020).", "Motivated by these observations, our work focuses on improving the ability of knowledge models to make commonsense inferences about events that convey denial, rejection or contradiction of actions.", "We define our contributions as follows.", "First, we crowdsource a new large scale resource, Array of commonseNse Inferences for Oppositions and Negations (ANION), which contains inferences for different types of negated events.", "This new resource can be used to train knowledge models on commonsense inferences associated with the absence of actions.", "Second, we propose a new class of negation discriminators that can be applied to generated commonsense inferences.", "These discriminators partition inferences based on logical consistency, thereby mitigating the effects of common affirmative associations that violate negation constraints.", "Discriminators are trained using contrastive samples from paired affirmative and negated events in ANION.", "Finally, we conduct an empirical study of both of these techniques and show that using training- and discriminator-based approaches for modeling negation cuts the performance difference between affirmative and negated events by 73%-85% depending on the negation variety.", "Negation in Language Theories of language dating back to Aristotle divide statements into affirmation and negation, which respectively affirm or deny observations about an event (Ackrill, 1963).", "Despite this seeming simplicity, natural language often expresses negation in complex and subtle ways, using diverse syntactic, semantic and pragmatic formulations (Horn and Wansing, 2020).", "For example, syntactically, different negation determiners ( i.e. 
, negation cues) such as 'no', 'few' and 'only' result in distinct explicit and implicit negative perceptions (Xiang et al., 2016).", "Despite their diversity, however, negated language expressions are much less likely to appear in text than affirmative statements (Reitan et al., 2015).", "Consequently, PLMs, which rely on large-scale textual corpora as training data, are prone to decreased performance when confronted with negated constructions.", "In machine translation, for example, the presence of negation may heavily affect the quality of produced translations (Fancellu and Webber, 2015; Hossain et al., 2020).", "In factual knowledge understanding tasks, PLMs memorize positive and negative sentences seen during training, but generalize more poorly to unseen negated instances (Kassner and Schütze, 2020).", "Negation in Commonsense Reasoning Understanding negation and oppositional expressions is critical for reasoning about commonsense knowledge, particularly in counterfactual scenarios (Qin et al., 2019).", "However, negation is rarely explicitly modeled in NLP studies on commonsense reasoning.", "As a result, in many NLP tasks, these models experience a performance drop when presented with examples exhibiting negated characteristics.", "As a case study, the ATOMIC (Sap et al., 2020) knowledge graph encodes social commonsense knowledge about event pre-conditions, event post-conditions, and static attributes in the form of natural language if-then rules.", "However, despite the fact that ATOMIC provides a rich set of seed events, it comprises an unbalanced set of affirmative events (97.9%) and negated events (2.1%).", "As a result, when systems link to ATOMIC to retrieve relevant social commonsense inferences, they are likely to recover inferences of affirmative events even when searching for negated instances.", "Furthermore, knowledge models that use this resource (e.g. COMET; Bosselut et al., 2019) are unlikely to learn implicit differences between inferences of affirmative and negated events.", "When given negated events, these models often produce associations of counterpart affirmative events.", "[Table 1: example negation cues by type: affixes (un-, ir-, non-, il-, im-, -less), single-word cues, multi-word cues, and negative verbs, with example sentences.]", "For example, for 
the negated event.", "To model the defeasibility of commonsense reasoning (Pratt, 1994; Rudinger et al., 2020), modeling both common and contrastive inferences of negated forms is necessary.", "To provide a rich resource of commonsense inferences for opposition and negation events, we design ANION .", "Using the same schema as the ATOMIC knowledge graph (Sap et al., 2020), we initialize 22,483 negated forms paired to original ATOMIC events and crowdsource 627,042 new inferences for these negated events.", "Consistent with ATOMIC , ANION is constructed using English formulations of events and inferences.", "We briefly recap ATOMIC and describe the construction of ANION below.", "ATOMIC Background The ATOMIC knowledge graph contains 24K base events ( e.g. , X plays the piano) with 877K accompanying social commonsense inferences ( e.g. , Before, X needs to buy a piano.) along nine dimensions ( e.g. , xNeed ).", "The full description of ATOMIC relation types can be found in Table 12 in the Appendix.", "Our knowledge construction pipeline consists of two steps.", "First, we collect negated and contradictive events by deriving oppositions of events in ATOMIC .", "Inspired by the distinction made between negation contributed by semantic assertion (explicit negation) or non-asserted content (implicit negation) (Xiang et al., 2016), we define three varieties of negated events: logical negations, semi-logical negations, and commonsense contradictions, which we describe in detail below.", "Logical and semilogical negations were heuristically formulated from ATOMIC events.", "Commonsense contradiction events were crowdsourced from Amazon Mechanical Turk (MTurk).", "Negated events in ANION are assigned to the same data split as the corresponding affirmative event from which they are derived ( e.g. , negated events for ATOMIC training set events are found in the ANION training set).", "Once a list of negated events is compiled, we crowdsource inferences of these new events on MTurk using similar annotation templates as Sap et al. (2020).", "We design qualifying tasks to filter out unreliable workers and screen their answers manually for quality control purposes.", "Logical Negation We define logical negation events as events with the negation cue not added to their original formulation ( e.g. , X does not play the piano).", "However, different positions of the not modifier in a clause can result in different negation scopes , which can alter the semantics of the event (Councill et al., 2010).", "To be consistent, we systematically insert not after the subject of the event clause.", "If necessary, we change verb forms and add auxiliary words ( e.g. , do, does, did, is, was, can, could, would, should, may, might).", "For quality control, we have human workers validate each logically negated event form and exclude events that annotators identify as uninterpretable or awkwardly worded.", "For each created event, we then collect the same nine dimensions of inferences as defined in ATOMIC .", "Consequently, we collected 8,285 logically negated events with 225K corresponding inferences (as shown in Table 2).", "Appendix A.1 provides further details of the compilation of logical negation events.", "Semi-logical Negation We define semi-logical negation using explicit cues other than not .", "We categorize these negation cues (words or phrases) into four subtypes: affixes ( e.g. , legal/illegal), single-word cues ( e.g. , never), multi-word cues ( e.g. , no longer), and negative verbs ( e.g. 
, refuse).", "See Table 1 for examples.", "We create semi-logical negation events by heuristically adding these cues to different positions of ATOMIC events.", "Similar to logically-negated events, we avoid grammatically incorrect or semantically awkward events by removing auto-generated instances of low quality.", "The final set of data includes 5,019 semilogical negation events.", "We then crowdsource a total of 138K inferences for these new events.", "Appendix A.1 provides further details of the compilation of semi-logical negation events.", "Commonsense Contradiction We formulate commonsense contradiction as contradictory statements without negation cues.", "Commonsense contradiction events are not identifiable as negations on their own, but demonstrate reversed semantic or pragmatic meaning when paired with their affirmative counterparts ( e.g. , X eats a hamburger vs. X eats a salad).", "To obtain commonsense contradictions, we crowdsource two oppositional events for each ATOMIC event, excluding events with blank placeholders representing generic objects, resulting in 40K new commonsense contradiction events.", "For 9,179 of these events, we crowdsource an additional 262K commonsense inferences.", "Appendix A.1 provides further details of the crowdsourcing of commonsense contradiction events.", "ANION can be used as training data for commonsense models to make inferences about negated events.", "Here, we recap COMET (Bosselut et al., 2019), a commonsense knowledge model, and evaluate how training knowledge models on ANION affects their ability to hypothesize commonsense knowledge for negated and oppositional events.", "Commonsense transformers (COMET) are generative knowledge models that learn to hypothesize commonsense inferences by training on examples from a knowledge graph.", "Specifically, COMET receives knowledge tuples in { h, r, t } form during training, where h is a head entity, r is a relation type, and t is a tail entity.", "The model is trained to maximize the conditional loglikelihood of predicting the tokens of the tail entity t given the tokens of the head entity h and relation r : LG = (cid:88) log P ( t | h, r ) (1) In ATOMIC and ANION , h corresponds to events, such as X has a nightmare, t corresponds to commonsense inferences about those events, such as X wakes up, and r corresponds to commonsense inference types, such as As a result, X does....", "Following Bosselut et al. (2019) and Sap et al. (2020), for each event and relation type in ATOMIC , 10 candidate inferences are decoded from COMET using beam search with b =10.", "As oppositional instances remain challenging to knowledge models such as COMET, we evaluate how ANION can be used to augment the type of examples seen by COMET during training.", "Evaluation Metrics Following Bosselut et al. (2019), we evaluate the quality of generated inferences using BLEU-2 (Papineni et al., 2002) as an automatic evaluation.", "We also compute the perplexity of models on their reference generations.", "For the human evaluation, we employ human judges from MTurk to identify whether generated commonsense inferences are plausible.", "We randomly sample 100 events from the original ATOMIC test set along with their negated counterparts from ANION .", "For each event, we present every decoded inference to five crowdworkers and ask them to identify whether the inference is plausible given the event.", "For each model trained on a different combination of ATOMIC and ANION ( i.e. 
, ANION-L, ANION-S, ANION-C), we evaluate the same events for comparison.", "We calculate Precision @ 10 (P@10) across these human ratings, i.e. , the average number of correct options per event-relation prompt.", "Specifically, we average the results from 45K ratings to compute the final human score (100 events 9 relations 10 options 5 annotators).", "The pairwise agreement score of human evaluation is 63.6, which is on par with other similar commonsense reasoning annotation tasks (Rashkin et al., 2016).", "Does negated event training improve commonsense inference for negated situations?", "We train a COMET model on the events from ATOMIC ( i.e. , COMET-ATOMIC ), and another on the examples from both ATOMIC and ANION ( i.e. , COMET-FULL ).", "The combined dataset is shuffled so that the original and negated examples are uniformly mixed during training.", "We report our comparison of these two models in Table 4.", "The performance of the original COMET model trained only on the ATOMIC knowledge graph drops significantly across all types of oppositional instances.", "Most surprisingly, a drop in performance is also observed on commonsense contradictions (ANION-C), which have no explicit negation cues.", "However, commonsense contradiction events can often be richer in content (see Table 3), making them more challenging for knowledge models.", "Meanwhile training on all negated examples in the ANION knowledge graph produces significant improvements across all negation categories (ANION -{L,S,C}), though we do observe a slight drop in human ratings on the examples from the original ATOMIC test set.", "Does negated event training deteriorate commonsense inference of affirmative situations?", "We note in Table 4 that training on ATOMIC + ANION hurts inference performance on the original ATOMIC evaluation set.", "To analyze why COMET-FULL does not improve on this set of examples, we perform a case study on inferences generated by COMET-ATOMIC and COMET-FULL under the same event and relation prompt, and note two qualitative patterns.", "First, we observe that COMET-FULL tends to generate inferences that are less generic, but that may require additional implicit context.", "For example, for the event X is really sad and the relation xEffect ( i.e. 
, the effect of the event on X), COMET-ATOMIC generates inferences such as cries, gets depressed and takes medication.", "Conversely, COMET-FULL generates context-specific inferences such as thinks about the past and thinks about what they did, which, while plausible in some context, may be less straightforward when evaluated broadly (not all feelings of sadness lead to reflection on the past or one's own actions).", "Second, we find an overall improvement for certain compositional events in ATOMIC that contain conjunction words: and or but.", "On these examples, COMET-FULL outperforms COMET-ATOMIC with 12.41 and 12.22 BLEU-2 scores respectively.", "For example, for the event X is hot and humid and the relation xEffect , COMET-ATOMIC 's generation includes correct inferences, such as to take a shower, to cool down, to drink some water, to go outside, and incorrect inferences, such as to turn on the heat and to drink a hot tea.", "COMET-FULL generates all of COMET-ATOMIC 's correct inferences, but none of the incorrect inferences, demonstrating that training COMET jointly on ATOMIC and ANION can help avoid incorrect inferences involving commonsense mismatch in more compositional situations.", "In summary, the ability to generate richer, contextual inferences for COMET-FULL is beneficial when handling complex events, but may not be necessary for many of the simple events in ATOMIC , and may backfire when subtler inferences are made.", "Which variety of negated events are most crucial to include in training sets?", "As ablations, we train additional models using different subsets of ANION : logical negations (ATOMIC + ANION-L), semi-logical negations (ATOMIC + ANION-S), and commonsense contradictions (ATOMIC + ANION-C).", "These ablations evaluate whether knowledge models can adapt to certain types of negation more efficiently with additional data.", "on the evaluation set related to that negation type.", "Interestingly, though, training on certain types of negation examples can also yield benefits downstream on other negation types.", "For example, training on commonsense contradictions (ANION-C) provides a clear benefit when evaluating on semi-logically negated events (ANION-S) as opposed to merely training on ATOMIC .", "Notably, the knowledge model trained with logically negated examples (ATOMIC + ANION-L) outperforms the model trained only on ATOMIC on all test sets.", "While training on examples of negated events helps knowledge models generate commonsense inferences for these event types, there is still a large gap compared to their performance on affirmative events.", "To address this discrepancy, we introduce a discriminator-based approach for distinguishing inconsistent inferences of negated events.", "Our inference discriminator learns to identify plausible and invalid inferences of events by learning from contrastive samples from ATOMIC and ANION .", "We fine-tune the RoBERTa-base model (Liu et al., 2019) as a binary classifier to identify whether a given knowledge tuple { h, r, t } is logically valid.", "The model is trained on paired original and negated events as described below.", "Such training examples inject implicit commonsense nuances that differ between oppositional events to teach the discriminator to identify logical pitfalls.", "Training details for discriminators can be found in Appendix A.3.", "Data The paired events used to train the negation discriminator are automatically constructed from the ATOMIC and ANION knowledge graphs.", "Positive examples can be 
constructed by sampling tuples from each knowledge graph.", "To construct negative training samples, we introduce the concept of common and contrast sets among inferences of events and their oppositions.", "Common and contrast sets distinguish how commonsense inferences are not necessarily negated in the same manner as their corresponding events.", "While certain inferences of events are also in opposition to a negated event, some may be common.", "For the events X eats a cheeseburger and X eats a salad, an inference such as X is hungry might be common to both events while inferences such as X is unhealthy or X is healthy would be viewed as contrastive.", "Specifically, we assume two head events in ATOMIC and ANION , and their respective set of tail inferences regarding a common relation type.", "We define the common set of these inferences as the intersection of the two sets of tail inferences connected to each head event by applying the exact match of string forms.", "The contrast set is formed by distinct tail inferences connected to the two head events.", "Logically valid ( i.e. , positive) training examples consist of knowledge tuples from ATOMIC and ANION .", "Logically invalid ( i.e. , negative) training examples are formed by swapping the set of contrast set inferences between paired original and negated events.", "3 To balance the training set, we sample the same number of positive and negative tuples for original and negation events.", "Statistics of the resulting training sets are in Table 6.", "Using different portions of ANION for training yields four unique discriminators ( i.e. , L , S , C and", "3 We note that annotations in ATOMIC and ANION are finite ( i.e. , not covering the full space of possible commonsense inferences about events).", "As a result, it is possible that in a more expansive annotation, elements of the contrast sets would in fact be part of the common set of an event and its negation.", "For the purpose of this work, however, contrast sets were an efficient way of acquiring high-quality semantically negative examples for training discriminators.", "LSC ) that we apply to commonsense inferences generated by COMET.", "The discriminators classify each option as either logically valid or invalid , partitioning the candidates into two sets, which we evaluate with human judgements.", "As a baseline, we also record the precision of not using a discriminator, which assumes all generated inferences are valid candidates ( i.e. , the all set).", "Metrics We evaluate and compare the quality of the all , valid and invalid sets using BLEU-2 and the same human evaluation as in 4.", "The all set contains the full set of 10 candidates, while the valid and invalid sets have varying number of elements depending on how discriminators classify them, summing to 10.", "To compute statistical significance between valid and all sets, we use a permutation test with 100K permutations.", "Details are provided in Appendix A.4.", "Do discriminators effectively distinguish inconsistent inferences?", "The results in Table 7 demonstrate that the discriminator trained on all subsets of ANION ( LSC ) can select subsets of inferences ( i.e. 
, Event + Rel Generation V PX does not skate around xAttr athletic (cid:55) (cid:55) careless (cid:55) (cid:55) lazy (cid:51) (cid:51) uncoordinated (cid:51) (cid:51) unskilled (cid:51) (cid:51) X does not sit behind Y xIntent to be alone (cid:51) (cid:51) to be left alone (cid:51) (cid:51) to avoid Y (cid:51) (cid:51) to sit (cid:55) (cid:55) to wait (cid:51) (cid:55) X does not look angry xNeed to calm down (cid:55) (cid:51) to watch a movie (cid:51) (cid:55) to have been provoked (cid:55) (cid:55) to not be angry (cid:51) (cid:51) to be calm (cid:51) (cid:51) X refuses to hear a scary noise xWant to run away (cid:55) (cid:55) to go to sleep (cid:51) (cid:51) to be safe (cid:51) (cid:51) to keep quiet (cid:51) (cid:51) to avoid the noise (cid:51) (cid:51) X never brings Y into conflicts oWant to avoid X (cid:55) (cid:55) to be left alone (cid:55) (cid:51) to thank X (cid:51) (cid:51) to fight back (cid:55) (cid:55) to avoid conflict (cid:55) (cid:51) X scarcely gets sunburned xReact burned (cid:55) (cid:55) hurt (cid:55) (cid:55) sick (cid:55) (cid:55) sad (cid:55) (cid:55) satisfied (cid:51) (cid:51) X under no circumstancesforgetsY'swallet oReact upset (cid:55) (cid:55) sad (cid:55) (cid:55) angry (cid:55) (cid:55) thankful (cid:51) (cid:51) grateful (cid:51) (cid:51) X has trouble with advertising X's business xEffect loses money (cid:51) (cid:51) loses clients (cid:51) (cid:51) gets fired (cid:51) (cid:51) gets sued (cid:55) (cid:55) cries (cid:51) (cid:51) X puts Y out of mind oEffect has a better day (cid:55) (cid:55) becomes sad (cid:51) (cid:51) cries (cid:51) (cid:51) is grateful towards X (cid:55) (cid:55) feels better (cid:55) (cid:55) Table 8: Inferences of randomly selected ANION events by COMET-ATOMIC .", "the valid set) that are more logically consistent with their seed event.", "This observation holds across all evaluation subsets of ANION , as well as the original ATOMIC evaluation set.", "Table 8 shows examples of valid and invalid candidates for negated and contradicted events from ANION as specified by the LSC discriminator.", "The discriminator is notably Eval Disc L S C LSCATOMIC all 55.69 55.93 56.94 58.30 valid 55.65 56.18 57.26 59.07 %iprv -0.07 0.44 0.57 1.32 ANION-L all 39.46 37.85 36.43 39.45 valid **46.39 **41.93 37.54 **45.59 %iprv 17.55 10.78 3.03 15.57 ANION-S all 37.13 39.29 37.72 38.55 valid 37.48 **44.58 39.03 **44.93 %iprv 0.96 13.47 3.45 16.56 ANION-C all 46.92 47.32 48.26 48.81 valid 46.83 47.68 48.79 *51.45 %iprv -0.20 0.75 1.09 5.40 Table 9: P@{# valid} scores of the all and valid sets determined by the L , S , C and LSC discriminators.", "good at identifying invalid inferences wrongly associated to corresponding affirmative events ( e.g. 
"However, this analysis leaves open the possibility that we are generating too many inferences for each event, but that the decoder could rank correct inferences higher among the full set of generated candidates.", "To evaluate this possibility, we count the number of elements in the valid set for each example and keep only the same number of top-scoring elements from the all set (scored using generation perplexity).", "In Table 9, we see that the average precision score of the pruned all sets (P@{# valid}) still underperforms the precision of their corresponding valid sets.", "Which negation types are most effective to provide a discriminator for?", "To examine the generalization effects of each negation type, we also train discriminators on a single negation subset of ANION examples (i.e., L, S, C) and compare the P@{# valid} scores of the all and valid sets.", "Results in Table 9 indicate that each discriminator is best at identifying valid inferences for the types of events on which it was trained.", "The L, S, and C discriminators all achieve improvements when partitioning events similar to their training.", "However, the LSC discriminator trained on all negation forms shows the largest valid set improvement across all discriminators on ATOMIC, ANION-S, and ANION-C.", "On ANION-L, the LSC discriminator still yields a significantly improved valid set.", "Are learning-based and discriminator-based approaches complementary?", "We apply our discriminators to the generations of the COMET model trained on ANION.", "In Table 10, we see that the LSC discriminator, when applied to generations of COMET trained on ANION, achieves significant improvements over all evaluation sets, including the original events.", "Table 10: P@{# valid} scores of the all and valid sets determined by the L, S, C and LSC discriminators, for generations of COMET trained on ANION (columns: L, S, C, LSC).
ATOMIC: all 54.16 54.49 55.03 55.68 | valid 54.20 54.64 55.71 **57.58 | %iprv 0.08 0.28 1.23 3.41
ANION-L: all 46.54 46.26 46.15 46.39 | valid **50.71 **48.36 46.16 **49.85 | %iprv 8.98 4.54 0.03 7.45
ANION-S: all 46.90 47.73 47.47 47.53 | valid 47.14 **50.42 48.20 **50.62 | %iprv 0.51 5.65 1.55 6.50
ANION-C: all 50.80 51.29 51.28 51.83 | valid 50.94 51.52 52.65 *53.91 | %iprv 0.28 0.45 2.67 4.02", "The full evaluation of the P@{# valid} and P@3 scores of applying different discriminators to generations of COMET trained on different data over all evaluation sets is shown in Tables 13 and 14 in Appendix A.",
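To make the pruning comparison concrete, here is a small, illustrative Python sketch of P@{# valid} under stated assumptions (per-candidate generation perplexity, discriminator validity flags, and human correctness labels); the names are ours and this is not the authors' evaluation code.

```python
# Hedged sketch of the P@{# valid} comparison: prune the "all" set to the
# number of discriminator-valid candidates, then compare precisions.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Candidate:
    text: str
    perplexity: float   # generation perplexity from COMET
    is_valid: bool      # discriminator decision
    is_correct: bool    # human evaluation label

def precision(cands: List[Candidate]) -> float:
    return sum(c.is_correct for c in cands) / max(len(cands), 1)

def p_at_num_valid(example: List[Candidate]) -> Tuple[float, float]:
    valid = [c for c in example if c.is_valid]
    k = len(valid)
    # Keep the k lowest-perplexity candidates so both sets have equal size.
    pruned_all = sorted(example, key=lambda c: c.perplexity)[:k]
    return precision(pruned_all), precision(valid)
```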
Can discriminators be used to more aggressively generate inferences?", "While applying discriminators to generated inferences yields a valid subset with higher accuracy, we are left with fewer correct inferences in total.", "Thus, we investigate the effectiveness of using discriminators to expand the number of inferences generated.", "We decode inferences from COMET with beam size 25, and then apply the discriminator to this larger candidate set.", "Table 11 shows that for logical negation, the valid set of beam 25 has higher accuracy and more correct options than the all set of beam 10.", "Thus, when we have a larger and potentially noisier set of candidates, applying the negation discriminator yields a set of options with higher quality than using all the candidates from a smaller set of initial generations.", "We present the first comprehensive study on commonsense implications of negations and contradictions.", "To expand commonsense resources for the challenge of negation modeling, we introduce ANION, a large-scale commonsense knowledge graph for negated and contradicted events.", "We use ANION to train commonsense knowledge models and demonstrate that it effectively enriches machine commonsense inference capabilities around negation.", "Lastly, we propose a negation discriminator capable of identifying logical flaws in commonsense inferences.", "By combining the model trained on ANION with the negation discriminator, we achieve a further performance boost.", "We select English as the base language of ANION so that our resource may be directly linked with the original ATOMIC knowledge graph.", "We acknowledge, however, that resources in English are more likely to reflect the mindsets and behaviors of English speakers.", "Furthermore, and in our case specifically, our annotators were primarily from the US.", "Consequently, this language choice biases the content of the knowledge graph toward North American perspectives, which affects what models trained on these resources would learn about social norms (Acharya et al., 2021).", "Future work may include other languages and cultures to make the ANION resource more culturally and ideologically inclusive.", "We recruit crowdworkers from MTurk who are located within the US and have HIT approval rates higher than 98%.", "To ensure high-quality task completions, we post pilot batches and manually examine tens of thousands of responses to identify users who provide high-quality annotations.", "We select 834 qualified users for the formal data collection and human evaluation tasks.", "Since the entire study spans multiple months, we regularly sample responses to re-examine their quality during the formal study, and remove HITs from crowdworkers whose response quality decreases over time.", "We are particularly cautious about the human evaluation tasks, so even with qualified users, we still comprehensively examine tens of thousands of human evaluation tasks by grouping HITs per user and looking at their responses together to identify potential spamming behaviors and inconsistencies.", "For the data collection and human evaluation tasks, we aimed to compensate crowdworkers at an average of $15 per hour.", "To ensure fair payment, we first post a pilot task to estimate the average time cost of a specific task, and pay users at a high rate in this round to avoid underpayment during the pilot study.", "We then calculate a new payment rate from the pilot task such that approximately 75% of the HITs would have been paid more than $15
per hour at the adjusted rate in the pilot round.", "We then adopt this new rate for the formal study.", "We repeat the above procedure of determining payment periodically during the study to ensure the crowdworkers are consistently well-paid.", "The authors thank Elisa Kreiss for helpful discussions.", "We also thank the anonymous reviewers and meta-reviewers for their helpful feedback.", "This research was supported in part by DARPA under the MCS program through NIWC Pacific (N66001-19-2-4031), and the Allen Institute for AI (AI2)." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "method", "method", "method", "result", "abstain", "method", "objective", "abstain", "method", "method", "abstain", "objective", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "result", "abstain", "method", "abstain", "method", "abstain", "method", "objective", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Training models to map natural language instructions to programs, given target world supervision only, requires searching for good programs at training time.", "Search is commonly done using beam search in the space of partial programs or program trees, but as the length of the instructions grows finding a good program becomes difficult.", "In this work, we propose a search algorithm that uses the target world state, known at training time, to train a critic network that predicts the expected reward of every search state.", "We then score search states on the beam by interpolating their expected reward with the likelihood of programs represented by the search state.", "Moreover, we search not in the space of programs but in a more compressed state of program executions, augmented with recent entities and actions.", "On the SCONE dataset, we show that our algorithm dramatically improves performance on all three domains compared to standard beam search and other baselines.", "Training models that can understand natural language instructions and execute them in a real-world environment is of paramount importance for communicating with virtual assistants and robots, and therefore has attracted considerable attention (Branavan et al., 2009; Vogel and Jurafsky, 2010; Chen and Mooney, 2011).", "A prominent approach is to cast the problem as semantic parsing, where instructions are mapped to a high-level programming language (Artzi and Zettlemoyer, 2013; Long et al., 2016; Guu et al., 2017).", "Because annotating programs at scale is impractical, it is desirable to train a model from instructions, an initial world state, and a target world state only, letting the program itself be a latent variable.", "Learning from such weak supervision results in a difficult search problem at training time.", "The model must search for a program that when executed leads to the correct target state.", "Early work employed lexicons and grammars to constrain the search space (Clarke et al., 2010; Liang et al., 2011; Krishnamurthy and Mitchell, 2012; Berant et al., 2013; Artzi and Zettlemoyer, 2013), but recent success of sequence-to-sequence models (Sutskever et al., 2014) shifted most of the burden to learning.", "Search is often performed simply using beam search, where program tokens are emitted from left-to-right, or program trees are generated top-down (Krishnamurthy et al., 2017; Yin and Neubig, 2017; Cheng et al., 2017; Ra-binovich et al., 2017) or bottom-up (Liang et al., 2017; Guu et al., 2017; Goldman et al., 2018).", "Nevertheless, when instructions are long and complex and reward is sparse, the model may never find enough correct programs, and training will fail.", "In this paper, we propose a novel search algorithm for mapping a sequence of natural language instructions to a program, which extends the standard beam-search in two ways.", "First, we capitalize on the target world state being available at training time and train a critic network that given the language instructions, current world state, and target world state estimates the expected future reward for each search state.", "In contrast to traditional beam search where states representing partial programs are scored based on their likelihood only, we also consider expected future reward, leading to a more targeted search at training time.", "Second, rather than search in the space of programs, we search in a more compressed execution space, where each state is defined by a partial program's execution result.", "We evaluated our method on 
"We evaluated our method on the SCONE dataset, which includes three different domains where long sequences of 5 instructions are mapped to programs.", "We show that while standard beam search gets stuck in local optima and is unable to discover good programs for many examples, our model is able to bootstrap, improving final performance by 20 points on average.", "We also perform extensive analysis and show that both value-based search and searching in execution space contribute to the final performance.", "Our code and data are available at http://gitlab.com/tau-nlp/vbsix-lang2program.", "Mapping instructions to programs invariably involves a context, such as a database or a robotic environment, in which the program (or logical form) is executed.", "The goal is to train a model given a training set $\{(x^{(j)} = (c^{(j)}, \mathbf{u}^{(j)}), y^{(j)})\}_{j=1}^{N}$, where $c$ is the context, $\mathbf{u}$ is a sequence of natural language instructions, and $y$ is the target state of the environment after following the instructions, which we refer to as the denotation.", "The model is trained to map the instructions $\mathbf{u}$ to a program $z$ such that executing $z$ in the context $c$ results in the denotation $y$, which we denote by $[\![z]\!]_c = y$.", "Thus, the program $z$ is a latent variable we must search for at both training and test time.", "When the sequence of instructions is long, search becomes hard, particularly in the early stages of training.", "Recent work tackled the training problem using variants of reinforcement learning (RL) (Suhr and Artzi, 2018; Liang et al., 2018) or maximum marginal likelihood (MML) (Guu et al., 2017; Goldman et al., 2018).", "We now briefly describe MML training, on which we base our training procedure, and which outperformed RL in past work under comparable conditions (Guu et al., 2017).", "We denote by $\pi_\theta(\cdot)$ a model, parameterized by $\theta$, that generates the program $z$ token by token from left to right.", "The model receives the context $c$, instructions $\mathbf{u}$ and previously predicted tokens $z_{1...t-1}$, and returns a distribution over the next program token $z_t$.", "The probability of a program prefix is defined to be: $p_\theta(z_{1...t} \mid \mathbf{u}, c) = \prod_{i=1}^{t} \pi_\theta(z_i \mid \mathbf{u}, c, z_{1...i-1})$.", "The model is trained to maximize the MML objective: $J_{MML} = \log \prod_{(c, \mathbf{u}, y)} p_\theta(y \mid c, \mathbf{u}) = \sum_{(c, \mathbf{u}, y)} \log \big( \sum_{z} p_\theta(z \mid \mathbf{u}, c)\, R(z) \big)$, where $R(z) = 1$ if $[\![z]\!]_c = y$, and 0 otherwise (for brevity, we omit $c$ and $y$ from $R(\cdot)$).", "Figure 1: Example from the SCENE domain in SCONE (3 utterances), where people with different hats and shirts are located in different positions: 1. The person in an orange hat moves to the left of the person in a red hat. 2. He then disappears. 3. Then a person in orange appears on the far right.", "Typically, the MML objective is optimized with stochastic gradient ascent, where the gradient for an example $(c, \mathbf{u}, y)$ is: $\nabla_\theta J_{MML} = \sum_{z} q(z) R(z) \nabla_\theta \log p_\theta(z \mid c, \mathbf{u})$, with $q(z) := \frac{R(z)\, p_\theta(z \mid c, \mathbf{u})}{\sum_{z'} R(z')\, p_\theta(z' \mid c, \mathbf{u})}$.", "The search problem arises because it is impossible to enumerate the set of all programs, and thus the sum over programs is approximated by a small set of high-probability programs, which have high weights $q(\cdot)$ that dominate the gradient.", "Search is commonly done with beam search, an iterative algorithm that builds an approximation of the highest-probability programs according to the model.", "At each time step $t$, the algorithm constructs a beam $B_t$ of at most $K$ program prefixes of length $t$.", "Given a beam $B_{t-1}$, $B_t$ is constructed by selecting the $K$ most likely continuations of prefixes in $B_{t-1}$, according to $p_\theta(z_{1..t} \mid \cdot)$.", "The algorithm runs $L$ iterations, and returns all complete programs discovered.", "In this paper, our focus is the search problem that arises at training time when training from denotations, i.e., finding programs that execute to the right denotation.", "Thus, we would like to focus on scenarios where programs are long and standard beam search fails.", "We next describe the SCONE dataset, which provides such an opportunity.", "The SCONE dataset.", "Long et al. (2016) presented the SCONE dataset, where a sequence of instructions needs to be mapped to a program consisting of a sequence of executable commands.", "The dataset has three domains, where each domain includes several objects (such as people or beakers), each with different properties (such as shirt color or chemical color).", "SCONE provides a good environment for stress-testing search algorithms because a long sequence of instructions needs to be mapped to a program.", "Figure 1 shows an example from the SCENE domain.", "Formally, the context in SCONE is a world specified by a list of positions, where each position may contain an object with certain properties.", "A formal language is defined to interact with the world.", "The formal language contains constants (e.g., numbers and colors), functions that allow querying the world and referring to objects and intermediate computations, and actions, which are functions that modify the world state.", "Each command is composed of a single action and several arguments constructed recursively from constants and functions.", "E.g., the command MOVE(HASHAT(YELLOW), LEFTOF(HASSHIRT(BLUE))) contains the action MOVE, which moves a person to a specified position.", "The person is computed by HASHAT(YELLOW), which queries the world for the position of the person with a yellow hat, and the target position is computed by LEFTOF(HASSHIRT(BLUE)).", "We refer to Guu et al. (2017) for a full description of the language.", "Our goal is to train a model that, given an initial world $w_0$ and a sequence of natural language utterances $\mathbf{u} = (u_1, \ldots, u_M)$, will map each utterance $u_i$ to a command $z_i$ such that applying the program $z = (z_1, \ldots, z_M)$ to $w_0$ results in the target world, i.e., $[\![z]\!]_{w_0} = w_M = y$.",
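Before formalizing the search problem, here is a minimal PyTorch-style sketch of the MML update described above, marginalizing over the correct programs found by beam search; this is our own illustrative code under the stated assumptions, not the paper's implementation.

```python
import torch

def mml_loss(log_probs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """Approximate MML loss for one example.

    log_probs: (B,) log p_theta(z | c, u) for each program z on the beam.
    rewards:   (B,) binary R(z); assumes at least one correct program.
    """
    # -log sum_z p_theta(z) R(z), computed stably in log space; masking
    # drops zero-reward programs from the marginal.
    masked = log_probs.masked_fill(rewards == 0, float("-inf"))
    return -torch.logsumexp(masked, dim=0)
```

Backpropagating through this negative log-marginal reproduces the $q(z)$-weighted gradient above, since the softmax over the masked log-probabilities equals $q(z)$.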
"To present our algorithm, we first formulate the problem as a Markov Decision Process (MDP), which is a tuple $(S, A, R, \delta)$.", "To define the state set $S$, we assume all program prefixes are executable, which can easily be done as we show below.", "The execution result of a prefix $z$ in the context $c$, denoted by $[\![z]\!]^{ex}_c$, contains its denotation and additional information stored in the executor.", "Let $Z_{pre}$ be the set of all valid program prefixes.", "The set of states is defined to be $S = \{(x, [\![z]\!]^{ex}_c) \mid z \in Z_{pre}\}$, i.e., the input paired with all possible execution results.", "The action set $A$ includes all possible program tokens (decoding is constrained to valid program tokens only), and the transition function $\delta: S \times A \to S$ is computed by the executor.", "Last, the reward $R(s, a) = 1$ iff the action $a$ ends the program and leads to a state $\delta(s, a)$ where the denotation is equal to the target $y$.", "The model $\pi_\theta(\cdot)$ is a parameterized policy that provides a distribution over the program vocabulary at each time step.", "SCONE as an MDP.", "We define the partial execution result $[\![z]\!]^{ex}_c$ in SCONE, as described by Guu et al. (2017).", "We assume that SCONE's formal language is written in postfix notation, e.g., the instruction MOVE(HASHAT(YELLOW), LEFTOF(HASSHIRT(BLUE))) is written as YELLOW HASHAT BLUE HASSHIRT LEFTOF MOVE.", "With this notation, a partial program can be executed left-to-right by maintaining a program stack.", "The executor pushes constants (YELLOW) onto the stack, and applies functions (HASHAT) by popping their arguments from the stack and pushing back the computed result.", "Actions (MOVE) are applied by popping arguments from the stack and performing the action in the current world.", "To handle references to previously executed commands, SCONE's formal language includes functions that provide access to actions and arguments in previous commands.", "To this end, the executor maintains an execution history, $h_i = (e_1, \ldots, e_i)$, a list of executed actions and their arguments.", "Thus, the execution result of a program prefix is $[\![z]\!]^{ex}_{w_0} = (w_{i-1}, \text{stack}, h_{i-1})$.", "We adopt the model from Guu et al. (2017) (architecture details in Appendix A): the policy observes the program stack and $u_i$, the current utterance being parsed, and predicts a token.", "When the model predicts an action token that terminates a command, the model moves to the next utterance (until all utterances have been processed).", "The model uses functions to query the world $w_{i-1}$ and history $h_{i-1}$.", "Thus, each MDP state in SCONE is a pair $s = (u_i, [\![z]\!]^{ex}_{w_0})$.", "Figure 2 illustrates a state transition in the SCENE domain.", "Importantly, the state does not store the full program prefix, and many different prefixes may lead to the same state.", "Next, we describe a search algorithm for this MDP.", "Model improvement relies on generating correct programs given a possibly weak model.", "Standard beam search explores the space of all program token sequences up to some fixed length.", "We propose two technical contributions to improve search:", "(a) We simplify the search problem by searching for correct executions rather than correct programs;", "(b) We use the target denotation at training time to better estimate partial program scores in the search space.", "We describe those next.", "Program space can be formalized as a directed tree $T = (V_T, E_T)$, where vertices $V_T$ are program prefixes, and labeled edges $E_T$ represent prefixes' continuations: an edge $e = (z, z')$ labeled with the token $a$ represents a continuation of the prefix $z$ with the token $a$, which yields the prefix $z'$.", "The root of the graph represents the empty sequence.", "Figure 3: A set of commands represented in program space (left) and execution space (right).", "Similarly, execution space is a directed graph $G = (V_G, E_G)$ induced from the MDP described in Section 3. Vertices $V_G$ represent MDP states, which express execution results, and labeled edges $E_G$ represent transitions.", "An edge $(s_1, s_2)$ labeled by token $a$ means that $\delta(s_1, a) = s_2$.", "Since multiple programs have the same execution result, execution space is a compressed representation of program space.", "Figure 3 shows a few commands in both program and execution space.", "Execution space is smaller, and so searching in it is easier.", "Each path in execution space represents a different program prefix, and the path's final state represents its execution result.", "Program search can therefore be reduced to execution search: given an example $(c, \mathbf{u}, y)$ and a model $\pi_\theta$, we can use $\pi_\theta$ to explore in execution space, discover correct terminal states, i.e., states corresponding to correct full programs, and extract paths leading to those states.",
"As the number of paths may be exponential in the size of the graph, we can use beam search to extract the most probable correct programs (according to the model) in the discovered graph.", "Our approach is similar to the DPD algorithm (Pasupat and Liang, 2016), where CKY-style search is performed in denotation space, followed by search in a pruned space of programs.", "However, DPD was used without learning, and the search was not guided by a trained model, which is a major part of our algorithm.", "Our algorithm, VBSIX, is a variant of beam search modified for searching in execution space that scores search states with a value-based network.", "VBSIX is formally defined in Algorithm 1, which we will refer to throughout this section.", "Standard beam search is a breadth-first traversal of the program space tree, where a fixed number of vertices are kept in the beam at every level of the tree.", "The selection of vertices is done by scoring their corresponding prefixes according to $p_\theta(z_{1...t} \mid \mathbf{u}, c)$.", "VBSIX applies the same traversal in execution space (lines 10-21).", "However, since each vertex in the execution space represents an execution result and not a particular prefix, we need to modify the scoring function.", "Let $s$ be a vertex discovered in iteration $t$ of the search.", "We propose two scores for ranking $s$.", "The first is the actor score, the probability of reaching vertex $s$ after $t$ iterations according to the model $\pi_\theta$.", "(We score paths in different iterations independently to avoid bias for shorter paths; an MDP state that appears in multiple iterations will get a different score in each iteration.)", "The second and more novel score is the value-based critic score, an estimate of the state's expected reward.", "The AC-score is the sum of these two scores (lines 23-24).", "The actor score, $p_t(s)$, is the sum of probabilities of all prefixes of length $t$ that reach $s$ (rather than the probability of one prefix as in beam search).", "VBSIX approximates $p_t(s)$ by performing the summation only over paths that reach $s$ via states in the beam $B_{t-1}$, which can be done efficiently with a dynamic programming (DP) chart $P_t[s]$ that keeps actor score approximations in each iteration (line 18).", "This lower-bounds the true $p_t(s)$, since some prefixes of length $t$ that reach $s$ might not have been discovered.", "Contrary to standard beam search, we want to score search states also with a critic score $E_{p_\theta}[R(s)]$, which is the sum of the probabilities of suffixes that lead from $s$ to a correct terminal state: $E_{p_\theta}[R(s)] = \sum_{\tau(s)} p_\theta(\tau(s) \mid s)\, R(\tau(s))$, where $\tau(s)$ ranges over all possible trajectories starting from $s$ and $R(\tau(s))$ is the reward observed when taking the trajectory $\tau(s)$ from $s$.", "Enumerating all trajectories $\tau(s)$ is intractable, and so we approximate $E_{p_\theta}[R(s)]$ with a trained value network $V_\phi(s, y)$, parameterized by $\phi$.", "Importantly, because we are searching at training time, we can condition $V_\phi$ on both the current state $s$ and the target denotation $y$.", "At test time we use $\pi_\theta$ only, which does not need access to $y$.", "Since the value function and DP chart are used for efficient ranking, the asymptotic run-time complexity of VBSIX is the same as standard beam search ($O(K \cdot |A| \cdot L)$).", "The beam search in Line 3, where we extract programs from the constructed execution space graph, can be done with a small beam size, since it operates over a small space of correct programs.", "Thus, its contribution to the algorithm complexity is negligible.",
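The scoring just described can be pictured with a short Python sketch of one search iteration; `policy`, `executor`, and `value_net` are assumed stand-ins (our names, not the paper's code), with `beam` mapping execution states to their DP actor scores $P_t[s]$.

```python
# Hedged sketch of one VBSIX-style iteration: expand states, accumulate
# actor scores in a DP chart, rank by actor + critic, keep top K.
from collections import defaultdict

def vbsix_step(beam, policy, executor, value_net, y, K):
    next_scores = defaultdict(float)               # DP chart P_{t+1}[s']
    for state, p_state in beam.items():
        for token, p_tok in policy(state).items():
            succ = executor(state, token)          # deterministic transition
            next_scores[succ] += p_state * p_tok   # sum over merged prefixes
    # AC-score: actor score plus the critic's expected-reward estimate,
    # which may condition on the target denotation y at training time.
    ranked = sorted(next_scores.items(),
                    key=lambda kv: kv[1] + value_net(kv[0], y),
                    reverse=True)
    return dict(ranked[:K])
```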
"Thus, its contribution to the algorithm complexity is negligible.", "Figure 4 visualizes scoring and pruning with the actor-critic score in iteration t .", "Vertices in B t are discovered by expanding vertices in B t 1 , and each vertex is ranked by the AC-scorer.", "The highlighted vertex score is 0 .", "6 , a sum of the actor score ( 0 . 5 ) and the critic score ( 0 . 1 ).", "The actor score is the sum of its prefixes ( (0 . 05 + 0 . 55) 0 .", "7 + 0 .", "01 0 .", "08 = 0 .", "5 ) and the critic score is a value network estimate for the sum of probabilities of outgoing trajectories reaching correct terminal states ( 0 . 02 + 0 . 08 = 0 . 1 ).", "Only the topK states are kept in the beam ( K = 4 in the figure).", "VBSIX leverages execution space in a number of ways.", "First, since each vertex in execution space compactly represents multiple prefixes, a beam in VBSIX effectively holds more prefixes than in standard beam search.", "Second, running beam-search over a graph rather than a tree is less greedy, as the same vertex can surface back even if it fell out of the beam.", "The value-based approach has several advantages as well: First, evaluating the probability of outgoing trajectories provides look-ahead that is missing from standard beam search.", "Second (and most importantly), V is conditioned on y , which doesn't have access to, which allows finding correct programs with low model probability, that can learn from.", "We note that our two contributions are orthogonal: the critic score can be used in program space, and search in execution space can be done with actor-scores only.", "We train the model and value network V jointly (Algorithm 2).", "is trained using MML over discovered correct programs (Line 4, Algorithm 1).", "The value network is trained as follows: Given a training example ( c, u , y ) , we generate a set of correct programs Z pos with VBSIX.", "The value network needs negative examples, and so for every incorrect terminal state z neg found during search with VBSIX we create a single program leading to z neg .", "We then construct a set of training examples D v , where each example (( s, y ) , l ) labels states encountered while generating programs z Z with the probability mass of correct programs suffixes that extend it, i.e., l = (cid:80) z t p ( z t... | z | ) , where z t ranges over all z Z and t [1 . . . | z | ] Finally, we train V to minimize the log-loss objective: (cid:80) (( s,y ) ,l ) D v l log V ( s,y )+(1 l )(log(1 V ( s,y ))) .", "Similar to actor score estimation, labeling examples for V is affected by beam-search errors: the labels lower bound the true expected reward.", "However, since search is guided by the model, those programs are likely to have low probability.", "Moreover, estimates from V are based on multiple examples, compared to probabilities in the DP Algorithm 2 Actor-Critic Training 1: procedure TRAIN () 2: Initialize and randomly 3: while not converged do 4: ( x := ( c, u ) ,y ) select random example 5: Z pos PROGRAMSEARCH ( c, u ,y, ,V ) 6: Z neg programs leading to incorrect terminal states 7: D v BUILDVALUEEXAMPLES (( Z pos Z neg ) ,c,y ) 8: Update using D v , update using ( x, Z pos ,y ) 9: function BUILDVALUEEXAMPLES ( Z ,c, u ,y ) 10: for z Z do 11: for t [1 ... | z | ] do 12: s (cid:74) z 1 ...t (cid:75) exc 13: L [ s ] L [ s ] + p ( z t... 
"Neural network architecture: We adapt the model proposed by Guu et al. (2017) for SCONE.", "The model receives the current utterance $u_i$ and the program stack, and returns a distribution over the next token.", "Our value network receives the same input, but also the next utterance $u_{i+1}$, the world state $w_i$ and the target world state $y$, and outputs a scalar.", "Appendix A provides a full description.", "We evaluated our method on the three domains of SCONE with the standard accuracy metric, i.e., the proportion of test examples where the predicted program has the correct denotation.", "We trained with VBSIX, and used standard beam search ($K = 32$) at test time for program generation.", "Each test example contains 5 utterances, and similar to prior work we report the model accuracy on all 5 utterances as well as on the first 3 utterances.", "We ran each experiment 6 times with different random seeds and report the average accuracy and standard deviation.", "In contrast to prior work on SCONE (Long et al., 2016; Guu et al., 2017; Suhr and Artzi, 2018), where models were trained on all sequences of 1 or 2 utterances, and thus were exposed during training to all gold intermediate states, we trained from longer sequences, keeping intermediate states latent.", "This leads to a harder search problem that was not addressed previously, but makes our results incomparable to previous results.", "In SCENE and TANGRAM, we used the first 4 and 5 utterances as examples.", "In ALCHEMY, we used the first utterance and 5 utterances.", "Training details: To warm-start the value network, we trained it for a few thousand steps, and only then started re-ranking with its predictions.", "Moreover, we gain efficiency by first returning $K_0$ (=128) states with the actor score, and then re-ranking with the actor-critic score, returning $K$ (=32) states.", "Last, we use the value network only in the last two utterances of every example, since we found it has less effect in earlier utterances where future uncertainty is large.", "We used the Adam optimizer (Kingma and Ba, 2014) and fixed GloVe embeddings (Pennington et al., 2014) for utterance words.", "We compared the following methods (hyper-parameters are in Appendix B): 1. MML: Our main baseline, where search is done with beam search and training with MML.", "We used randomized beam search, which adds $\epsilon$-greedy exploration to beam search; it was proposed by Guu et al. (2017) and performed better.", "2. EXPERT-MML: An alternative way of using the target denotation $y$ at training time, based on imitation learning (Daume et al., 2009; Ross et al., 2011; Berant and Liang, 2015), is to train an expert policy $\pi_{expert}$, which receives $y$ as input in addition to the parsing state, and trains with the MML objective.", "Then, our policy $\pi_\theta$ is trained using programs found by $\pi_{expert}$.", "The intuition is that the expert can use $y$ to find good programs that the policy can train from.", "3. VBSIX: Our proposed training algorithm.", "We also evaluated REINFORCE, where Monte-Carlo sampling is used as the search strategy (Williams, 1992; Sutton et al., 1999).", "We followed the implementation of Guu et al. (2017), who performed variance reduction with a constant baseline and added $\epsilon$-greedy exploration.",
"We found that REINFORCE fails to discover any correct programs to bootstrap from.", "Table 1 reports the test accuracy of VBSIX compared to the baselines.", "First, VBSIX outperforms all baselines in all cases.", "MML is the strongest baseline, but even with an increased beam ($K = 64$), VBSIX ($K = 32$) surpasses it by a large margin (more than 20 points on average).", "On top of the improvement in accuracy, in ALCHEMY and TANGRAM the standard deviation of VBSIX is lower than the other baselines across the 6 random seeds, showing the robustness of our model.", "EXPERT-MML performs worse than MML in all cases.", "We hypothesize that using the denotation $y$ as input to the expert policy $\pi_{expert}$ results in many spurious programs, i.e., programs that are unrelated to the utterance meaning.", "This is because the expert can learn to perform actions that take it to the target world state while ignoring the utterances completely.", "Such programs will lead to bad generalization of $\pi_\theta$.", "Using a critic at training time eliminates this problem, since its scores depend on $\pi_\theta$.", "Ablations: We performed ablations to examine the benefit of our two technical contributions:", "(a) execution space,", "(b) value-based search.", "Table 2 presents accuracy on the validation set when each component is ablated.", "Figure 5: Training hit accuracy on examples with 5 utterances, comparing VBSIX to baselines with ablated components (Execution Space Only, Value Only, Beam-Search, VBSIX) in the SCENE, ALCHEMY, and TANGRAM domains; x-axis: train step, y-axis: train hit accuracy.", "We find that both contributions are important for performance, as the full system achieves the highest accuracy in all domains.", "In SCENE, each component has only a slight advantage over beam search, and therefore both are required to achieve a significant improvement.", "However, in ALCHEMY and TANGRAM most of the gain is due to the value network.", "We also directly measured the hit accuracy at training time, i.e., the proportion of training examples where the beam produced by the search algorithm contains a program with the correct denotation, showing the effectiveness of search at training time.", "In Figure 5, we report train hit accuracy at each training step, averaged across 6 random seeds.", "The graphs illustrate the performance of each search algorithm in every domain throughout training.", "The validation accuracy results are correlated with the improvement in train hit accuracy.", "Execution Space: We empirically measured two quantities that we expect should reflect the advantage of execution-space search.", "First, we measured the number of programs stored in the execution space graph compared to beam search, which holds $K$ programs.", "Second, we counted the average number of states that are connected to correct terminal states in the discovered graph, but fell out of the beam during the search.", "This property reflects the gain from running search over a graph structure, where the same vertex can resurface.", "We performed the analysis on VBSIX over 5-utterance training examples in all 3 domains.", "The following table summarizes the results:
Property — SCENE, ALCHEMY, TANGRAM
Paths in beam — 143903, 5892, 678
Correct pruned — 18.5, 11.2, 3.8", "We found that the measured properties and the contribution of execution space in each domain are correlated, as seen in the ablations.", "Differences between domains are due to the different complexities of their formal languages.", "As the formal language becomes more expressive, the execution space is more compressed, as each state can be reached in more ways.", "In particular, the formal language in SCENE contains more functions compared to the other domains, and so it benefits the most from execution-space search.", "Value Network: We analyzed the accuracy of the value network at training time by measuring, for each state, the difference between its expected reward (estimated from the discovered paths) and the value network prediction.", "Figure 6 shows the average difference at each training step for all encountered states (in blue), and for high-reward states only (states with expected reward larger than 0.7, in orange).", "These metrics are averaged across 6 runs with different random seeds.", "The accuracy of the value network improves during training, except when the policy changes substantially, in which case the value network needs to re-evaluate the policy.", "When the value network converges, the difference between its predictions and the expected reward is 0.15-0.2 on average.", "However, for high-reward states the difference is higher (0.3).", "This indicates that the value network has a bias toward lower values, which is expected, as most states lead to low rewards.", "Since VBSIX uses the value network as a beam-search ranker, the value network does not need to be exact as long as it assigns higher values to states with higher expected rewards.", "Further analysis is provided in Appendix D.", "Figure 6: The difference between the prediction of the value network and the expected reward (estimated from the discovered paths) during training, for all states and for high-reward states, in the SCENE, ALCHEMY, and TANGRAM domains.", "Training from denotations has been extensively investigated (Kwiatkowski et al., 2013; Pasupat and Liang, 2015; Bisk et al., 2016), with a recent emphasis on neural models (Neelakantan et al., 2016; Krishnamurthy et al., 2017).", "Improving beam search has been investigated by proposing specialized objectives (Wiseman and Rush, 2016), stopping criteria (Yang et al., 2018), and continuous relaxations (Goyal et al., 2018).", "Bahdanau et al. (2017) and Suhr and Artzi (2018) proposed ways to evaluate intermediate predictions from a sparse reward signal.", "Bahdanau et al. (2017) used a critic network to estimate expected BLEU in translation, while Suhr and Artzi (2018) used edit-distance between the current world and the goal for SCONE.",
"But in those works, stronger supervision was assumed: Bahdanau et al. (2017) utilized the gold sequences, and Suhr and Artzi (2018) used intermediate world states.", "Moreover, intermediate evaluations were used to compute gradient updates, rather than for guiding search.", "Guiding search with both policy and value networks was done in Monte-Carlo Tree Search (MCTS) for tasks with a sparse reward (Silver et al., 2017; Anthony et al., 2017; Shen et al., 2018).", "In MCTS, value network evaluations are refined with backup updates to improve policy scores.", "In this work, we gain this advantage by using the target denotation.", "The use of an actor and a critic is also reminiscent of A* search, where states are scored by past cost and an admissible heuristic for future cost (Klein and Manning, 2003; Pauls and Klein, 2009; Lee et al., 2016).", "In semantic parsing, Misra et al. (2018) recently proposed a critic distribution to improve the policy, which is based on prior domain knowledge (that is not learned).", "In this work, we propose a new training algorithm for mapping instructions to programs given denotation supervision only.", "Our algorithm exploits the denotation at training time to train a critic network used to rank search states on the beam, and performs search in a compact execution space rather than in the space of programs.", "We evaluated on three different domains from SCONE, and found that it dramatically improves performance compared to strong baselines across all domains.", "VBSIX is applicable to any task that supports graph-search exploration, specifically tasks that can be formulated as an MDP with a deterministic transition function, which allows efficient execution of multiple partial trajectories.", "Those tasks include a wide range of instruction mapping (Branavan et al., 2009; Vogel and Jurafsky, 2010; Anderson et al., 2018) and semantic parsing tasks (Dahl et al., 1994; Iyyer et al., 2017; Yu et al., 2018).", "Therefore, evaluating VBSIX on other domains is a natural next step for our research.", "We thank the anonymous reviewers for their constructive feedback.", "This work was completed in fulfillment of the M.Sc. degree of the first author.", "This research was partially supported by The Israel Science Foundation grant 942/16, the Blavatnik Computer Science Research Fund, and The Yandex Initiative for Machine Learning." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "objective", "objective", "result", "method", "result", "objective", "other", "other", "objective", "abstain", "abstain", "other", "other", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "objective", "method", "result", "abstain", "abstain", "abstain", "method", "other", "other", "other" ]
[ "Neural Machine Translation (NMT) models achieve state-of-the-art performance on many translation benchmarks.", "As an active research field in NMT, knowledge distillation is widely applied to enhance the model's performance by transferring teacher model's knowledge on each training sample.", "However, previous work rarely discusses the different impacts and connections among these samples, which serve as the medium for transferring teacher knowledge.", "In this paper, we design a novel protocol that can effectively analyze the different impacts of samples by comparing various samples' partitions.", "Based on above protocol, we conduct extensive experiments and find that the teacher's knowledge is not the more, the better.", "Knowledge over specific samples may even hurt the whole performance of knowledge distillation.", "Finally, to address these issues, we propose two simple yet effective strategies, i.e., batch-level and global-level selections, to pick suitable samples for distillation.", "We evaluate our approaches on two large-scale machine translation tasks, WMT'14 English-German and WMT'19 Chinese-English.", "Experimental results show that our approaches yield up to +1.28 and +0.89 BLEU points improvements over the Transformer baseline, respectively.", "1 1 Introduction Machine translation has made great progress recently by using sequence-to-sequence models (Sutskever et al., 2014; Vaswani et al., 2017; Meng and Zhang, 2019; Zhang et al., 2019b; Yan et al., 2020).", "Recently, some knowledge distillation methods (Kim and Rush, 2016; Freitag et al., 2017; Gu Equal contribution. This work was done when Fusheng Wang was interning at Pattern Recognition Center, Wechat AI, Tencent Inc, China. 1 We release our code on https://github.com/Les lieOverfitting/selective distillation . 
et al., 2017; Tan et al., 2019; Wei et al., 2019; Li et al., 2020; Wu et al., 2020) are proposed in the machine translation to help improve model performance by transferring knowledge from a teacher model.", "These methods can be divided into two categories: word-level and sequence-level, by the granularity of teacher information.", "In their researches, the model learns from teacher models by minimizing gaps between their outputs on every training word/sentence (i.e., corresponding training sample) without distinction.", "Despite their promising results, previous studies mainly focus on finding what to teach and rarely investigate how these words/sentences (i.e., sam-ples), which serve as the medium or carrier for transferring teacher knowledge, participate in the knowledge distillation.", "Several questions remain unsolved for these samples: Which part of all samples shows more impact in knowledge distillation?", "Intuitively, we may regard that longer sentences are hard to translate and might carry more teacher knowledge.", "But are there more of these criteria that can identify these more important/suitable samples for distillation?", "Further, what are the connections among these samples?", "Are they all guiding the student model to the same direction?", "By investigating the carrier of teacher knowledge, we can shed light on finding the most effective KD method.", "Hence, in this paper, we aim to investigate the impacts and differences among all samples.", "However, it is non-trivial to analyze each of them.", "Therefore, we propose a novel analytical protocol by partitioning the samples into two halves with a specific criterion (e.g., sentence length or word cross-entropy) and study the gap between performance.", "Extensive empirical experiments are conducted to analyze the most suitable sample for transferring knowledge.", "We find that different samples differ in transferring knowledge for a substantial margin.", "More interestingly, with some partitions, especially the student model's word cross-entropy, the model with half of the knowledge even shows better performance than the model using all distill knowledge.", "The benefit of the distillation of two halves cannot collaborate.", "This phenomenon reveals that the distillation of two halves cannot collaborate, even hurt the whole performance.", "Hence, a more sophisticated selective strategy is necessary for KD methods.", "Next, we propose two simple yet effective methods to address the observed phenomenon according to word cross-entropy (Word CE), which we find is the most distinguishable criterion.", "We first propose a batch-level selection strategy that chooses words with higher Word CE within the current batch's distribution.", "Further, to step forward from local (batch) distribution to global distribution, we use a global-level FIFO queue to approximate the optimal global selection strategy, which caches the Word CE distributions across several steps.", "We evaluate our proposed method on two large-scale machine translation datasets: WMT'14 English-German and WMT'19 Chinese-English.", "Experimental results show that our approach yields an improvement of +1.28 and + 0.89 BLEU points over the Transformer baseline.", "In summary, our contributions are as follows: We propose a novel protocol for analyzing the property for the suitable medium samples for transferring teacher's knowledge.", "We conduct extensive analyses and find that some of the teacher's knowledge will hurt the whole effect of knowledge distillation.", "We 
propose two selective strategies: batch-level selection and global-level selection.", "The experimental results validate that our methods are effective.", "The knowledge distillation approach (Hinton et al., 2015) aims to transfer knowledge from a teacher model to a student model.", "Recently, many knowledge distillation methods (Kim and Rush, 2016; Hu et al., 2018; Sun et al., 2019; Tang et al., 2019; Jiao et al., 2019; Zhang et al., 2019a, 2020; Chen et al., 2020a; Meng et al., 2020) have been used to obtain effective student models in the field of natural language processing by using the teacher model's outputs or hidden states as knowledge.", "As for neural machine translation (NMT), knowledge distillation methods commonly focus on better improving the student model by learning from the teacher model.", "Kim and Rush (2016) first applied knowledge distillation to NMT and proposed sequence-level knowledge distillation, which lets the student model mimic the sequence distribution generated by the teacher model.", "It was explained as a kind of data augmentation and regularization by Gordon and Duh (2019).", "Further, Freitag et al. (2017) improved the quality of distillation information by using an ensemble model as the teacher model.", "Gu et al. (2017) improved non-autoregressive model performance by learning distillation information from an autoregressive model.", "Wu et al. (2020) proposed a layer-wise distillation method suitable for deep neural networks.", "Chen et al. (2020b) let the translation model learn from a language model to help the generation of machine translation.", "To the best of our knowledge, there is no previous work in NMT concerning the selection of suitable samples for distillation.", "The few related ones mainly focus on selecting appropriate teachers for the student model to learn from.", "For instance, Tan et al. (2019) let the student model only learn from an individual teacher model whose performance surpasses it.", "Wei et al.
(2019) proposed an online knowledge distillation method that lets the model selectively learn from history checkpoints.", "Unlike the above approaches, we explore an effective selective distillation strategy from the sample perspective and let each sample determine the learning content and degree.", "Given a source sentence $x = (x_1, ..., x_n)$ and its corresponding ground-truth translation sentence $y = (y_1, ..., y_m)$, an NMT model minimizes the word negative log-likelihood loss at each position by computing cross-entropy.", "For the $j$-th word in the target sentence, the loss can be formulated as: $\mathcal{L}_{ce} = -\sum_{k=1}^{|V|} \mathbb{1}\{y_j = k\} \log p(y_j = k \mid y_{<j}, x; \theta)$, (1) where $|V|$ is the size of the target vocabulary, $\mathbb{1}$ is the indicator function, and $p(\cdot \mid \cdot)$ denotes the conditional probability of a model parameterized by $\theta$.", "In knowledge distillation, the student model $S$ gets an extra supervision signal by matching its own outputs to the probability outputs of the teacher model $T$.", "Specifically, word-level knowledge distillation defines the Kullback-Leibler distance between the output distributions of student and teacher (Hu et al., 2018).", "After removing constants, the objective is formulated as: $\mathcal{L}_{kd} = -\sum_{k=1}^{|V|} q(y_j = k \mid y_{<j}, x; \theta_T) \log p(y_j = k \mid y_{<j}, x; \theta_S)$, (2) where $q(\cdot \mid \cdot)$ is the conditional probability of the teacher model, and $\theta_S$ and $\theta_T$ are the parameter sets of the student model and teacher model, respectively.", "Then, the overall training procedure minimizes the summation of the two objectives: $\mathcal{L} = \mathcal{L}_{ce} + \alpha \mathcal{L}_{kd}$, (3) where $\alpha$ is a weight to balance the two losses.", "4 Are All Words Equally Suitable for KD?", "As discussed before, as the carrier of the teacher's knowledge, ground-truth words might greatly influence the performance of knowledge distillation.", "Therefore, in this section, we first conduct preliminary empirical studies to evaluate the importance of different words/sentences in knowledge distillation.", "The optimal way to analyze the samples' different impacts on distillation would be to do ablation studies over each of them.", "However, this is clearly time-consuming and intractable.", "Hence, we propose an analytical protocol that uses partition and comparison as an approximation, which we believe could shed light on future analyses.", "Particularly, we leverage a specific criterion $f$ to partition samples into two complementary parts: $S_{High} := \{y_i \mid f(y_i) > \mathrm{Median}(f(\mathbf{y})), y_i \in \mathbf{y}\}$ and $S_{Low} := \{y_i \mid f(y_i) \le \mathrm{Median}(f(\mathbf{y})), y_i \in \mathbf{y}\}$, and analyze the different effects between $S_{High}$ and $S_{Low}$.", "Each part consists of precisely 50% of the words/sentences.", "The criteria come from three different perspectives: data property, student model, and teacher model.", "The detailed descriptions are as follows: Data Property.", "As longer sentences and rare words are more challenging to translate (Kocmi and Bojar, 2017; Platanios et al., 2019), their corresponding teacher knowledge may benefit the student model more.", "Hence, we choose sentence length and word frequency as criteria.", "Student Model.", "As for the student model, we care whether the student model thinks these words/sentences are too complicated.", "Therefore, we use Word CE (the cross-entropy of words), Sentence CE (the mean of the cross-entropy of all words in a sentence), and each word's embedding norm (Liu et al., 2020).", "Teacher Model.", "For the teacher model, we hypothesize that the teacher's prediction confidence may be crucial for transferring knowledge.", "Hence, we use the prediction probability of the ground-truth label ($P_{golden}$) and the entropy of the prediction distribution as our criteria.",
"Table 1 presents our results on the different criteria.", "We also add the performance of the Transformer baseline, Distill-All (distillation with all words) and Distill-Half (distillation with 50% of words chosen at random) for comparison.", "Impact of Different Parts.", "Through most of the rows, we observe noticeable gaps between the BLEU scores of $S_{High}$ and $S_{Low}$, indicating a clear difference in impact among the mediums of teacher knowledge.", "Specifically, for most of the criteria, like the cross-entropies or word frequency, the gap between the two halves surpasses 0.35.", "In contrast, the teacher's $P_{golden}$ seems not useful for partitioning KD knowledge.", "We conjecture this is because, no matter whether the teacher is confident in the golden label or not, the other soft labels could contain useful information (Gou et al., 2020).", "Besides, we find that teacher entropy is a good-enough criterion for partitioning KD data, which is in line with previous studies of dark knowledge (Dong et al., 2019).", "Finally, we find that KD is most sensitive (+0.64) to the Word CE criterion, which enjoys adaptivity during the training phase and is a good indicator of whether the student thinks a sample is difficult.", "In conclusion, we regard that the most suitable samples have the following properties: higher Word CE, higher Sentence CE, and higher word frequency, which probably benefits future studies of effective KD methods.", "Impact of All and Halves.", "More interestingly, compared with 'Distill-All', which is the combination of $S_{High}$ and $S_{Low}$, the $S_{High}$ half's BLEU score even surpasses 'Distill-All' for the Word CE, Sentence CE and word frequency criteria.", "This leads to two conclusions: (1) Within some partitions, $S_{High}$ contributes most of the KD improvements.", "(2) The amount of teacher knowledge is not the more, the better.", "The distillation knowledge of $S_{Low}$ does not directly combine with that of $S_{High}$, and even hurts $S_{High}$'s performance.", "Impact of the Amount of Knowledge.", "Given that distillation knowledge is most sensitive to Word CE, we conduct extra analysis on Word CE.", "Figure 1: BLEU score (%) on the WMT'14 En-De translation task, varying the word rate (10%-50%) for $S_{Low}$ and $S_{High}$, with the Baseline and Distill-All shown for reference.", "Figure 1 presents the results of varying the amount of knowledge for $S_{High}$ and $S_{Low}$.", "The consistent phenomenon is that $S_{High}$ performs significantly better than $S_{Low}$ when using the same amount of teacher knowledge.", "These results suggest that we should focus more on $S_{High}$ than on $S_{Low}$.", "Besides, we notice that the model performance increases when we increase the knowledge in $S_{High}$, but this is not the case for $S_{Low}$.", "We conclude that Word CE is distinguishable and a better indicator of the teacher's useful knowledge, but only for $S_{High}$.",
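To make the partition protocol concrete, here is a minimal PyTorch-style sketch of the median split on Word CE; this is our own illustrative code under the stated tensor shapes, not the released implementation.

```python
import torch
import torch.nn.functional as F

def split_by_word_ce(logits, targets, pad_id):
    """Split target words into S_High / S_Low by the median of their
    cross-entropy under the student model.
    logits: (N, |V|) student logits; targets: (N,) gold token ids."""
    ce = F.cross_entropy(logits, targets, reduction="none")  # per-word CE
    mask = targets.ne(pad_id)
    median = ce[mask].median()
    s_high = mask & (ce > median)     # harder half of the words
    s_low = mask & (ce <= median)     # easier half
    return s_high, s_low
```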
"As mentioned above, there exist un-suitable medi-ums/samples that hurt the performance of knowledge distillation.", "In this section, we address this problem by using two simple yet effective strategy of selecting useful samples.", "In Section 4, we find that Word CE is the most distinguishable criterion.", "Hence, we continue to use the Word CE as the measure in our methods.", "As the word cross-entropy is a direct measure of how the student model agrees with the golden label, we refer to words with relatively large cross-entropy as difficult words, and words with relatively small cross-entropy as easy words, in the following parts.", "This is to keep the notation different from previous analysis.", "Then, we only need to define what is relatively large.", "Here, we introduce two CE-based selective strategies: Batch-level Selection (BLS).", "Given a minibatch B of sentence pairs with M target words, we sort all words in the current batch with their Word CE in descending order and select the top r percent of all words to distill teacher knowledge.", "More formally, let A denote the Word CE set, which contains the Word CE of each word in batch B. We define S Hard = top r %( A ) as the set of the r % largest cross-entropy words among the batch, and S Easy is its complementary part.", "For those words in S Hard , we let them get extra supervision signal from teacher model's distillation information.", "Therefore, the knowledge distillation objective in Equation 3 can be be re-formulated as: L kd = (cid:40) (cid:80) | V | k =1 q ( y k ) log p ( y k ) , y S Hard 0 , y S Easy where we simplify the notation of p and q for clarity.", "Global-level Selection (GLS).", "Limited by the number of words in a mini-batch, batch-level selection only reflects the current batch's CE distribution and can not represent the real global CE distribution of the model very well.", "In addition, the batch-level method makes our relative difficulty measure easily affected by each local batch's composition.", "The optimal approach to get the global CE distribution is to traverse all training set words and calculate their CE to get the real-time distribution after each model update.", "However, this brings a formidable computational cost and is not realistic in training.", "Therefore, as a proxy to optimal way, we extend batch-level selection to global-level selection by dexterously using a First-In-First-Out (FIFO) global queue Q .", "At each training step, we push batch words' CE into FIFO global queue Q and pop out the Oldest' words' CE in the queue to retain the queue's size.", "Then, we sort all CE values in the queue and calculate the ranking position Algorithm 1 Global-level Selection Input: B: mini-batch, Q : FIFO global queue, T : teacher model, S : student model 1: for each word i in B do 2: Compute L ce of word i by Equation 1 3: Compute L kd of word i by Equation 2 4: Push L ce to Q 5: if L ce in top r %( Q ) then 6: Loss i L ce + L kd 7: else 8: Loss i L ce 9: Loss Loss + Loss i 10: Update S with respect to Loss of each word.", "The storage of queue is much bigger than a mini-batch so that we can evaluate the current batch's CEs with more words, which reduces the fluctuation of CE distribution caused by the batch-level one.", "Algorithm 1 details the entire procedure.", "We carry out experiments on two large-scale machine translation tasks: WMT'14 English-German (En-De) and WMT'19 Chinese-English (Zh-En).", "Datasets.", "For WMT'14 En-De task, we use 4.5M preprocessed data, which is tokenized and split 
"We carry out experiments on two large-scale machine translation tasks: WMT'14 English-German (En-De) and WMT'19 Chinese-English (Zh-En).", "Datasets.", "For the WMT'14 En-De task, we use 4.5M preprocessed sentence pairs, tokenized and split using byte pair encoding (BPE) (Sennrich et al., 2016) with 32K merge operations and a shared vocabulary for English and German.", "We use newstest2013 as the validation set and newstest2014 as the test set, which contain 3000 and 3003 sentences, respectively.", "For the WMT'19 Zh-En task, we use 20.4M preprocessed sentence pairs, tokenized and split using 47K/32K BPE merge operations for the source and target languages.", "We use newstest2018 as our validation set and newstest2019 as our test set, which contain 3981 and 2000 sentences, respectively.", "Evaluation.", "We train all models for a maximum of 300K steps for both WMT'14 En-De and WMT'19 Zh-En.", "We choose the model that performs best on the validation set and report its performance on the test set.", "We measure case-sensitive BLEU calculated by multi-bleu.perl (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl) and mteval-v13a.pl (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/mteval-v13a.pl), with significance tests (Koehn, 2004), for WMT'14 En-De and WMT'19 Zh-En, respectively.", "Model and Hyper-parameters.", "Following the setting in Vaswani et al. (2017), we carry out our experiments on the standard Transformer (Vaswani et al., 2017) with the fairseq toolkit (Ott et al., 2019).", "By default, we use Transformer (Base), which contains six stacked encoder layers and six stacked decoder layers, as both the teacher model and the student model.", "To verify that our approaches can be applied to stronger teacher and student models, we further use deep Transformers with twelve encoder layers and six decoder layers.", "During training, we use the Adam optimizer with $\beta_1 = 0.9$, $\beta_2 = 0.98$, a learning rate of 7e-4, and dropout of 0.1.", "All experiments are conducted on 4 NVIDIA P40 GPUs, where the batch size of each GPU is set to 4096 tokens.", "We accumulate the gradients of the parameters and update every two steps.", "The average runtime is 3 GPU days for all experiments.", "There are two hyper-parameters in our experiments, i.e., the distil rate r% and the global queue size Q_size.", "For the distil rate r%, the search space is [10%, 30%, 50%, 70%, 90%].", "The search result for r% is shown in Figure 2; we find that performance is sensitive to the value of r%.", "When the ratio is smaller than 50%, the BLEU score increases consistently with the ratio, and the best performance peaks at 50%.", "[Figure 2: BLEU score (%) with different r% on the validation set of WMT'14 En-De.]", "We directly apply this distil rate r% to the WMT'19 Zh-En task without extra searching.", "Besides, we set Q_size = 30K for WMT'14 En-De.", "For the larger WMT'19 Zh-En dataset, we enlarge Q_size from 30K to 50K and keep the word rate unchanged.", "The hyper-parameter search for Q_size can be found in Section 6.4.",
"[Table 2: BLEU scores (%) on the WMT'14 English-German (En-De) task. Existing NMT systems: Vaswani et al. (2017) 27.30 (ref); Vaswani et al. (2017) (Big) 28.40 (+1.10); Chen et al. (2020b) 27.53 (+0.23); Zheng et al. (2019) 28.10 (+0.80); So et al. (2019) 28.40 (+1.10); Tay et al. (2020) 28.47 (+1.17). Our implemented methods: Transformer 27.29 (ref); Word-KD 28.14 (+0.85); Seq-KD 28.15 (+0.86); Batch-level Selection 28.42* (+1.13); Global-level Selection 28.57* (+1.28).]", "Compared Methods.", "We compare our method with several existing NMT systems (KD and others): Word-KD (Kim and Rush, 2016).", "Word-KD is a standard method that distills knowledge equally for each word.", "The detailed description is in Section 3.2.", "Seq-KD (Kim and Rush, 2016).", "Sequence-KD uses teacher-generated outputs on the training corpus as an extra source.", "The training loss can be formulated as: $\mathcal{L}_{seq\text{-}kd} = -\sum_{j=1}^{J} \sum_{k=1}^{|V|} \mathbb{1}\{\hat{y}_j = k\} \log p(\hat{y}_j = k \mid \hat{y}_{<j}, x; \theta)$ (4), where $\hat{y}$ denotes the sequence predicted by the teacher model from running beam search and J is the length of the target sentence.", "Bert-KD (Chen et al., 2020b).", "This method leverages pre-trained BERT as the teacher model to help the NMT model improve translation quality.", "Other Systems.", "We also include some existing methods based on Transformer (Base) for comparison, i.e., Zheng et al. (2019); So et al. (2019); Tay et al. (2020).", "Results on WMT'14 English-German.", "The results on WMT'14 En-De are shown in Table 2.", "In this experiment, both the teacher model and the student model are Transformer (Base).", "We also list our implementations of the word-level and sequence-level distillation (Kim and Rush, 2016) methods.", "Firstly, compared with the Transformer (Base), our re-implemented word-level and sequence-level distillation show similar improvements, with BLEU scores up from 27.29 to 28.14 and 28.15, respectively.", "Secondly, compared with these already strong baseline methods, our batch-level selective approach further extends the improvement to 28.42, proving the selective strategy's effectiveness.", "Thirdly, our global-level distillation achieves a 28.57 BLEU score and outperforms all previous methods, showing that the better estimation of the words' CE distribution with the FIFO global queue helps selection.", "It is worth noting that our strategy also significantly improves translation quality over all other methods, including Word-KD.", "Finally, our methods show comparable or better performance than other existing NMT systems and even surpass the Transformer (Big), with far fewer parameters.",
"For words in target sequence, the prediction logits l R d model | V | is given by: l = h TW (5) p = Softmax ( l ) (6) where h R d model is the layer output of transformer decoder, W R d model | V | is projection matrix.", "Then, the gradient respect to l from golden cross-entropy loss can be denotes as l L ce .", "The gradient from distillation loss can be denotes as l L kd .", "Next, we calculate the probability that l L ce and l L kd share the same direction.", "Figure 3 presents the results with the probability that gradients agree with each other during training.", "We observe that S Easy (green line) is consistently lower than distillation with all words (blue line) and S Hard (red line), which means S Easy has more inconsistency with ground-truth.", "Combining with the BLEU performances, we argue this consistency leads to the risk of introducing noise and disturbs the direction of parameter updating.", "Besides, the agreement of Distill-All (blue line in Fig) lies in the middle of two halves.", "It proves that S Easy and S Hard compromise with each other on some conflicts.", "It also proves that there exist some conflicts between the knowledge in S Easy and S Hard .", "the student model's point of view.", "However, in previous literature, they commonly consider knowledge from the teacher's perspective.", "Hence, in this section, we study the correlation between these two perspectives.", "Because previous studies commonly regard teacher's soft-labels contain dark knowledge (Dong et al., 2019), we take the entropy of teacher's prediction as a proxy.", "Concretely, we randomly select 100K tokens in the training set and calculate the entropy of distribution predicted by the teacher model for both S Hard and S Easy .", "As shown in Figure 4, we notice that the S Easy 's entropy distribution is more concentrated in range (0, 4) and peaks around 1.2.", "In contrast, the S Hard 's entropy distribution is more spread out.", "The overall distribution shifts to higher entropy, which indicates S Hard tends to provide a smoother supervision signal.", "Consequently, we conclude that even though our selective strategy comes from the student's perspective, it also favors samples with abundant dark knowledge in teacher's perspective.", "To some extent, this explains why the S Hard ' knowledge benefits distillation performance more.", "Results on WMT'19 Chinese-English.", "We also conduct experiments on the larger WMT'19 Zh-en dataset (20.4M sentence pairs) to ensure our methods can provide consistent improvements across different language pairs.", "As shown in Table 3, our method still significantly outperforms the Transformer (Base) with +0.89.", "Compared with the Word-KD, our approach consistently improves with +0.41 BLEU points.", "Besides, we also find that Seq-KD with our methods extends the improvement of BLEU score from 27.27 to 27.61.", "This indicates that our selective strategy is partially orthogonal to the improvement Models En-De Deep Transformer (12 + 6) 27.94 ref Word-KD 28.90 +0.96 Ours 29.12* +1.18 Table 4: BLEU scores (%) on WMT'14 English-German (En-De) task.", "of Seq-KD and maintains generalizability.", "In summary, these results suggest that our methods can achieve consistent improvement on different sized datasets across different language pairs.", "Results with Larger Model Size.", "Here, we investigate how our method is well-generalized to larger models.", "We use a deep transformer model with twelve encoder layers and six decoder layers for our larger model experiments.", 
"As shown in Table 4, Deep Transformer (12 + 6) and Word-KD have already achieved strong performance with up to 28.90 BLEU points, and our method still outperforms these baselines (29.12 BLEU).", "It proves our methods' generalizability to larger models.", "This section analyzes how Q size affects our model's performance.", "As mentioned before, Q size denotes the size of the global FIFO queue, which affects simulating the word cross-entropy distribution of the current model.", "Figure 5 shows the search results of Q size .", "We can find that smaller and larger queue size both hurts the BLEU scores.", "Besides, 30K and 50K of queue size are the best for WMT'14 En-De and WMT'19 Zh-En, respectively.", "This also accords with our intuition that smaller Q size degrades the global-level queue to batch level, and larger Q size slows down the update of CE distribution.", "Figure 6 plots the partition Word CE of S Hard and S Easy for batch-level and global-level selection.", "We can see that, as the training progresses, batch-level selection starts to suffer from the high variance because of each batch's randomness.", "Selections with FIFO queue drastically reduce the variance and make a reasonable estimation of global CE distribution.", "These findings prove the effectiveness of our proposed FIFO queue.", "In this work, we conduct an extensive study to analyze the impact of different words/sentences as the carrier in knowledge distillation.", "Analytic results show that distillation benefits have a substantial margin, and these benefits may not collaborate with their complementary parts and even hurt the performance.", "To address this problem, we propose two simple yet effective strategies, namely the batch-level selection and global-level selection.", "We would like to thank the anonymous reviewers for their valuable comments and suggestions to improve this paper." ]
[ "abstain", "abstain", "abstain", "objective", "result", "abstain", "objective", "method", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "result", "objective", "result", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "objective", "other" ]
[ "It has been shown that named entity recognition (NER) could benefit from incorporating the long-distance structured information captured by dependency trees.", "We believe this is because both types of features the contextual information captured by the linear sequences and the structured information captured by the dependency trees may complement each other.", "However, existing approaches largely focused on stacking the LSTM and graph neural networks such as graph convolutional networks (GCNs) for building improved NER models, where the exact interaction mechanism between the two different types of features is not very clear, and the performance gain does not appear to be significant.", "In this work, we propose a simple and robust solution to incorporate both types of features with our Synergized-LSTM (Syn-LSTM), which clearly captures how the two types of features interact.", "We conduct extensive experiments on several standard datasets across four languages.", "The results demonstrate that the proposed model achieves better performance than previous approaches while requiring fewer parameters.", "Our further analysis demonstrates that our model can capture longer dependencies compared with strong baselines.", "1 1 Introduction Named entity recognition (NER) is one of the most fundamental and important tasks in natural language processing (NLP).", "While the literature (Peters et al., 2018; Akbik et al., 2018; Devlin et al., 2019) largely focuses on training deep language models to improve the contextualized word representations, previous studies show that Lu Xu is under the Joint PhD Program between Alibaba and Singapore University of Technology and Design.", "The work was done when Zhanming Jie was a PhD student in Singapore University of Technology and Design.", "1 We make our code publicly available at https:// github.com/xuuuluuu/SynLSTM-for-NER .", "the structured information such as interactions between non-adjacent words can also be important for NER (Finkel et al., 2005; Jie et al., 2017; Aguilar and Solorio, 2019).", "However, sequence models such as bidirectional LSTM (Hochreiter and Schmidhuber, 1997) are not able to fully capture the long-range dependencies (Bengio, 2009).", "For instance, Figure 1 (top) shows one type of structured information in NER.", "The words Precision Castparts Corp. can be easily inferred as ORGANIZATION by its context (i.e., Corp. ).", "However, the second entity PCP could be misclassified as a PRODUCT entity if a model relies more on the context begin trading with but ignores the hidden information that PCP is the symbol of Precision Castparts Corp. 
.", "Previous research works (Li et al., 2017; Jie and Lu, 2019; Wang et al., 2019) have been using the parse trees (Chomsky, 1956, 1969; Sandra and Taft, 2014) to incorporate such structured information.", "Figure 1 (Dependency Path) shows that the first entity can be connected to the second entity following the dependency tree with 5 hops.", "Incorporating the dependency information can be done with graph neural networks (GNNs) such as graph convolutional networks (GCNs) (Kipf and Welling, 2017).", "However, simply stacking the LSTM and GCN architectures for NER can only provide us with modest improvements; sometimes, it decreases performance (Jie and Lu, 2019).", "Based on the dependency path in Figure 1, it requires a 5-layer GCN to capture the connections between these two entities.", "However, deep GCN architectures often face training difficulties, which cause a performance drop (Hamilton et al., 2017b; Kipf and Welling, 2017).", "Directly stacking GCN and LSTM has diffi-culties in modeling the interaction between dependency trees and contextual information.", "To address the above limitations, we propose the Synergized-LSTM (Syn-LSTM), a new recurrent neural network architecture that considers an additional graph-encoded representation to update the memory and hidden states, as shown in Figure 2.", "More specifically, the graph-encoded representation for each word can be obtained with GCNs.", "Our proposed Syn-LSTM allows the cell to receive the structured information from the graph-encoded representation.", "With the newly designed gating mechanism, our model is able to make independent assessments on the amounts of information to be retrieved from the word representation and the graph-encoded representation respectively.", "Such a mechanism allows for better integration of both contextual and structured information.", "Our contributions can be summarized as: We propose a simple and robust Syn-LSTM model to better incorporate the structured information conveyed by dependency trees.", "The output of the Syn-LSTM cell is jointly determined by both contextual and structured information.", "We adopt the classic conditional random fields (CRF) (Lafferty et al., 2001) on top of the Syn-LSTM for NER.", "We conduct extensive experiments on several standard datasets across four languages.", "The proposed model significantly outperforms previous approaches on these datasets.", "We show that the proposed model can capture long-distance interactions between entities.", "Our further analysis statistically demonstrates the proposed gating mechanism is able to aggregate the structured information selectively.", "To incorporate the long-range dependencies, we consider an additional graph-encoded representation g t (Figure 2) as the model input to integrate", "contextual and structured information.", "The graph-encoded representation g t can be derived from Graph Neural Networks (GNNs) such as GCN (Kipf and Welling, 2017), which are capable of bringing in structured information through graph structure (Hamilton et al., 2017a).", "However, structured information sometimes is hard to encode, as we can see from the example in Figure 1.", "One naive approach is to use a deep GNN to capture such information along multiple dependency arcs between two words, which could mess up information and lead to training difficul-ties.", "A straightforward solution is to integrate both structured and contextual information via LSTM.", "As shown in Figure 1 (Hybrid Paths), the structured information can be passed to 
"The input to the LSTM can simply be the concatenation of the word representation x_t and g_t at each position (Jie and Lu, 2019).", "However, because such an approach requires both x_t and g_t to jointly decide the value of the input gate, it could be a potential victim of two sources of uncertainty: 1) the uncertainty of the quality of the graph-encoded representation g_t, and 2) the uncertainty of the exact interaction mechanism between the two types of features.", "These may lead to sub-optimal performance, especially if the graph-encoded representation g_t is unsatisfactory.", "Thus, we need to design a new approach that incorporates both types of information from x_t and g_t with a more explicit interaction mechanism, with which we hope to alleviate the above issues.", "We propose the Synergized-LSTM (Syn-LSTM) to better integrate the contextual and structured information and address the above limitations.", "The inputs of the Syn-LSTM cell include the previous cell state c_{t-1}, the previous hidden state h_{t-1}, the current cell input x_t, and an additional graph-encoded representation g_t.", "The outputs of the Syn-LSTM cell include the current cell state c_t and the current hidden state h_t.", "Within the cell, there are four gates: input gate i_t, forget gate f_t, output gate o_t, and an additional new gate m_t to control the flow of information.", "Note that the forget gate f_t and output gate o_t do not just look at h_{t-1} and x_t; they are also affected by the graph-encoded representation g_t.", "The cell state c_t and hidden state h_t are computed as follows: $f_t = \sigma(W^{(f)} x_t + U^{(f)} h_{t-1} + Q^{(f)} g_t + b^{(f)})$ (1); $o_t = \sigma(W^{(o)} x_t + U^{(o)} h_{t-1} + Q^{(o)} g_t + b^{(o)})$ (2); $i_t = \sigma(W^{(i)} x_t + U^{(i)} h_{t-1} + b^{(i)})$ (3); $m_t = \sigma(W^{(m)} g_t + U^{(m)} h_{t-1} + b^{(m)})$ (4); $\tilde{c}_t = \tanh(W^{(u)} x_t + U^{(u)} h_{t-1} + b^{(u)})$ (5); $s_t = \tanh(W^{(n)} g_t + U^{(n)} h_{t-1} + b^{(n)})$ (6); $c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t + m_t \odot s_t$ (7); $h_t = o_t \odot \tanh(c_t)$ (8), where $\sigma$ is the sigmoid function and $W^{(\cdot)}$, $U^{(\cdot)}$, $Q^{(\cdot)}$ and $b^{(\cdot)}$ are learnable parameters.", "The additional new gate m_t is used to directly control the information from the graph-encoded representation.", "Such a design allows the original input gate i_t and our new gate m_t to make independent assessments of the amounts of information to be retrieved from the word representation x_t and the graph-encoded representation g_t, respectively.", "On the other hand, we also have a separate candidate state s_t to represent the cell state that corresponds to the graph-encoded representation.", "With the proposed Syn-LSTM, the structured information captured by the dependency trees can be passed to each cell, and the additional gate m_t is able to control how much structured information is incorporated.", "The additional gate enables the model to feed the contextual and structured information into the LSTM cell separately.", "Such a mechanism allows our model to selectively aggregate the information from the linear sequence and the dependency trees (see the sketch below).",
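A minimal PyTorch sketch of the Syn-LSTM cell in Equations 1-8; the dimensions and the fused-weight layout are our own assumptions, just one convenient way to organize the $W^{(\cdot)}$, $U^{(\cdot)}$, $Q^{(\cdot)}$ matrices, not the authors' released code.

```python
import torch
import torch.nn as nn

class SynLSTMCell(nn.Module):
    """Syn-LSTM cell: a standard LSTM cell plus a graph input g_t with
    its own gate m_t and candidate state s_t (Eqs. 1-8)."""

    def __init__(self, input_dim, graph_dim, hidden_dim):
        super().__init__()
        self.W = nn.Linear(input_dim, 4 * hidden_dim)     # W^(f,o,i,u) on x_t
        self.Q = nn.Linear(graph_dim, 4 * hidden_dim)     # Q^(f,o), W^(m,n) on g_t
        self.U = nn.Linear(hidden_dim, 6 * hidden_dim, bias=False)  # U on h_{t-1}
        self.hidden_dim = hidden_dim

    def forward(self, x_t, g_t, state):
        h_prev, c_prev = state
        d = self.hidden_dim
        wx, qg, uh = self.W(x_t), self.Q(g_t), self.U(h_prev)
        f = torch.sigmoid(wx[..., :d] + qg[..., :d] + uh[..., :d])           # Eq. 1
        o = torch.sigmoid(wx[..., d:2*d] + qg[..., d:2*d] + uh[..., d:2*d])  # Eq. 2
        i = torch.sigmoid(wx[..., 2*d:3*d] + uh[..., 2*d:3*d])               # Eq. 3
        m = torch.sigmoid(qg[..., 2*d:3*d] + uh[..., 3*d:4*d])               # Eq. 4
        c_tilde = torch.tanh(wx[..., 3*d:] + uh[..., 4*d:5*d])               # Eq. 5
        s = torch.tanh(qg[..., 3*d:] + uh[..., 5*d:])                        # Eq. 6
        c = f * c_prev + i * c_tilde + m * s                                  # Eq. 7
        h = o * torch.tanh(c)                                                 # Eq. 8
        return h, c
```

Note how i_t gates only the contextual candidate and m_t gates only the graph candidate, which is exactly the independent-assessment property discussed above.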
"Similar to previous work (Levy et al., 2018), it is also possible to show that the cell state c_t implicitly computes an element-wise weighted sum of the previous states by expanding Equation 7: $c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t + m_t \odot s_t$ (9) $= \sum_{j=0}^{t} \big( i_j \odot \prod_{k=j+1}^{t} f_k \big) \odot \tilde{c}_j + \sum_{j=0}^{t} \big( m_j \odot \prod_{k=j+1}^{t} f_k \big) \odot s_j$ (10) $= \sum_{j=0}^{t} a_j^t \odot \tilde{c}_j + \sum_{j=0}^{t} q_j^t \odot s_j$ (11).", "Note that the two terms, $a_j^t$ and $q_j^t$, are products of gates.", "The values of the two terms are in the range from 0 to 1.", "Since $\tilde{c}_t$ and $s_t$ represent contextual and structured features, the corresponding weights control the flow of information.", "The goal of named entity recognition is to predict the label sequence y = {y_1, y_2, ..., y_n} given the input sequence w = {w_1, w_2, ..., w_n}, where w_t represents the t-th word and n is the number of words.", "Our model is mainly constructed with three layers: an input representation layer, a bi-directional Syn-LSTM layer, and a CRF layer.", "The architecture of our Syn-LSTM-CRF is shown in Figure 3.", "Input Representation Layer.", "Similar to the work by Lample et al. (2016), our input representation also includes character embeddings, which are the hidden states of a character-based BiLSTM.", "Jie and Lu (2019) highlight that the dependency relation helps to enhance the input representation.", "Furthermore, previous methods (Wang et al., 2018; Wang and Lu, 2018) use embeddings of part-of-speech (POS) tags as additional input representation.", "The input representation x_t of our model is the concatenation of the word embedding v_t, the character representation e_t, the dependency relation embedding r_t, and the POS embedding p_t: $x_t = [v_t; e_t; r_t; p_t]$ (12), where both the r_t and p_t embeddings are randomly initialized and fine-tuned during training.", "For experiments with contextualized representations (e.g., BERT (Devlin et al., 2019)), we further concatenate the contextual word representation to x_t.", "For our task, we employ the graph convolutional network (Kipf and Welling, 2017; Zhang et al., 2018b) to get the graph-encoded representation g_t.", "Given a graph, an adjacency matrix A of size n x n is able to represent the graph structure, where n is the number of nodes; A_{i,j} = 1 indicates that node i and node j are connected.", "We transform the dependency tree into its corresponding adjacency matrix A (see footnote 3), where A_{i,j} = 1 denotes that node i and node j have a dependency relation.", "Note that the purpose of the graph-encoded representation g_t is to incorporate the dependency information from neighbor nodes.", "The input and output representations of the l-th GCN layer at the t-th position are denoted as $g_t^{l-1}$ and $g_t^l$, respectively.", "Similar to the work by Zhang et al. (2018b), we use $d_t = \sum_{j=1}^{n} A_{t,j}$, the total number of neighbors of node t, to normalize the representation before it goes through the nonlinear function.",
"The GCN operation is defined as: $g_t^l = \mathrm{ReLU}\big( \sum_{j=1}^{n} A_{t,j} W^l g_j^{l-1} / d_t + b^l \big)$ (13), where $W^l$ is a linear transformation and $b^l$ is a bias.", "The initial $g_t^0$ is the concatenation of the word embedding v_t, the character embedding e_t, and the dependency relation embedding r_t: $g_t^0 = [v_t; e_t; r_t]$.", "Bi-directional Syn-LSTM Layer.", "With the word representation x_t and the graph-encoded representation g_t, a bi-directional Syn-LSTM is applied to generate contextual representations.", "The forward and backward Syn-LSTMs enable the model to integrate the contextual and structured information from both directions.", "3 We treat the dependency edge as undirected and add a self-loop for each node: $A_{i,j} = A_{j,i}$ and $A_{i,i} = 1$.", "We concatenate the hidden state from the forward Syn-LSTM and the hidden state from the backward Syn-LSTM at each position.", "CRF Layer.", "The CRF (Lafferty et al., 2001) is widely used in NER tasks as it is capable of capturing the structured correlations between adjacent output labels.", "Given the sentence w and dependency tree $\tau$, the probability of the label sequence y is defined as: $P(y \mid w, \tau) = \exp(\mathrm{score}(w, \tau, y)) / \sum_{y'} \exp(\mathrm{score}(w, \tau, y'))$ (14).", "The score function is defined as: $\mathrm{score}(w, \tau, y) = \sum_{t=0}^{n} T_{y_t, y_{t+1}} + \sum_{t=1}^{n} E_{y_t}$ (15), where $T_{y_t, y_{t+1}}$ denotes the transition score from label y_t to y_{t+1}, $E_{y_t}$ denotes the score of label y_t at the t-th position, and the scores are computed using the hidden state h_t.", "We learn the model parameters by minimizing the negative log-likelihood and employ the Viterbi algorithm to obtain the best label sequence during evaluation (a sketch of the GCN step is given below).",
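A short sketch of the degree-normalized GCN layer in Equation 13, assuming a dense adjacency matrix with self-loops already added (footnote 3); batching and masking are omitted for clarity.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One GCN layer, Eq. 13: g_t^l = ReLU(sum_j A[t, j] W g_j^{l-1} / d_t + b)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # W^l
        self.b = nn.Parameter(torch.zeros(out_dim))      # b^l

    def forward(self, g, adj):
        # g: (n, in_dim) node features; adj: (n, n) with A[i, i] = 1
        degree = adj.sum(dim=-1, keepdim=True)           # d_t, number of neighbors
        h = adj @ self.W(g) / degree + self.b            # normalized neighbor sum
        return torch.relu(h)

# g0 = torch.cat([v, e, r], dim=-1)        # initial g_t^0 = [v_t; e_t; r_t]
# g = GCNLayer(g0.size(-1), 200)(g0, adj)  # the paper stacks 2 such layers
```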
"Datasets.", "The proposed model is evaluated on four benchmark datasets: the SemEval 2010 Task 1 (Recasens et al., 2010) Catalan and Spanish datasets, and the OntoNotes 5.0 (Weischedel et al., 2013) English and Chinese datasets.", "We choose these four datasets as they have explicit dependency annotations, which allow us to evaluate the effectiveness of our approach when dependency trees of different qualities are used.", "For the SemEval 2010 Task 1 datasets, there are 4 entity types: PER, LOC, ORG, and MISC.", "For the OntoNotes 5.0 datasets, there are 18 entity types in total.", "Following the work by Jie and Lu (2019), we transform the parse trees into Stanford dependency trees (De Marneffe and Manning, 2008) using Stanford CoreNLP (Manning et al., 2014).", "Detailed statistics of each dataset can be found in Table 1.", "Intuitively, longer sentences would require the model to capture more long-distance interactions in the sentences.", "We present the number of entities for different sentence lengths to show that these datasets have a modest number of entities in long sentences.", "Experimental Setup.", "For Catalan, Spanish, and Chinese, we use the FastText (Grave et al., 2018) 300-dimensional embeddings to initialize the word embeddings.", "For OntoNotes 5.0 English, we adopt the publicly available GloVe (Pennington et al., 2014) 100-dimensional embeddings to initialize the word embeddings.", "For experiments with contextualized representations, we adopt the pre-trained language model BERT (Devlin et al., 2019) for the four datasets.", "Specifically, we use bert-as-service (Xiao, 2018) to generate the contextualized word representations without fine-tuning.", "Following Luo et al. (2020), we use the cased version of the BERT large model for the experiments on the OntoNotes 5.0 English data.", "We use the cased version of the BERT base model for the experiments on the other three datasets.", "For the character embeddings, we randomly initialize them with dimension 30, and set the hidden size of the character-level BiLSTM to 50.", "The hidden size of the GCN and Syn-LSTM is set to 200, and the number of GCN layers is 2.", "We adopt stochastic gradient descent (SGD) to optimize our model with batch size 100, L2 regularization $10^{-8}$, and initial learning rate lr = 0.2; the learning rate is decayed with respect to the number of epochs.", "4 We set the decay to 0.1, and the learning rate for each epoch equals $lr / (1 + decay \times (epoch - 1))$.", "We select the best model based on performance on the dev set and apply it to the test set.", "We use the bootstrapping t-test to compare the results.", "Baselines.", "We compare our model with several baselines with or without dependency tree information.", "The first one is BERT-CRF, where we apply a CRF layer on top of BERT (Devlin et al., 2019).", "Secondly, we compare with the BERT implementation by HuggingFace (Wolf et al., 2019).", "For models with dependency trees, we take the BiLSTM-GCN-CRF and dependency-guided LSTM-CRF (DGLSTM-CRF) models proposed by Jie and Lu (2019), and our implemented GCN-BiLSTM-CRF.", "The BiLSTM-GCN-CRF model simply stacks the GCN on top of the BiLSTM to incorporate the dependency trees.", "The GCN-BiLSTM-CRF model takes the concatenation of the graph-encoded representation from the GCN and the word embedding as input to the BiLSTM.", "The DGLSTM-CRF takes the concatenation of the head word representation and the word embedding as input to the BiLSTM.", "Note that the original implementation of DGLSTM-CRF uses ELMo (Peters et al., 2018), but we also implement it with BERT.", "Besides, we compare our model with previous works that have results on these datasets.", "SemEval 2010 Task 1.", "Table 2 shows comparisons of our model with baseline models on the SemEval 2010 Task 1 Catalan and Spanish datasets.", "Our Syn-LSTM-CRF model outperforms all existing models with F1 of 82.76 and 85.09 ($p < 10^{-5}$ compared to DGLSTM-CRF) on the Catalan and Spanish datasets when FastText word embeddings are used.", "Our model outperforms the BiLSTM-CRF model by 13.25 and 11.22 F1 points, and outperforms the BiLSTM-GCN-CRF (Jie and Lu, 2019) model by 4.64 and 3.16 on Catalan and Spanish.", "The large performance gap between BiLSTM-GCN-CRF and our model indicates that Syn-LSTM-CRF shows better compatibility with GCN, and this confirms that simply stacking the GCN on top of the BiLSTM does not perform well.", "Our method outperforms the GCN-BiLSTM-CRF model by 5.33 and 3.24 F1 points on Catalan and Spanish.", "This shows that our proposed model achieves a better integration of contextual information and structured information.", "Furthermore, our proposed method brings 1.12 and 1.62 F1 points of improvement on the Catalan and Spanish datasets compared to DGLSTM-CRF (Jie and Lu, 2019).", "The DGLSTM-CRF employs a 2-layer dependency-guided BiLSTM to capture grandchild dependencies, which leads to longer training time and more model parameters.", "However, our Syn-LSTM-CRF is able to obtain better performance with fewer model parameters and shorter training time because of the fewer LSTM layers.", "Such results demonstrate that our proposed Syn-LSTM-CRF manages to capture structured information effectively.",
"Furthermore, with the contextualized word representation, the Syn-LSTM-CRF+BERT achieves a much larger performance improvement than any other method.", "Our model outperforms the strong baseline model DGLSTM-CRF+ELMo by 4.83 and 2.54 in terms of F1 ($p < 10^{-5}$) on Catalan and Spanish, respectively.", "OntoNotes 5.0 English.", "To understand the generalizability of our model, we evaluate the proposed Syn-LSTM-CRF model on the large-scale OntoNotes 5.0 datasets.", "Table 3 shows comparisons of our model with baseline models on English.", "Our Syn-LSTM-CRF model outperforms all existing methods with 89.04 in terms of F1 score ($p < 0.01$ compared to DGLSTM-CRF) when GloVe word embeddings are used.", "Our model outperforms the BiLSTM-CRF model by 1.97 in F1 and the BiLSTM-GCN-CRF (Jie and Lu, 2019) model by 0.86.", "Note that our implemented GCN-BiLSTM-CRF outperforms the previous DGLSTM-CRF (Jie and Lu, 2019) by 0.14 in F1.", "Our Syn-LSTM-CRF further extends the improvement to 0.52.", "Moreover, with the contextualized word representation BERT, our method achieves an F1 score of 90.85 ($p < 10^{-5}$ compared to DGLSTM-CRF+ELMo).", "Our method outperforms the previous model (Luo et al., 2020), which relies on document-level information, by 0.55 in F1.", "Furthermore, the performance improvement on recall is more prominent compared to precision.", "This shows that the proposed Syn-LSTM-CRF is able to extract more entities.", "[Table 3: Experimental results [%] on the OntoNotes 5.0 English test set (P./R./F1). Chiu and Nichols (2016a) 86.04/86.53/86.28; Li et al. (2017) 88.00/86.50/87.21; Strubell et al. (2017) -/-/86.84; Ghaddar and Langlais (2018) -/-/87.95; BiLSTM-CRF 87.21/86.93/87.07; BiLSTM-GCN-CRF 88.30/88.06/88.18; GCN-BiLSTM-CRF 88.56/88.76/88.66; DGLSTM-CRF (2019) 88.53/88.50/88.52; Luo et al. (2020) -/-/87.98; Syn-LSTM-CRF (Ours) 88.96/89.13/89.04. With contextualized word representations: Akbik et al. (2018) -/-/89.30; BERT-CRF 88.42/88.33/88.37; Wolf et al. (2019) 88.39/90.29/89.33; BiLSTM-CRF+ELMo 89.14/88.59/88.87; BiLSTM-CRF+BERT 89.32/90.02/89.67; BiLSTM-GCN-CRF+ELMo 89.40/89.71/89.55; GCN-BiLSTM-CRF+BERT 89.34/91.26/90.29; DGLSTM-CRF (2019)+ELMo 89.59/90.17/89.88; DGLSTM-CRF+BERT 89.63/89.87/89.75; Luo et al. (2020)+BERT -/-/90.30; Syn-LSTM-CRF+BERT (Ours) 90.14/91.58/90.85.]", "OntoNotes 5.0 Chinese.", "We present the experimental results on the OntoNotes 5.0 Chinese test set in Table 4.", "Our model still consistently outperforms the baseline models, specifically by 2.04 in F1 compared to BiLSTM-CRF, by 2.39 compared to BiLSTM-GCN-CRF, by 1.86 compared to GCN-BiLSTM-CRF, and by 1.11 ($p < 10^{-5}$) compared to DGLSTM-CRF when FastText is used.", "Note that the baseline BiLSTM-GCN-CRF model is 0.35 points worse than BiLSTM-CRF.", "Such results further confirm the effectiveness of our proposed Syn-LSTM-CRF for incorporating structured information.", "We find similar behavior when the contextualized word representation BERT is used.", "With the contextualized word representation, we achieve a higher F1 score of 80.20.", "Robustness Analysis.", "To study the robustness of our model and check whether our model can regulate the flow of information from the graph-encoded representation, we analyze the influence of the quality of the dependency trees.", "We train and evaluate an additional dependency parser (Dozat and Manning, 2017).",
"[Table 4: Experimental results [%] on the OntoNotes 5.0 Chinese test set (P./R./F1). Pradhan et al. (2013) 78.20/66.45/71.85; Lattice LSTM (2018) 76.34/77.01/76.67; BiLSTM-CRF 78.45/74.59/76.47; BiLSTM-GCN-CRF 76.35/75.89/76.12; GCN-BiLSTM-CRF 78.30/75.07/76.65; DGLSTM-CRF (2019) 77.40/77.41/77.40; Syn-LSTM-CRF (Ours) 77.95/79.07/78.51. With contextualized word representations: BERT-CRF 79.83/79.68/79.75; Wolf et al. (2019) 77.35/81.74/79.49; BiLSTM-CRF+ELMo 79.20/79.21/79.20; BiLSTM-CRF+BERT 78.45/81.24/79.82; BiLSTM-GCN-CRF+ELMo 78.71/79.29/79.00; GCN-BiLSTM-CRF+BERT 79.03/80.98/80.00; DGLSTM-CRF (2019)+ELMo 78.86/81.00/79.92; DGLSTM-CRF+BERT 77.79/81.65/79.67; Syn-LSTM-CRF+BERT (Ours) 78.66/81.80/80.20.]", "Specifically, we train the dependency parser (footnote 6) on the given training datasets and select the best model based on the dev sets.", "Then we apply the best model to the test sets to obtain dependency trees.", "We also train and evaluate our model with random dependency trees.", "Table 8 presents the comparisons between Syn-LSTM-CRF+BERT and DGLSTM-CRF+ELMo with given, predicted, and random dependency trees.", "We observe that both models encounter a performance drop when we use the predicted parse trees and the random trees.", "Our performance differences with respect to the given parse trees are relatively smaller than the corresponding differences for DGLSTM-CRF+ELMo.", "Such an observation demonstrates the robustness of our proposed model against structured information from trees of different quality.", "It is worthwhile to note that, with the predicted dependencies, our proposed Syn-LSTM-CRF+BERT is still able to outperform the strong baseline DGLSTM-CRF+ELMo, even when the latter uses the given parse trees, on the Catalan, English, and Chinese datasets.", "To further study the robustness, we conduct an analysis to investigate whether the gate m_t (Figure 2) has the ability to regulate the flow of information from the graph-encoded representation.", "Intuitively, the gate m_t should tend to have a small value when the structured information is unreliable, e.g., derived from random trees.", "6 The performance of the dependency parser can be found in the Appendix.", "We statistically plot the number of words with respect to different gate value ranges (m_t).", "Figure 4 shows the comparison between models using random trees and given trees on Catalan and Spanish (footnote 7).", "We observe that the gate m_t is more likely to open (i.e., its value is higher) when we use the given parse trees compared with random parse trees.", "Such behavior demonstrates that our proposed model can selectively aggregate the information from the graph-encoded representation (a small sketch of this analysis follows below).",
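A small sketch of the gate-value analysis above, assuming access to per-token m_t activations (e.g., averaged over hidden dimensions); the binning choices are illustrative.

```python
import torch

def gate_histogram(gate_values, num_bins=10):
    """Count words per gate-value range (m_t in [0, 1]), as in the
    robustness comparison of given vs. random dependency trees."""
    flat = torch.cat([g.flatten() for g in gate_values])  # all m_t values
    return torch.histc(flat, bins=num_bins, min=0.0, max=1.0)

# counts_given = gate_histogram(gates_from_given_trees)
# counts_random = gate_histogram(gates_from_random_trees)
# More mass in the upper bins for given trees indicates the gate "opens"
# more when the structured information is reliable.
```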
"Effect of Sentence Length.", "We compare the performance of our Syn-LSTM-CRF+BERT with the BiLSTM-CRF+BERT and DGLSTM-CRF+ELMo models with respect to sentence length, and the results are shown in Figure 5.", "We observe that the Syn-LSTM-CRF+BERT model consistently outperforms the two baseline models on the four languages (footnote 8).", "In particular, although performance tends to drop as sentence length increases, our proposed model shows relatively better performance when the sentence length is 60 or above.", "This confirms that the proposed Syn-LSTM-CRF+BERT is able to effectively incorporate structured information.", "Note that our 2-layer GCN is computed based on the dependency tree.", "7 We found similar behavior for the OntoNotes 5.0 English and Chinese datasets, and the detailed results can be found in the Appendix.", "8 See the Appendix for the results on the OntoNotes 5.0 English and Chinese datasets.",
Figure 6 shows the performance on the dev set of the four languages.", "The last bar, indicated as AVG, is obtained by averaging the dev results on the four datasets.", "We observe that the overall performance is better when the number of GCN layers equals 2.", "Note that similar behavior can also be found in the work by Kipf and Welling (2017) for document classification and node classification.", "Therefore, we evaluate our proposed Syn-LSTM-CRF model with 2-layer GCN.", "Ablation Study To understand the contribution of each component, we conduct an ablation study on the OntoNotes 5.0 English dataset, and Table 7 presents the detailed results of our model with contextualized representation.", "We find that the performance drops by 0.24 F 1 score when we only use 1-layer GCN.", "Without GCN at all, the score drops by 1.13 F 1 .", "The original dependency contributes 0.27 F 1 score.", "Removing the dependency relation embedding also decreases the performance by 0.27 F 1 .", "When we remove the POS tags embedding, the result drops by 0.39 F 1 .", "LSTM LSTM has demonstrated its great effectiveness in many NLP tasks and becomes a standard module for many state-of-the-art models (Wen et al., 2015; Ma and Hovy, 2016; Dozat and Manning, 2017).", "However, the sequential nature of the LSTM makes it challenging to capture long-range dependencies.", "Zhang et al. (2018a) propose the S-LSTM model to include a sentence state to allow both local and global information exchange simultaneously.", "Mogrifier LSTM (Melis et al., 2020) mutually gates the current input and the previous output to enhance the interaction between the input and the context.", "These two works do not consider structured information for the LSTM design.", "Since natural language is usually structured, Shen et al. (2018) propose ON-LSTM to add a hierarchical bias to allow the neurons to be updated by following certain order.", "While the ON-LSTM is learning the latent constituency parse trees, we focus on incorporating the explicit structured information conveyed by the dependency parse trees.", "NER Early work (Sasano and Kurohashi, 2008) uses syntactic dependency features to improve the SVM performance on Japanese NER task.", "Liu et al. (2010) propose to construct skip-edges to link similar words or words having typed dependencies to capture long-range dependencies.", "The later works (Collobert et al., 2010; Lample et al., 2016; Chiu and Nichols, 2016b) focus on using neural networks to extract features and achieved the state-of-the-art performance.", "Jie et al. (2017) find that some relations between the dependency edges and the entities can be used to reduce the search space of their model, which significantly reduces the time complexity.", "Yu et al. 
(2020) employ pre-trained language model to encode document-level information to explore all spans with the graph-based dependency graph based ideas.", "The pre-trained language models (e.g., BERT (Devlin et al., 2019), ELMO (Peters et al., 2018)) further improve neural-based approaches with a good contextualized representation.", "However, previous works did not focus on investigating how to effectively integrate structured and contextual information well.", "In this paper, we propose a simple and robust Syn-LSTM model to better integrate the structured information leveraged from the long-range dependencies.", "Specifically, we introduce an additional graph-encoded representation to each recurrent unit.", "Such a graph-encoded representation can be obtained via GNNs.", "Through the newly designed gating mechanism, the hidden states are enhanced by contextual information captured by the linear sequence and structured information captured by the dependency trees.", "We present the Syn-LSTM-CRF for NER and adopt the GCN on dependency trees to obtain the graph-encoded representations.", "Our extensive experiments and analysis on the datasets with four languages demonstrate that the proposed Syn-LSTM is able to effectively incorporate both contextual and structured information.", "The robustness analysis demonstrates that our model is capable of selectively aggregating the information from the graph-encoded representation.", "We would like to thank the anonymous reviewers for their helpful comments.", "This research is partially supported by Ministry of Education, Singapore, under its Academic Research Fund (AcRF) Tier 2 Programme (MOE AcRF Tier 2 Award No: MOE2017-T2-1-156).", "Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of the Ministry of Education, Singapore." ]
[ "abstain", "method", "abstain", "objective", "method", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "abstain", "objective", "objective", "method", "method", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "method", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "abstain", "abstain", "result", "objective", "objective", "other", "other", "other" ]
[ "Abstract In order to better understand the reason behind model behaviors (i.e., making predic-tions), most recent work has exploited generative models to provide complementary explanations.", "However, existing approaches in natural language processing (NLP) mainly focus on WHY A rather than contrastive WHY A NOT B, which is shown to be able to better distinguish confusing candidates and improve model performance in other research fields.", "In this paper, we focus on generating C ontrastive E xplanations with counterfactual examples in NLI and propose a novel K nowledgeA ware generation framework ( KACE ).", "Specifically, we first identify rationales (i.e., key phrases) from input sentences, and use them as key perturbations for generating counterfactual examples.", "After obtaining qualified counterfactual examples, we take them along with original examples and external knowledge as input, and employ a knowledge-aware generative pre-trained language model to generate contrastive explanations.", "Experimental results show that contrastive explanations are bene-ficial to clarify the difference between predicted answer and other answer options.", "Moreover, we train an BERT-large based NLI model enhanced with contrastive explanations and achieve an accuracy of 91.9% on SNLI, gaining an improvement of 5.7% against ETPA (Explain-Then-Predict-Attention) and 0.6% against NILE (WHY A).", "In recent years, pre-trained language models (De-vlin et al., 2019; Liu et al., 2019; Yang et al., 2019) have been widely adopted in many tasks of natural language processing (Talmor et al., 2019; Choi et al., 2018; Bowman et al., 2015).", "However, due to Work is done during internship at Alibaba Group.", "the lack of textual explanations, most downstream models become more complicated and difficult to understand.", "End users, especially those working in critical domains such as healthcare or online ed-ucation, become more skeptical and reluctant to adopt or trust them, although these models have been proved to improve the decision-making performance.", "Therefore, providing faithful textual explanations has become a promising way to overcome the black-box property of neural networks, which has attracted the attention of academia and industrial communities.", "Recently, the majority of existing methods (Xu et al., 2020; Cheng et al., 2020; Karimi et al., 2020; Ramamurthy et al., 2020; Atanasova et al., 2020; Kumar and Talukdar, 2020) in natural language processing try to explain the predictions of neural models in a model-intrinsic or model-agnostic (also known as post-hoc) way.", "While post-hoc models (Chen et al., 2020b; Karimi et al., 2020; Kumar and Talukdar, 2020) provide explanations after making predictions without affecting the overall accuracy, most of them neglect the rationales in inputs and provide textual explanations just in the form of WHY A.", "However, we argue that contrastive explanations in the form of WHY A NOT B could provide more informative and important clues that are easier to understand and persuade end-users.", "Moreover, we believe that contrastive explanations could benefit downstream tasks (e.g., NLI), since such kind of explanations contain more helpful information (e.g. 
relations between rationales) that can be used to improve model performance.", "To further enhance the explainability and performance of NLI, we propose a novel textual contrastive explanation generation framework in this paper, which is post-hoc and considers rationales, counterfactual examples, and external knowledge.", "Specifically, we first identify rationales (i.e., key phrases) from a premise-hypothesis (P-H) pair with Rationale Identification. [Figure 1: the pipeline consists of Rationale Identification, Counterfactual Example Generation, Counterfactual Example Selection, and Contrastive Explanation Generation; example premise: A woman and a young child are making sculptures out of clay.]", "Then we further select the most qualified counterfactual example for each other label B. Note that the acquisition of a qualified counterfactual example of class B is essential to generate a meaningful explanation for WHY NOT B; otherwise, the resultant contrastive explanation will be groundless or useless.", "After that, we take the selected examples along with the original P-H pair and related external knowledge as input, and finally employ a knowledge-aware pre-trained language model to generate a contrastive explanation, which specifies why the predicted label is A rather than B and resolves the confusion of end users.", "Moreover, we train an NLI model enhanced with contrastive explanations and achieve new state-of-the-art performance on SNLI.", "The contributions of this paper are as follows: We introduce a novel knowledge-aware contrastive explanation generation framework (KACE) for natural language inference tasks.", "We consider the rationales in inputs and regard them as important perturbations for generating counterfactual examples, rather than simply discarding them as in previous post-hoc work (Hendricks et al., 2018; Cheng et al., 2020).", "We integrate external knowledge with a generative pre-trained language model rather than only taking original inputs (Kumar and Talukdar, 2020; Rajani et al., 2019) for contrastive explanation generation.", "Experimental results show that knowledge-aware contrastive explanations are able to clarify the difference between the predicted class and the others, which helps to resolve the confusion of end users and improves model performance over WHY A explanations 1 .", "Here, we define the task of contrastive explanation generation for NLI.", "Given a trained neural network model f with input x and predicted class A, the problem of generating contrastive explanations (CE) for an input x is to specify why x belongs to category/class A rather than B, defined as: r = Rationales(x, A) (1), x' = Reversal(x, B, r) (2), CE = Generator(x', x, A) (3). In Equation 1, we first identify a set of rationales in the given inputs, as described in Section 3.1, and in Equation 2 we generate counterfactual examples with the reversal mechanism, as presented in Section 3.2.", "In Equation 3, we take the selected counterfactual example along with the original example and external knowledge as input, and employ a knowledge-aware generator to produce the contrastive explanation, as detailed in Section 3.3.", "1 Our code will be released as soon as possible at https://github.com/AI4NLP/KACE", "Considering that rationales are important features of an instance, it is essential to regard rationales as key perturbations for counterfactual example generation.", "In this paper, we formulate rationale identification as a token-level sequence labelling task where 1 indicates a rationale token and 0 indicates a background token.", 
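To make the task definition above concrete, the following minimal sketch wires Equations 1-3 together; the three helper functions are hypothetical placeholders for the components described in Sections 3.1-3.3, not the authors' released implementation.

```python
# Minimal sketch of the contrastive explanation pipeline in Equations 1-3.
# rationales_fn, reversal_fn and generator_fn are hypothetical stand-ins
# for the rationale identifier, reversal mechanism and knowledge-aware
# generator described in the text; they are not the authors' code.

def generate_contrastive_explanation(x, label_a, label_b,
                                     rationales_fn, reversal_fn, generator_fn):
    """Explain why input x is predicted as label_a rather than label_b."""
    r = rationales_fn(x, label_a)          # Eq. 1: identify rationale phrases
    x_cf = reversal_fn(x, label_b, r)      # Eq. 2: counterfactual of class B
    return generator_fn(x_cf, x, label_a)  # Eq. 3: knowledge-aware generation
```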
"Being similar with (Thorne et al., 2019), we first construct the input sequence for a premise p and a hypothesis h as S p = (cid:104) s (cid:105) Label (cid:104) s (cid:105) P remise (cid:104) s (cid:105) and S h = (cid:104) s (cid:105) Hypothesis (cid:104) s (cid:105) , where (cid:104) s (cid:105) is a special token that separates the components.", "Let y represent the relation between S p and S h where y { entailment, contradiction, neutral } .", "For each instance, we need to identify a subset r of zero or more tokens as rationales from both premise and hypothesis sentences.", "Both premise and hypothesis are encoded with RoBERTa (Liu et al., 2019), yielding hidden representation H p = [ , h pj , ] and H h = [ , h hi , ] respectively.", "As rationalizer is proposed by (Zhao and Vy-diswaran, 2021), we follow this work for rationale identification using cross attention to embed the hypothesis (premise) into premise (hypothesis), which is defined as: a ij = exp (( h hi ) TT anh ( WT 1 h pj )) (cid:80) L p m =0 exp (( h hi ) TT anh ( WT 1 h pm )) (4) h hi = [ h hi , P ooling ( H p ) , (cid:88) k a ij h pj ] (5) where a ij denotes the attention score of j th token in premise to the i th token in the hypothesis, L p denotes the length of the premise sentence and W 1 is a trainable parameter matrix.", "The representation of i th token in the hypothesis, denoted as h hi , is created by concatenating its original state representation, max-pooling representation over h p , and the corresponding sum of attention representation from h p .", "At last, we use a softmax layer with a linear transformation to model the probability of the i th token in S h being a rationale token.", "As we have introduced above, counterfactual examples of other classes are of key importance to generate contrastive explanations.", "In this part, we describe how to generate counterfactual examples.", "Given a trained neural network model f , the problem of generating counterfactual example for an instance x is to find a set of examples c 1 , c 2 , ..., c k that lead to a desired prediction y (cid:48) .", "The counterfactual examples are explainable and contrastive when they appropriately consider proximity, diversity and validity.", "Here, we define a three-part loss function to select qualified counterfactual example: L = L valid + 1 L dist + 2 L div (6) H tokens P tokens Cross Attention H p H h Rationales Token original input P-H pairs Pre-trained LM (RoBERTa / BERT) Reversal Mechanism Loss Evaluation Counterfactual Example Candidate label y and the other label y' Knowledge Extraction from ConcepetNet WHY A Generator (GPT-2) Contrastive Explanation (WHY A NOT B) WHY NOT B Generator (GPT-2) Rationale Identification Counterfactual Example Generation Knowledge-aware Contrastive Explanation Generator", "where 1 and 2 are hyperparameters for balancing L dist and L div .", "For generating counterfactual example, the validity term, which ensures the generated counterfactual examples have desired prediction target, is defined as: L valid = k (cid:88) i =1 loss ( f ( c i ) , y (cid:48) ) (7) Meanwhile, the generated examples should be proximal to the original instance as described in (Cheng et al., 2020), which means only a small change needs to be made.", "We do not expect a big change that transforms a large portion of the original, in which way there will be no difference with merely presenting an example of counter classes and the corresponding explanation will be uninformative or useless.", "That is, we expect that 
resultant examples are able to preserve the main content of the input while changing domain-related parts.", "where t indicates a rationale.", "For the diversity term, we calculate the pairwise distance of a set of counterfactual examples and minimize it.", "After defining the loss function, we use a reversal mechanism to produce counterfactual examples.", "In the reversal mechanism, we use hypernyms and hyponyms of tokens in WordNet 2 for perturbation.", "For example, as shown in Figure 2, the original premise and hypothesis are a woman and a young child are making sculptures out of clay and a man and a woman painting on canvas, and the label is contradiction.", "We find from WordNet the hypernyms of making sculptures out of clay and painting on canvas to be doing art and making something, respectively.", "We replace them with their hypernyms to obtain counterfactual examples, use the model f trained on the original P-H training dataset to predict the resultant examples (Equation 7), and keep those that belong to neutral or entailment.", "After the validity justification, we perform further selection by following Equation 8 and Equation 10, and choose the samples with the smallest loss for neutral and entailment for later contrastive explanation generation.", "After obtaining qualified counterfactual examples, some work (Cheng et al., 2020; Wachter et al., 2017; Verma et al., 2020) provides them directly as counterfactual explanations.", "However, since counterfactual examples do not provide explanations explicitly, they can be difficult for users to understand.", "Hence, in this part, we focus on generating contrastive explanations via a knowledge-aware generative language model, which explains WHY A NOT B rather than merely WHY A.", "While traditional approaches generate explanations with SHAP 3 or LIME 4, recent work has explored pre-trained generative language models (Radford et al., 2019; Lewis et al., 2020; Raffel et al., 2020).", "In this paper, we use a knowledge-aware pre-trained language model to generate contrastive explanations.", "Knowledge Extraction Given the selected counterfactual examples and identified rationales, we extract relevant knowledge to enhance the generative language model.", "We acquire structured knowledge and rationale definitions from ConceptNet 5 and a dictionary source 6, respectively.", "For ConceptNet, we extract knowledge with the Breadth-First-Search (BFS) algorithm as described in (Ji et al., 2020).", "For the dictionary, we extract the definitions of rationales by following (Chen et al., 2020a).", "After extraction, we concatenate this knowledge for training the knowledge-aware explanation generator.", "Knowledge-Aware Explanation Generator For contrastive explanation generation, we divide the WHY A NOT B problem into two simpler questions: 1) why the label of the input belongs to A, and 2) why the label of the input does not belong to B. 
In a previous study, (Kumar and Talukdar, 2020) proposed a label-specific explanation generator, which fine-tuned GPT-2 independently for each label.", "However, that generator can only produce explanations for WHY A.", "For the other part of the contrastive explanation, we collect contrastive explanations annotated by humans and use them to fine-tune a WHY NOT B generator.", "Taking a premise-hypothesis pair x along with the qualified counterfactual example x' and the extracted knowledge KE as input, in the form <s> Label <s> x <s> x' <s> KE <s>, our fine-tuned language model generates explanations that support the corresponding label in a WHY A NOT B way. (Footnotes: 3 https://github.com/slundberg/shap, 4 https://github.com/marcotcr/lime, 5 https://github.com/commonsense/conceptnet5/, 6 https://dictionary.cambridge.org/)", "With these explanations, end users can observe and understand the difference between the original input and the counterfactual example explicitly.", "SNLI & e-SNLI The SNLI dataset (Bowman et al., 2015) is a balanced collection of annotated P-H pairs with labels from {entailment, neutral, contradiction}, which consists of about 550K, 10K and 10K examples for the train, development, and test sets, respectively 7 .", "(Camburu et al., 2018) extend the SNLI dataset to e-SNLI 8 with natural language explanations of the ground-truth labels.", "Annotators were asked to highlight words in the premise and hypothesis pairs which could explain the labels and to write a natural language explanation using the highlighted words.", "In this paper, we use the highlighted words for rationale identification and use the natural language explanations to fine-tune the language-model-based WHY A generator.", "IMDB The IMDB dataset (Maas et al., 2011) is a movie review dataset for sentiment classification.", "It contains 25,000 training examples and 25,000 test examples with movie reviews labeled as positive or negative.", "In this paper, we use IMDB as an out-of-domain dataset to evaluate whether counterfactual examples can improve the robustness of our model.", "We aim to generate contrastive explanations which can distinguish the predicted label from the others at the semantic level; hence, the BLEU (Papineni et al., 2002) score is not a proper way to measure the quality of explanations.", "That is, quality can be better confirmed by manual evaluation.", "In this work, we use manual evaluation and case studies to assess the quality of contrastive explanations.", "Meanwhile, we use accuracy to measure the effectiveness of the generated contrastive explanations in improving model performance via data augmentation (organized in the form <s> CE <s> Premise <s> Hypothesis <s>).", "RoBERTa & BERT For sequence labelling during rationale identification, we use RoBERTa-large and BERT-large, which have 24 layers, 16 attention heads and a hidden size of 1024 (355M parameters for RoBERTa-large, 340M parameters for BERT-large).", "For downstream classification tasks, a classification layer is added over the hidden state of the first [CLS] token at the last layer.", "GPT-2 For natural language explanation generation, we use the GPT-2 architecture (Radford et al., 2019).", "In particular, we use the GPT2-medium model, which has 24 layers, 16 attention heads and a hidden size of 1024 (345M parameters).", "We fine-tuned the GPT-2 model with label-specific examples that 
are integrated with contrastive examples and external knowledge from ConceptNet.", "The first baseline is a sequential inference model that considers recursive architectures in both local inference modeling and inference composition, and incorporates syntactic parsing information.", "(Zhang et al., 2020) incorporate explicit contextual semantics from pre-trained semantic role labeling and introduce an improved language representation model, Semantics-aware BERT (SemBERT), which is capable of explicitly absorbing contextual semantics with a BERT backbone.", "CA-MTL (Pilault et al., 2021) is a novel transformer-based architecture that consists of a new conditional attention mechanism as well as a set of task-conditioned modules that facilitate weight sharing, and achieves new state-of-the-art performance on SNLI.", "ETPA (Camburu et al., 2018) propose Explain-Then-Predict-Attention (ETPA), which generates an explanation and then predicts the label from only the generated explanation.", "NILE:post-hoc (Kumar and Talukdar, 2020) propose natural language inference over label-specific explanations (NILE).", "A premise and hypothesis pair is input to a label-specific candidate explanation generator that generates natural language explanations supporting the corresponding label.", "The generated explanations are then fed into an explanation processor, which predicts labels using the evidence presented in these explanations.", "LIREx-base (Zhao and Vydiswaran, 2021) propose LIREx-base, which incorporates both a rationale-enabled explanation generator and an instance selector to select only relevant, plausible natural language explanations (NLEs) to augment NLI models, and evaluate on the standardized SNLI.", "For rationale identification, we use RoBERTa-base to extract hidden representations and set the learning rate to 2e-5, dropout to 0.02, batch size to 8 and the number of epochs to 10.", "Meanwhile, we use AdamW (Loshchilov and Hutter, 2018) as the optimizer and adopt cross-entropy loss as the loss function.", "In the counterfactual example generation part, we build a hypernym and hyponym table, and use hypernyms and hyponyms of tokens in WordNet for perturbation.", "In the contrastive explanation generation part, we use GPT-2 as the generative language model for training the WHY A generator and the WHY NOT B generator.", "For the generator, we set the learning rate to 5e-5, the Adam epsilon to 1e-8, and the generation length to 100.", "Explanation Generation for SNLI In Table 1, we present the inputs of our model and the results of our approach, which include token-level explanations (rationales), the counterfactual example and the generated contrastive explanation, compared with the manually annotated explanation and the WHY A explanations generated by NILE:post-hoc and LIREx-base.", "Compared with WHY A explanations, which are simple and lack essential information, the contrastive explanation contains more information, such as making sculptures out of clay is a type of art and making sculptures is different from painting on canvas.", "As shown in Table 1, we provide not only the contrastive explanation but also the identified rationales and the reversed counterfactual example for reference.", "To quantitatively assess contrastive explanations, we compared our method with LIREx-base and NILE:post-hoc in terms of explanation quality through human evaluation on 100 SNLI test samples.", "The explanation quality refers to whether an explanation provides enough essential information for a predicted label.", "As shown in Table 2, contrastive explanations produced by our method have better 
quality, scoring over 2.0% and 9.0% higher than LIREx-base and NILE:post-hoc, respectively.", "Explanation Enhanced NLI In Table 3, we report the experimental results of our method and other baselines, including BERT, SemBERT (Zhang et al., 2020), CA-MTL (Pilault et al., 2021), NILE:post-hoc (Kumar and Talukdar, 2020) and LIREx-base (Zhao and Vydiswaran, 2021), on SNLI.", "With contrastive explanations, we are able to improve the performance of both BERT-large and RoBERTa-large.", "Compared with NILE:post-hoc (Kumar and Talukdar, 2020), the same-scale BERT-large model with contrastive explanations brings a gain of 0.4% on the test set, which indicates that the knowledge-aware contrastive generator is better than the generator of NILE.", "Compared with LIREx-base, which uses RoBERTa-large (Zhao and Vydiswaran, 2021), the BERT-large and RoBERTa-large models with contrastive explanations bring gains of 0.3% and 1.0%, respectively, which suggests contrastive explanations are better than rationale-enabled explanations.", "In general, contrastive explanations achieve new state-of-the-art performance and move it closer to human annotation (a gain of 1.1% on BERT-large).", "We believe that contrastive explanations contain more helpful information (e.g., relations between rationales, differences between original and counterfactual examples) that can be used to improve model performance.", "Ablation Study We perform ablation studies with BERT-large on the SNLI dataset to evaluate the impact of the different components employed in our method, and report the results in Table 4.", "We isolated rationales, counterfactual examples and external knowledge separately.", "The model without rationales means we generate contrastive explanations with counterfactual examples generated by randomly replacing tokens, plus the extracted external knowledge.", "The model without counterfactual examples means we extract knowledge with the given rationales and generate contrastive explanations with them.", "The model without external knowledge means we generate contrastive explanations only with rationales and counterfactual examples.", "The model without contrastive explanations is simply the BERT-large baseline on SNLI.", "We can observe that each component is helpful.", "In particular, removing external knowledge or contrastive explanations leads to a clear decrease of 0.6% and 0.8%, respectively.", "This indicates that external knowledge and contrastive explanation generation are the most essential components, while rationales and counterfactual examples affect the performance less.", "On one hand, the ablation results show that external knowledge and rationales affect explanation generation more than counterfactual examples do.", "On the other hand, the results suggest that each component contributes positively, and indicate the importance of knowledge-aware contrastive explanations, as we highlighted in the title.", "[Table 4: The accuracy (%) of ablation studies on SNLI.]", "We sample examples of IMDB for out-of-domain evaluation.", "As shown in Table 5, we train BERT-base on two different training sets: the original training set TRAIN-O, and the union of the original training examples and the generated counterfactual examples TRAIN-O∪C, and evaluate it on two separate dev sets: the original dev set DEV-O and the generated counterfactual example dev set DEV-C.", "Experimental results show that the BERT-base model enhanced with counterfactual examples achieves 88.5% and 95.1%, bringing a gain of 11.0% on DEV-C with a slight decrease of 1.7% on DEV-O.", "This indicates that 
counterfactual examples can help improve the robustness of the model under more diversified data distributions.", "With the IMDB evaluation, we demonstrate that counterfactual examples can not only help to generate contrastive explanations, but also contribute to data augmentation.", "In the experiments on SNLI, we evaluated the effectiveness of counterfactual examples in contrastive explanation generation.", "In the IMDB experiments, we further verify the effectiveness of counterfactual examples for data augmentation with only rationale identification and the heuristic reversal mechanism.", "A counterfactual example is a minimal change in the data that flips the model's prediction, and is used for explanation.", "(Wachter et al., 2017) first propose the concept of unconditional counterfactual explanations and a framework to generate counterfactual explanations.", "(Hendricks et al., 2018) first consider the evidence that is discriminative for one class but not present in another class, and learn a model to generate counterfactual explanations for why a model predicts class A instead of B. In this paper, we focus on counterfactual example generation that provides contrastive examples for natural language inference.", "For post-hoc explainable NLP systems, we can divide explanations into three types: feature-based, example-based and concept-based.", "For feature-based explanation, (Ribeiro et al., 2016) propose LIME, and (Guidotti et al., 2018) extend LIME by fitting a decision tree classifier to approximate the non-linear model.", "However, there is no guarantee that they are faithful to the original model.", "For example-based explanation, (Kim et al., 2016) select both prototypes and criticisms from the original data points.", "(Wachter et al., 2017) propose counterfactual explanations that provide alternative perturbations.", "For concept-based explanation, (Ghorbani et al., 2019) explain model decisions through concepts that are more understandable to humans than individual features or characters.", "In this paper, we integrate counterfactual examples and concepts for contrastive explanation generation.", "For natural language inference, (Bowman et al., 2015) propose SNLI, which contains samples of premise and hypothesis pairs with human annotations.", "In order to provide interpretable and robust explanations for model decisions, (Camburu et al., 2018) extend the SNLI dataset with natural language explanations of the ground-truth labels, named e-SNLI.", "For explanation generation in NLI, (Kumar and Talukdar, 2020) propose NILE, which utilizes label-specific generators to produce labels along with explanations.", "However, (Zhao and Vydiswaran, 2021) find that NILE does not take into account the variability inherent in human explanations, and propose LIREx, which incorporates a rationale-enabled explanation generator.", "In this paper, we consider generating contrastive explanations in NLI.", "In this paper, we focus on knowledge-aware contrastive explanation generation for NLI.", "We generate counterfactual examples by changing the identified rationales of given instances.", "Afterwards, we extract concept knowledge from ConceptNet and a dictionary to train knowledge-aware explanation generators.", "We show that contrastive explanations that specify why a model makes prediction A rather than B can provide more faithful information than WHY A explanations.", "Moreover, contrastive explanations can be used for data augmentation to improve the performance and robustness of existing models.", "The exploration of 
contrastive explanations in other NLP tasks (e.g., question answering) and of better evaluation metrics for explanations will be pursued in future work.", "We thank the anonymous reviewers for their helpful comments on this paper.", "This work is supported by the National Key R&D Program of China (No. 2018AAA0101900), the NSFC projects (No. 62072399, No. U19B2042, No. 61402403), the Chinese Knowledge Center for Engineering Sciences and Technology, the MoE Engineering Research Center of Digital Library, the Alibaba Research Intern Program of Alibaba Group, the Alibaba-Zhejiang University Joint Institute of Frontier Technologies, and the Fundamental Research Funds for the Central Universities." ]
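As a supplement to the counterfactual selection procedure described above, here is a hedged sketch of the three-part loss of Equation 6 with the validity term of Equation 7; since the exact forms of L_dist and L_div are not reproduced in the text, the embedding-space L2 terms below (and the sign of the diversity term) are illustrative assumptions, not the paper's definitions.

```python
import torch
import torch.nn.functional as F

def counterfactual_selection_loss(model, candidates, desired_label,
                                  x_emb, cand_embs, lam1=1.0, lam2=1.0):
    # Validity (Eq. 7): each candidate c_i should be predicted as y'
    logits = model(candidates)  # assumed shape (k, num_labels)
    target = torch.full((logits.size(0),), desired_label, dtype=torch.long)
    l_valid = F.cross_entropy(logits, target)

    # Proximity: candidates should stay close to the original instance
    l_dist = (cand_embs - x_emb.unsqueeze(0)).norm(dim=-1).mean()

    # Diversity: encourage candidates to differ from one another
    # (negated mean pairwise distance; the paper's exact term is omitted)
    l_div = -torch.cdist(cand_embs, cand_embs).mean()

    return l_valid + lam1 * l_dist + lam2 * l_div  # Eq. 6
```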
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "result", "objective", "objective", "method", "method", "objective", "objective", "method", "method", "objective", "method", "other", "method", "other", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "other", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "abstain", "method", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "method", "method", "method", "method", "result", "abstain", "abstain", "other", "other" ]
[ "This paper explores the problem of ranking short social media posts with respect to user queries using neural networks.", "Instead of starting with a complex architecture, we proceed from the bottom up and examine the effectiveness of a simple, word-level Siamese architecture augmented with attention-based mechanisms for capturing semantic soft matches between query and post tokens.", "Extensive experiments on datasets from the TREC Microblog Tracks show that our simple models not only achieve better effectiveness than existing approaches that are far more complex or exploit a more diverse set of relevance signals, but are also much faster.", "Implementations of our samCNN ( S imple A ttention-based M atching CNN) models are shared with the community to support future work.", "1 1 Introduction Despite a large body of work on neural ranking models for traditional ad hoc retrieval over web pages and newswire documents (Huang et al., 2013; Shen et al., 2014; Guo et al., 2016; Xiong et al., 2017; Mitra et al., 2017; Pang et al., 2017; Dai et al., 2018; McDonald et al., 2018), there has been surprisingly little work (Rao et al., 2017) on applying neural networks to searching short social media posts such as tweets on Twitter.", "Rao et al. (2019) identified short document length, informality of language, and heterogeneous relevance signals as main challenges in relevance modeling, and proposed the first neural model specifically designed to handle these characteristics.", "Evaluation on a number of datasets from the TREC Microblog Tracks demonstrates state-of-the-art effectiveness as well as the necessity of different model components to capture a multitude of relevance signals.", "In this paper, we also examine the problem of modeling relevance for ranking short social media posts, but from a complementary perspective. As Weissenborn et al. (2017) notes, most sys-tems are built in a top-down process: authors propose a complex architecture and then validate design decisions with ablation experiments. However, such experiments often lack comparisons to strong baselines, which raises the question as to whether model complexity is empirically justified. As an alternative, they advocate a bottom-up approach where architectural complexity is gradually increased. We adopt exactly such an approach, focused exclusively on word-level modeling. 
As shown in Figure 1, we examine variants of a simple, generic architecture that has emerged as best practice in the NLP community for tackling modeling problems over two input sequences: a Siamese CNN architecture for learning representations of both inputs (a query and a social media post in our case), followed by fully-connected layers that produce a final relevance prediction (Severyn and Moschitti, 2015; He et al., 2016; Rao et al., 2016),", "which we refer to as a General Sentence Encoder in Section 2.1.", "Further adopting best practices, we incorporate query-aware convolutions with an average aggregation layer in the representation learning process.", "Recently, a number of researchers (Conneau et al., 2017; Mohammed et al., 2018) have started to reexamine simple baselines and found them to be highly competitive with the state of the art, especially with proper tuning.", "For example, the InferSent approach (Conneau et al., 2017) uses a simple BiLSTM with max pooling that achieves quite impressive accuracy on several classification benchmarks.", "Our contribution is along similar lines, where we explore simple yet highly effective models for ranking social media posts, to gain insights into query-post relevance matching using standard neural architectures.", "Experiments with TREC Microblog datasets show that our best model not only achieves better effectiveness than existing approaches that leverage more signals, but also demonstrates a 4x speedup in model training and inference compared to a recently-proposed neural model.", "Our model comprises a representation learning layer with convolutional encoders and a simple aggregation layer.", "These architectural components are described in detail below.", "The general sentence encoder applies convolutions with randomly initialized kernels to learn semantic representations for text.", "More formally, given query q and post p as sentence inputs, we first convert them to embedding matrices Q and P through an embedding lookup layer, where Q ∈ R^{n×d} and P ∈ R^{m×d}, d is the dimension of the embeddings, and n and m are the number of tokens in q and p, respectively.", "Then we apply a standard convolution operation with kernel window size k over the embedding matrices Q and P.", "The convolution operation is parameterized by a weight term W ∈ R^{F×k×d} and a bias term b_w ∈ R^F, where F is the number of convolutional kernels.", "This generates semantic representations O_q ∈ R^{n×F} and O_p ∈ R^{m×F}, on which max pooling and an MLP are applied to obtain the query representation g_q ∈ R^d and the post representation g_p ∈ R^d.", "The weakness of the kernels in the general sentence encoder is that they do not incorporate knowledge from the query when attempting to capture feature patterns from the post.", "Inspired by attention mechanisms (Bahdanau et al., 2014), we propose two novel approaches to incorporate query information when encoding the post representation, which we introduce below.", "Query-Aware Attention Encoder (QAtt): In QAtt (Figure 2, left), for each query token, we construct a token-specific convolutional kernel to inject the query information.", "Unlike methods that apply attention mechanisms after the sentence representations are generated (Bahdanau et al., 2014; Seo et al., 2016), our approach aims to model the representation learning process jointly with an attention mechanism.", "Concretely, W_QAtt^{t_q} = U ⊙ Q_{t_q} (1), where U ∈ R^{F×k×d} represents trainable parameters, Q_{t_q} is the embedding of token t_q with size R^d, and W_QAtt^{t_q} ∈ R^{F×k×d}.", "The element-wise product is applied between the token embedding Q_{t_q} and the last dimension of 
the kernel weights U.", "In other words, we create F convolutional kernels for each query token, where each kernel is injected with the embedding of that query token via the element-wise product.", "Figure 2 (left) illustrates one kernel for the query token 'Evernote', where the element-wise product is represented by blue dotted arrows.", "When a QAtt token-specific kernel is applied, a window slides across the post embeddings P and learns soft matches to each query token to generate query-aware representations.", "On top of the QAtt kernels, we apply max-pooling and an MLP to produce a set of post representations {h_i}, with each h_i ∈ R^d standing for the representation learned from query token t_{q_i}.", "Position-Aware Attention Encoder (PAtt): In the QAtt encoder, token-specific kernels learn soft matches to the query.", "However, they still ignore positional information when encoding the post semantics, which has been shown to be effective for sequence modeling (Gehring et al., 2017).", "To overcome this limitation, we propose an alternative attention encoder that captures positional information through interactions between query embeddings and post embeddings.", "Given a query token t_q and the j-th position in post p, we compute the interaction scores by taking the cosine similarity between the word embedding of token t_q and the post tokens t_{p_j}, ..., t_{p_{j+k-1}} from position j to j+k-1: S_j = [cos(t_q, t_{p_j}); ...; cos(t_q, t_{p_{j+k-1}})] (2), where S_j ∈ R^{k×1} and k is the width of the convolutional kernel we are learning.", "That is, for each token in the post within the window, we compute its cosine similarity with query token t_q.", "We then convert the similarity vector S_j into a matrix: S̃_j = S_j · 1, S̃_j ∈ R^{k×d} (3), where 1 ∈ R^{1×d} with each element set to 1.", 
"Finally, the PAtt convolutional kernel for query token t_q at the j-th position is constructed as: W_PAtt^{t_q,j} = V ⊙ S̃_j (4), where V ∈ R^{F×k×d} represents the trainable parameters.", "The element-wise product is applied between the attention weights S̃_j and the last two dimensions of the kernel weights V.", "Conceptually, this operation can be thought of as adding a soft attention weight (with values in the range [0, 1]) to each convolutional kernel, where the weight is determined by the cosine similarity between the token from the post and a particular query token; since cosine similarity is a scalar, we fill in the value in all d dimensions of the kernel, where d is the size of the word embedding.", "This is illustrated in Figure 2 (right), where we show one kernel of width two for the query token 'Evernote'.", "The brown (green) arrows capture the cosine similarity between the query token 'Evernote' and the first (second) token from the post in the window.", "These values then serve as weights in the kernels, shown as the hatched areas.", "Similar to QAtt, the PAtt encoder with max-pooling and an MLP generates a set of post representations {h_i}, with each h_i standing for the representation learned from query token t_{q_i}.", "It is worth noting that both the QAtt and PAtt encoders introduce no extra parameters over a general sentence encoder.", "However, incorporating the query-aware and position-aware information enables more effective representation learning, as our experiments show later.", "The QAtt and PAtt encoders can also be used as plug-in modules in any standard convolutional architecture to learn query-biased representations.", "After the representation layer, a set of vectors {g_q, g_p, {h_i}} is obtained.", "Because our model yields different numbers of h_i for queries of different lengths, further aggregation is needed to output a global feature v.", "We directly average all vectors, v = (1/N_q) Σ_i h_i, as the aggregated feature, where N_q is the length of the query.", "To obtain a final relevance score, the feature vectors g_q, g_p, and v are concatenated and fed into an MLP with ReLU activation for dimensionality reduction to obtain o, followed by batch normalization and a fully-connected layer with softmax to output the final prediction.", "The model is trained end-to-end with a Stochastic Gradient Descent optimizer using a negative log-likelihood loss.", "Datasets and Hyperparameters.", "Our models are evaluated on four tweet test collections from the TREC 2011-2014 Microblog (MB) Tracks (Ounis et al., 2011; Soboroff et al., 2012; Lin and Efron, 2013; Lin et al., 2014).", "Each dataset contains around 50-60 queries; detailed statistics are shown in Table 1.", "As with Rao et al. (2019), we evaluated our models in a reranking task, where the inputs are up to the top 1000 tweets retrieved by bag-of-words ranking using query likelihood (QL).", "We ran four-fold cross-validation split by year (i.e., train on three years' data, test on one year's data) and followed Rao et al. (2019) for sampling validation sets.", "For metrics, we used average precision (AP) and precision at rank 30 (P30).", "We conducted Fisher's two-sided, paired randomization tests (Smucker et al., 2007) to assess statistical significance at p < 0.05.", "The best model hyperparameters are shown in Table 2.", 
Baselines.", "On top of QL, RM3 (Abdul-Jaleel et al., 2004) provides strong non-neural results using pseudo-relevance feedback.", "We also compared against MP-HCNN (Rao et al., 2019), the first neural model that captures specific characteristics of social media posts, which improves over many previous neural models, e.g., K-NRM (Xiong et al., 2017) and DUET (Mitra et al., 2017), by a significant margin.", "To the best of our knowledge, Rao et al. (2019) is the most effective neural model to date.", "We compared against two variants of MP-HCNN; MP-HCNN+QL includes a linear interpolation with QL scores.", "QL on TREC 2013 (queries 111170).", "Table 3 shows the effectiveness of all variants of our model, compared against previous results copied from Rao et al. (2019).", "Model 1 illustrates the effectiveness of the basic BiCNN model with a kernel window size of two; combining different window sizes (Kim, 2014) doesn't yield any improvements.", "It appears that this model performs worse than the QL baseline.", "Comparing Model 2 to Model 1, we find that query-aware kernels contribute significant improvements, achieving effectiveness comparable to the QL baseline.", "With Model 3, which captures positional information with the position-aware encoder, we obtain competitive effectiveness compared to Model 8, the full MP-HCNN model that includes interpolation with QL.", "Note that Model 8 leverages additional signals, including URL information, character-level encodings, and external term features such as tfidf.", "With Model 4, which interpolates the position-aware encoder with QL, we obtain state-of-the-art effectiveness.", "Per-Query Analysis.", "In Figure 3, we show per-query AP differences between the PAtt model and the QL baseline on the TREC 2013 dataset.", "As we can see, PAtt improves on most of the queries.", "For the best-performing query 164 lindsey vonn sidelined , we project the hidden states o into a low-dimensional space using t-SNE (Maaten and Hinton, 2008), shown in Figure 4.", "We observe that with the basic BiCNN model (left), relevant posts are scattered.", "With the addition of an attention mechanism (either QAtt in the middle or PAtt on the right), most of the relevant posts are clustered together and separated from the non-relevant posts.", "With PAtt, there appears to be tighter clustering and better separation of the relevant posts from the non-relevant posts, giving rise to a better ranking.", "We confirmed similar behavior in many queries, which illustrates the ability of our position-aware attention encoder to learn better query-biased representations compared to the other two models.", "For the worst-performing query 125 Oscars snub Affleck , the PAtt model lost 0.47 in AP and 0.11 in P30.", "To diagnose what went wrong, we sampled the top 30 posts ranked by the PAtt model and counted the number of posts that contain different combinations of the query terms in Table 4.", "The PAtt model indeed captures matching patterns, mostly on Oscars and Affleck .", "However, from the relevance judgments we see that snub is the dominant term in most relevant posts, while Oscars is often expressed implicitly.", "For example, QL assigns more weight to the term snub in the relevant post argo wins retributions for the snub of ben affleck because of the term's rarity; in contrast, the position-aware encoder places emphasis on the wrong query terms.", "Model Performance.", "Finally, in terms of training and inference speed, we compared the PAtt model with MP-HCNN on a machine with a 
GeForce GTX 1080 GPU (batch size: 300).", "In addition to being more effective (as the above results show), PAtt is also approximately 4x faster.", "In this paper, we proposed two novel attention-based convolutional encoders that incorporate query-aware and position-aware information with minimal additional model complexity.", "Results show that our model is simpler, faster, and more effective than previous neural models for searching social media posts.", "This research was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada." ]
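For readers who want the kernel construction above spelled out, here is a hedged sketch of the PAtt kernel of Equations 2-4; the shape conventions follow the text, but the function and its names are an illustrative reading rather than the released samCNN implementation.

```python
import torch
import torch.nn.functional as F

def patt_kernel(V: torch.Tensor, q_tok: torch.Tensor,
                post_embs: torch.Tensor, j: int, k: int) -> torch.Tensor:
    """Build the PAtt kernel for one query token at post position j.

    V:         trainable kernel weights, shape (F, k, d)
    q_tok:     query token embedding, shape (d,)
    post_embs: post token embeddings, shape (m, d)
    """
    window = post_embs[j:j + k]                                  # (k, d)
    # Eq. 2: cosine similarity between the query token and each window token
    s = F.cosine_similarity(window, q_tok.unsqueeze(0), dim=-1)  # (k,)
    # Eq. 3: tile the scalar similarities across all d embedding dimensions
    s_mat = s.unsqueeze(-1).expand(-1, V.size(-1))               # (k, d)
    # Eq. 4: element-wise product with the trainable kernel weights
    return V * s_mat                                             # (F, k, d)
```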
[ "objective", "objective", "result", "abstain", "other", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "result", "other" ]
[ "Traditional Key-value Memory Neural Networks (KV-MemNNs) are proved to be effective to support shallow reasoning over a collection of documents in domain specific Question Answering or Reading Comprehension tasks.", "However, extending KV-MemNNs to Knowledge Based Question Answering (KB-QA) is not trivia, which should properly decompose a complex question into a sequence of queries against the memory, and update the query representations to support multi-hop reasoning over the memory.", "In this paper, we propose a novel mechanism to enable conventional KV-MemNNs models to perform interpretable reasoning for complex questions.", "To achieve this, we design a new query updating strategy to mask previously-addressed memory information from the query representations, and introduce a novel STOP strategy to avoid invalid or repeated memory reading without strong annotation signals.", "This also enables KV-MemNNs to produce structured queries and work in a semantic parsing fashion.", "Experimental results on benchmark datasets show that our solution, trained with question-answer pairs only, can provide conventional KV-MemNNs models with better reasoning abilities on complex questions, and achieve state-of-art performances.", "Memory Neural Networks (MemNNs) [Weston et al. , 2014; Sukhbaatar et al. , 2015b] are a fam-ily of neural network models that aim to learn how to reason with a long-term memory component and various inference components.", "The memory component serves as a knowledge base to recall facts from the past.", "MemNNs have been successfully applied in many natural language processing applications such as question questiontext Embedding Key Addressing query representation Hashing k 1 k 2 k 3 k N v 1 v 2 v 3 v N memory slots ValueReading queryupdate Ranking answer representation Embedding candidate answer representation answertext answer candidates + Figure 1: The Key-Value Memory Neural Network Architecture.", "answering and reading comprehension (RC).", "Recently, Miller et al.", "[2016] proposed a variant of MemNNs, namely Key-Value Memory Neural Networks (KV-MemNNs), which generalizes the original MemNNs by storing facts in a key-value structured memory.", "Figure 1 illustrates the basic architecture of KV-MemNNs, which consists of five components.", "The question is first fed to the Embedding component and the Hashing component.", "The former converts the incoming question to an internal feature representation.", "The Hashing component uses the question to pre-select a list of facts to compose the key-value memory.", "The Key Addressing component takes the input question representation and the current memory to compute the relevance probability between the question and each key in the memory.", "The Value Reading component reads the values of all addressed memories by taking their weighted sum using the relevance probabilities.", "The obtained value representation is then added to the query representation to change the query focus for the next round of memory reading.", "After multiple hops of reasoning over the memories, the final value representation is treated as the answer representation to perform the final prediction over all candidate answers in the Ranking component.", "The KV-MemNNs have been shown to support shallow reasoning in domain-specific knowledge based question answering (KB-QA) tasks such as MovieQA [Tapaswi et al. 
, 2016].", "However, when applied to a more challenging scenario, e.g., open domain KB-QA, the KV-MemNNs models do not perform as well as expected, possibly due to two reasons.", "First of all, the focus of conventional KV-MemNNs is about understanding the facts in the memory rather than properly understanding the questions, where the latter requires incrementally decomposing a complex natural language question into a set of focused queries with the help of the facts in the memory.", "However, in open domain KB-QA, questions are usually more complicated, e.g., multi-relation questions such as who does maggie grace play in taken , where more than one entity and relation are mentioned.", "Secondly, as shown in Figure 1, KV-MemNNs usually work in an information retrieval (IR) fashion, which first retrieve a set of candidate answers from KB, then rank them by computing the similarity between the value representation and candidates, and finally select the top one or a fixed number of top candidates as the answer.", "We can imagine that it is not trivial for such IR-styled KV-MemNNs to properly resolve complex constraints from natural language questions, or to handle questions with multiple answers.", "We believe that an ideal framework for open domain KB-QA should first understand the natural language questions, explicitly represent the meaning, and make the answer retrieval process more interpretable.", "To build such an interpretable KV-MemNN KB-QA model, we need to deal with the following challenges: (1) KV-MemNNs often read the memory repeatedly since they do not know when to stop; (2) during multiple memory readings, conventional KV-MemNNs often fail to precisely update the queries for multi-relation questions; (3) strong annotations are usually required to train an interpretable QA model, e.g., the supervision for the memory selection at each hop.", "To address the challenges, we propose a novel solution to make conventional KV-MemNNs feasible to open domain KB-QA.", "In particular, we introduce a flexible KV-MemNN solution that can work in both the IR and semantic parsing style with large-scale memory.", "To this end, we first present a novel query updating method that is able to decompose complex questions and precisely address a relevant key at each hop.", "Secondly, we introduce a new STOP strategy during memory readings, which imports a special key STOP into the memory and guides our model to avoid repeated or invalid memory readings.", "In addition, our proposed model can learn to reason over memory slots with weak supervision, e.g., question-answer pairs only, opposing the strong supervision that most current neural semantic parsers demand, which incurs high labor costs.", "Experimental results on two benchmark datasets show that our proposed model can not only enhance the reasoning capability of KV-MemNNs, but also be flexible enough to work as a semantic parser, with state-of-the-art performances.", "There are usually two main challenges in the open domain KB-QA task: (1) it often requires the ability to properly analyze and represent the natural language questions against knowledge bases, especially for those involving multiple entities and relations, which we also call as reasoning over the KBs; (2) training such interpretable question understanding models requires considerable strong annotations, which is expensive to obtain in practice.", "Existing works address these using either the information retrieval (IR) based solutions or the semantic paring (SP) based approaches.", 
"The IR-based models [Yao and Van Durme, 2014; Yao, 2015; Bast and Haussmann, 2015; Bordes et al. , 2015; Dong et al. , 2015; Jain, 2016; Lai et al. , 2019] tackle the KB-QA task by developing various ranking models towards the candidate answers, which implicitly meet the reasoning requirements during the candidate-searching step or in designing the ranking functions.", "In contrast, the SP-based approaches [Berant et al. , 2013; Kwiatkowski et al. , 2013; Berant and Liang, 2014; Reddy et al. , 2014; Yih et al. , 2015; Xu et al. , 2016] explicitly represent the meaning of questions as logical forms or structured queries that naturally support reasoning over structured KBs.", "More recently, memory based reasoning solutions [Weston et al. , 2014; Miller et al. , 2016] are proposed to tackle the task.", "[Weston et al. , 2014] proposed the Memory Neural Networks (MemNNs), which enable the neural network models read/write on an external memory component, and are further extended into an End-to-End fashion [Sukhbaatar et al. , 2015a].", "[Miller et al. , 2016] further proposed the Key-Value Memory Network, which generalizes the MemNN by storing facts in a key-value structured memory.", "Both of the two models could perform shallow reasoning over the memory, since they can find answers by consecutively making predictions over multiple memory slots.", "Compared to the flat memory slots in MemNNs, the Key-Value design can precisely accommodate more complex structured resources, e.g., discriminating subjects and objects in structured KBs, thus makes KV-MemNNs more flexible and better fit to different applications.", "These neural networks models are often trained in an end-to-end fashion, making the models relatively less interpretable.", "Conceptually similar to our STOP strategy, Shen et al.", "[2017] propose the termination gate mechanism based on a random variable generated from the internal state for reading comprehension.", "In contrast, our model attempts to learn a general STOP key embedding based on the incrementally updated query representations, which can be learned from question-answer pairs only and lead to more explicit reasoning interpretations over structured KBs.", "This make our model potentially suit more real scenarios.", "Our work is also related to [Jain, 2016; Bao et al. 
which are designed to support reasoning for multi-relation questions by exploring the relation path and certain KB schema, e.g., CVT nodes, in Freebase.", "The former also considers previously-addressed keys during query updating, but ignores the value representations.", "Thus, it still requires predefined rules and thresholds to artificially add intermediate value representations to update the query.", "The latter also relies on a set of predefined rules to perform reasoning over Freebase.", "In contrast, our model incorporates both the key and value representations into the query representations, and updates them in a more uniform way, thus being more general and supporting more reasoning scenarios in KB-QA.", "For a given question x, a knowledge base KB and the question's answer y, we aim to learn a model such that ŷ = F(x, KB),", "where ŷ is the predicted answer.", "In standard KV-MemNNs, the function F is composed of five components, i.e., key hashing, key addressing, value reading, query updating and answer prediction.", "Next, we will introduce how we design a novel mechanism upon those components to equip KV-MemNNs with more powerful reasoning ability.", "The architecture of our model is shown in Figure 2.", "The knowledge facts in a KB are usually organized as triples <subject, relation, object>, such as <Maggie Grace, fb:actor..character, Kim>.", "In KV-MemNNs, these facts are stored in a key-value structured memory, where the key k is composed of the left-hand-side entity (subject) and the relation, e.g., Maggie Grace fb:actor..character, and the value v is the right-hand-side entity (object), e.g., Kim.", "Traditional KV-MemNNs first pre-select a list of candidate KB facts (k_1, v_1), ..., (k_n, v_n) to compose the key-value memory.", "Particularly, one can first detect entity mentions in the question, and include in the memory all KB facts that contain one of those entities as subject.", "In our experiments, we directly use the entity linking results of Xu et al. [2016] and filter out relations that have more than 100 objects.", "In practice, we find this strategy effectively avoids memory explosion for popular entities.", "In order to help the model avoid repeated or invalid memory reading, we introduce a special key, STOP, into the memory for all questions.", "The corresponding value of the STOP key is a special symbol represented by an all-zero vector.", "The STOP key is designed to tell our model that we have already accumulated sufficient facts to answer the question, so there is no need to find other knowledge facts from the memory in later hops.", "Key addressing is basically a matching process, aiming to find the most suitable key for a given query.", "It can be formulated as a function that computes the relevance probability p_i between the question x and each key k_i: p_i = Softmax(A φ(x) · A φ(k_i)), where φ(·) is a feature map of dimension D and A is a d×D matrix.", "The values of the memories are then read by taking their weighted sum using the relevance probabilities, and the value representation o is returned to locate the answers or update the 
query for further memory addressing: o = Σ_i p_i A φ(v_i). [Figure 2: a running example of our key-value memory network model answering the question who does maggie grace play in taken — at hop 1, addressing over memory slots such as [Maggie Grace fb:actor..character | Kim], [Maggie Grace fb:actor..character | Alice], [Maggie Grace fb:actor..film | Malice in Wonderland], [Taken fb:film..character | Kim] and [STOP | zero vector] selects Maggie Grace fb:actor..character; at hop 2, the updated query selects Taken fb:film..character; the addressed keys are appended to the structured query.]", "There are many methods to represent questions and memory slots, including keys and values.", "Here we simply use the bag-of-words model to produce the representations, where we sum the embeddings of the words in the question or memory slot to obtain their vector representations.", "After reading the addressed memory, the initial query representation q = A φ(x) should be updated so that the new evidence o collected in the current hop can be properly considered when retrieving more pertinent information in later steps.", "Traditional KV-MemNNs simply add the initial query q and the returned value o, then perform a linear transformation to obtain the new query representation.", "This updating strategy is effective in the RC task, since the questions in RC are relatively simple, and their main emphasis is on selecting proper memory values to change the focus of the query, until reaching the answer.", "However, the questions in open-domain KB-QA tasks are more complicated, usually involving multiple relations or constraints.", "Take the question who does maggie grace play in taken as an example: the expected answer should satisfy two constraints: (1) Maggie Grace plays this answer; and (2) this answer is from the movie Taken.", "To answer this question, the model needs to perform two hops of inference consecutively, i.e., matching the two keys Maggie Grace fb:actor..character and Taken fb:film..character in the memory.", "Conventional query updating methods, e.g., adding q and o, may not be helpful in guiding the model to predict the other key in later hops, and may possibly even hurt performance, since they may introduce unrelated information into the new query.", "Intuitively, in KB-QA, masking previously-addressed keys from the query could benefit later inference, since the model will be able to focus on the next hop, e.g., the movie Taken.", "Therefore, we take into account the query and the addressed memories at the t-th hop when updating the query q_{t+1} for the next hop: q_{t+1} = M_t (q_t ⊕ Σ_i p_i^t A φ(k_i) ⊕ Σ_i p_i^t A φ(v_i)), where ⊕ denotes the concatenation of vectors, and the two sums are the addressed key and addressed value representations, respectively.", "The query updating step is parameterized with a different matrix M_t at the t-th hop, which is designed to learn a proper way to combine these three representations.", "Conventional KV-MemNNs use the value o at the final hop of inference to retrieve the answers, by simply computing the similarity between o and all candidate answers.", "This may be risky for our task.", "First of all, many questions in open-domain KB-QA have multiple answers, but KV-MemNNs are supposed to select the candidate with the highest similarity score as the only answer.", "Secondly, the value representation at the final step may not fully capture the answer information throughout the whole inference 
process.", "For example, for multi-constraint questions, the model may address different constraints at different hops, which requires the model to take the value representations in every hop into consideration in order to produce the final answer representation from a global view.", "We therefore propose to accumulate the value representations of all hops to make the resulting answer representation lean more on satisfying all constraints.", "We compute the answer representation m at each hop by adding the value representations of both the current hop and the previous one: m t +1 = o t +1 + o t , m 0 = o 0 .", "With the final m at hand, we could follow traditional IR-based methods to use the final A nswer R epresentation to find the best match over all possible candidate values in the memory, namely the AR approach.", "Instead, we can also collect all the best matched keys at every hop to construct a S tructured Q uery and execute it over the KB to obtain all qualified answers, namely the SQ approach.", "Specifically, the structured query can be constructed as: we select the keys that have the highest relevance probabilities in every hop, resulting a sequence of keys sk 0 ,... .", "Starting from sk 0 , we append the key sk i into the final structured query until we see the STOP key for the first time at the k -th hop, i.e., SQ = { sk 0 , ..., sk k 1 } .", "Apparently, the SQ approach can easily output all qualified answers through excuting the queries over the KB, while the AR approach still has difficulties in selecting multiple answers from a ranked list.", "However, keep in mind that, we do not have gold-standard structured queries for training.", "As a result, we have to adopt different strategies to find answers in the training and test phases.", "During training , after a fixed number H hops, we follow the AR approach to use the final m H to compute a prediction over possible candidates, and we train the model by minimizing the cross-entropy between the prediction and gold-standard answers.", "During testing , we follow the SQ approach to collect the final answers by constructing and executing the structured queries over the KB to obtain all answers.", "As illustrated in Figure 2, for the example question who does maggie grace play in taken , our model selects three keys, i.e., [ < maggie grace, fb:actor..character > , < taken, film..character > , and < STOP > ].", "We combine the previous two triples to construct the structured query, which is then executed over the KB to get the answer.", "We should point out that our model could still use the AR approach to predict the answers just like in the training phase.", "And as far as we know, our model is the first one that is suitable to tackle the KB-QA task in both the information retrieval and semantic parsing fashions.", "Given an input question x , the network with parameters uses the answer representation m hx to perform the prediction over candidate answers at hop h , resulting a prediction vector a hx , where the i -th component is respect to the probability of candidate answer i .", "We denote t x as the target distribution vector.", "We compute the standard cross-entropy loss between a hx and t x , and further define the objective function over all training data: L ( ) = (cid:88) x H (cid:88) h =1 t x log a hx + || || 2 where is a vector of regularization parameters.", "Intuitively, this loss function makes our model generate shorter paths to reach answers from the question.", "On the other hand, it encourages the query 
updating method to mask the information already addressed in previous hops for the next query representation.", "This design, together with the query updating method, is the key to learning the STOP strategy.", "We evaluate our model on two benchmark datasets to investigate whether our enhanced KV-MemNN model can better perform reasoning over the memory in the open-domain KB-QA task, and whether it can make the QA procedure more interpretable.", "We use the WebQuestions dataset [Berant et al., 2013] as our main dataset, which contains 5,810 question-answer pairs.", "This dataset is built on Freebase [Bollacker et al., 2008] and all answers are Freebase entities.", "We use the same training, development and test split as [Berant et al., 2013], containing 3000, 778 and 2032 questions, respectively.", "Our model is trained using the Adam optimizer [Kingma and Ba, 2014] with a mini-batch size of 60.", "The learning rate is set to 0.001.", "The model complexity is penalized by adding L2 regularization to the cross-entropy loss function.", "Gradients are clipped when their norm exceeds 20.", "The hop size is set to 3.", "We initialize word embeddings using pre-trained word representations from Turian et al.", "[2010] and the dimension of the word embeddings is set to 50.", "We use the average question-wise $F_1$ as our evaluation metric.", "We compare our model with representative IR-based KB-QA models and several state-of-the-art semantic parsing models.", "We also include different variants of our model for comparison, to shed light on the advantages of our proposed strategies, especially our three important building blocks: the STOP strategy, how to update a query, and how to obtain the answers.", "CQU+AR: uses the Conventional Query Updating method (CQU), which performs a linear transformation over the sum of the query and value representations.", "This method adopts the AR approach to predict answers, where the candidate with the highest probability is selected as the answer.", "KVQU+AR: applies the approach introduced in this paper that additionally considers both the Key and Value representations in the Query Updating (KVQU).", "This method updates the query representation after reading the memory values at each hop of inference.", "Note that this model can be seen as a variant of Jain [2016], which is essentially an IR-based approach.", "CQU+SQ: uses the CQU method to update the query representations, and applies the SQ approach to obtain the answers.", "KVQU+SQ: uses the KVQU method to update the query representations, and adopts the SQ approach to obtain the answers.", "STOP+CQU+AR: introduces the STOP key into the memory, but still uses the conventional query updating method and answer representations to find the answers.", "STOP+CQU+SQ: introduces the STOP key and uses the conventional query updating method, but uses the SQ approach to obtain the answers.", "STOP+KVQU+AR: introduces the STOP key into the memory, uses the KVQU approach to update the query representations, and adopts the AR approach to retrieve the answers.", "STOP+KVQU+SQ: is our main model that introduces the STOP key, applies the KVQU query updating method, and retrieves answers using the post-constructed structured queries.", "Table 1 summarizes the performance of the various methods on the test set of WebQuestions.", "We can see that our main model (STOP+KVQU+SQ) performs the best among all its variations, and significantly outperforms the state-of-the-art methods on WebQuestions (with one-tailed t-test significance of
p < 0.05).", "We can also see that even the conventional KV-MemNN model (i.e., model", "(a)) could still outperform traditional semantic parsing models [Berant et al. , 2013; Berant and Liang, 2014], showing the effectiveness of memory networks in organizing and utilizing structured data.", "By replacing the CQU method with our proposed KVQU method, the KV-MemNN model (i.e., model", "(b)) could further gain an improvement of 1.2%, indicating a proper query representation updating method is critical to KV-MemNNs.", "After introducing the STOP strategy, the KV-MemNN model is capable to perform proper multi-hop reasoning over the memory, thus outperform most existing methods by a large margin except Bao et al.", "[2016].", "Previously, Bao et al.", "[2016] achieves the state-Question Addressed Key who did armie hammer play in the social network Armie Hammer fb:award", "of-the-art performance on the WebQuestions by explicitly addressing the multi-constraint questions.", "Specifically, Bao et al.", "[2016] designs a multi-constraint graph for such questions based on the Freebase schema, and introduces a set of predefined rules to solve complex representations such as max , min , top -X and so on.", "On the other hand, our model only requires the KB facts to be stored in a Key-Value memory, independent of certain KB schemas.", "Although those complex expressions in WebQuestions could be easily covered by hand-crafted rules as many did, our model does not work upon such rules, and we think it is crucial to properly enhance KV-MemNNs with such more advanced reasoning abilities, which we leave for future work.", "As shown Table 1, we can see that when introducing the STOP strategy, almost all models improve by around 4%, not only in the SQ setting (e.g., from", "(d) to", "(f)), but also in the AR setting (e.g., from", "(a) to", "(e)).", "This is not surprising, since the STOP key is introduced to help our model learn to determine when it should stop reading the memory to avoid repeated or invalid addressing.", "This apparently will lead to more accurate key addressing at each hop, and produce more accurate structured queries.", "And, better key addressing can also help to generate better value representations at each hop, and finally better answer representation at the last hop.", "Keep in mind that we obtain such improvement in the case that we do not have gold-standard annotations for the STOP key, but with question-answer pairs only.", "To better investigate how the STOP key works, we randomly select 200 questions from the test set and manually analyze the structured queries generated by our model.", "We find that, for 184 questions (92%) our model predicts the STOP key after one hop of inference and continuously predicts STOP keys in later hops.", "For the remaining questions, our model predicts two distinct keys before predicting the STOP key.", "Among these 184 questions, 178 questions (96.7%) can be resolved using exactly a one-triple query.", "For the remaining 6 questions, it requires two distinct triples to find the answers.", "This indicates that our model can successfully utilize the STOP strategy to avoid repeated or invalid reading over the memory, at least, for simple questions.", "For the multi-relation questions which require at least two hops of inference to find the answers, we evaluate our model on 326 multi-constraint questions selected in Bao et al.", "[2016].", "We find that for 283 questions (86.8%), our model performs two hops of inference before predicting the STOP key 
while for the remaining 13.2%, our model performs only one hop of inference.", "This demonstrates that even without strong annotations in terms of structured queries, our model still manages to recognize the multi-relation structure and properly stop the invalid reading process.", "Table 2 illustrates several examples before and after using the STOP strategy.", "In KV-MemNNs, it is important to properly update the query representation after each hop, since the updated query will be used to address more focused information in the next hop, which is especially crucial to support multi-hop reasoning over the memory.", "We experimented with two query updating approaches, the traditional adding-based method, CQU, and our proposed KVQU, which additionally considers the addressed key representations of previous hops to update the query representation.", "From Table 1, we can see that no matter which technique is used to retrieve the final answers, our model always benefits from replacing CQU with our KVQU.", "The main reason may be that KVQU learns to mask already-addressed information and retain the untouched parts, which leads to more accurate key addressing and more expressive answer representations.", "But we also observe that by switching from AR to SQ, STOP+KVQU+SQ achieves more improvement than STOP+CQU+SQ.", "One possible reason is that the KVQU method, together with the STOP strategy, can help address the most relevant keys (i.e., the top-1 key) more accurately than CQU does at each hop, which results in more accurate structured queries.", "We randomly select 50 multi-constraint questions from the test set of WebQuestions and analyze the addressed keys at each hop.", "We find that for 48 questions, the model with CQU (model", "(f)) repeatedly selects the same keys that have the highest relevance probabilities at the first two hops.", "However, when replacing CQU with KVQU, the model addresses different keys at the first two hops (examples shown in Table 2).", "One possible reason is that, in contrast to CQU, which only uses the value representation of the current hop to update the query, KVQU additionally considers the addressed keys of the current hop, and aims to mask the already-addressed information, so that in later hops, the model will focus on the remaining untouched part of the question.", "We investigate two different answer retrieval methods, the IR-style method AR and the semantic-parsing-style method SQ, in the experiments.", "By comparing CQU+AR", "(a) and CQU+SQ", "(c) in Table 1, we can see that replacing the answer [Table 3: Performance (answer $F_1$) of STOP+KVQU+SQ and STOP+KVQU+AR with different hop sizes on the development set of WebQuestions. Hop size 1: 38.7% / 35.6%; 2: 45.9% / 41.2%; 3: 53.2% / 50.4%; 4: 53.1% / 46.7%; 5: 53.0% / 44.2%.]", "retrieval method does not boost the performance, but actually hurts it.", "This is not surprising, because with the vanilla CQU, the model does not know how to stop reading the memory, and thus may select repeated/invalid keys after the first hop, which will produce incorrect structured queries.", "Similarly, KVQU+SQ also suffers from incorrect structured queries.", "After introducing the STOP key into the memory, we find that no matter which query updating method is used, the models using structured queries (STOP+*+SQ) significantly outperform their corresponding versions using the ranking-based method (STOP+*+AR).", "Apart from the STOP strategy, the improvement may come from two aspects.", "First, the
structured queries in SQ are built from the best key at every hop, and thus capture more global information throughout the reasoning procedure, while the AR method only uses the $o$ of the last hop to retrieve answers.", "Secondly, the models with SQ can easily output multiple qualified answers from the KB, while the methods with AR can only output the top one from a ranked list in the memory.", "Moreover, from a practical perspective, the SQ methods provide more interpretability than the ranking-based methods.", "To investigate the impact of the hop size on model performance, we evaluate the models STOP+KVQU+SQ and STOP+KVQU+AR with various hop-size settings on the development set of WebQuestions.", "We evaluate both models because we wonder which answer retrieval method is more sensitive to the hop size.", "As shown in Table 3, the model with the AR answer retrieval method achieves the best performance with a hop size of 3.", "Then, as the hop size increases, the performance drops.", "In contrast, the model with the SQ method also achieves the best performance when the hop size is 3.", "When the hop size increases, the performance does not drop significantly, but remains almost unchanged.", "We think the reason may be that the model with the AR method keeps predicting keys as the hop size increases, which inevitably introduces noise into the answer representations.", "On the other hand, the model with the SQ method is less sensitive to the hop size, since once it sees the STOP key, the later predictions do not affect the resulting structured query.", "To further examine our model under different conditions, besides WebQuestions, we also evaluate our main model on the QALD-4 dataset [Unger et al.], which is built upon another KB, DBpedia [Auer et al., 2007].", "Like Freebase, DBpedia stores real-world facts in the triple format, making it easier for our model to adapt to this new KB.", "The QALD-4 dataset consists of three QA datasets, namely, Multilingual QA, Biomedical QA and Hybrid QA.", "Here we evaluate our model on the multilingual QA dataset, which contains 250 English question-SPARQL pairs, where 200 questions (80%) are used for training and 50 questions for testing.", "These questions are more complicated than those of WebQuestions, e.g., all QALD-4 questions require at least two hops of inference over the KB to answer.", "Table 4 lists the results of our model and other participating systems [Unger et al.
].", "We can see that our model can achieve the best performance on this dataset, without importing extra rules.", "We analyze the errors of our main model on the development set of WebQuestions.", "Around 80% errors are caused by incorrect key addressing.", "The key addressing errors are mainly due to (1) the entities in the addressed keys are incorrect.", "Entity linking itself is a challenging task and we do not employ any existing entity linking tools to incorporate the linking confidence score, which may further improve our model; (2) the relations in the addressed keys are incorrect, which is mainly caused by insufficient context in the question.", "For example, in what is duncan bannatyne , the information from the question is quite limited for our model to predict the correct key fb:profession .", "Although the STOP strategy proves to be effective on most questions, there are still some questions where the STOP strategy does not work, e.g., what team did david beckham play for in 2011 Method Recall Precision F-measure Xser 0.71 0.72 0.72 gAnswer 0.37 0.37 0.37 CASIA 0.40 0.32 0.36 our model 0.78 0.82 0.81 Table 4: Results of the models on the test set of QALD.", "shown in Table 2.", "We failed to perform reasoning over the time constraint 2011 due to limited context left for 2011 after we addressed play for in earlier hops.", "Also due to this same reason, our model can not correctly answer the second question in Table 2, who is the governor of India 2009 .", "Keep in mind that, we actually do not have annotated addressed keys at each hop to explicitly teach our model.", "The remaining cases also include failures in deep analysis according to specific KBs.", "For example, in who is the mother of prince michael jackson , our model only matches the triple michael jackson fb:parents , but fail to spot female in the next hop.", "In this paper, we propose to apply the KV-MemNNs as a semantic parsing module to approach the open-domain KB-QA task.", "We introduce a novel STOP strategy to derive structured queries with flexible number of query triples during multi-hop memory reading, and present a new query updating method, which considers the already-addressed keys in previous hops as well as the value representations.", "Experimental results show that the STOP strategy not only enables multi-hop reasoning over the memory, but also acts as the key to construct the structured queries, which help our model achieve the state-of-the-art performances on two benchmark datasets.", "We would like to thank the anonymous reviewers for their helpful comments and valuable suggestions.", "We also thank Yao Fu for his constructive comments on the early version of the draft.", "This work is supported by National High Technology R&D Program of China (Grant No.2018YFC0831905), Natural Science Foundation of China (Grant No. 61672057, 61672058).", "For any correspondence, please contact Yansong Feng." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "objective", "objective", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "objective", "abstain", "abstain", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "other", "other", "other", "other" ]
[ "Fine-grained entity typing (FGET) aims to classify named entity mentions into fine-grained entity types, which is meaningful for entity-related NLP tasks.", "For FGET, a key challenge is the low-resource problem the complex entity type hierarchy makes it difficult to manually label data.", "Especially for those languages other than English, human-labeled data is extremely scarce.", "In this paper, we propose a cross-lingual contrastive learning framework to learn FGET models for low-resource languages.", "Specifically, we use multi-lingual pre-trained language models (PLMs) as the backbone to transfer the typing knowledge from high-resource languages (such as English) to low-resource languages (such as Chinese).", "Furthermore, we introduce entity-pair-oriented heuristic rules as well as machine translation to obtain cross-lingual distantly-supervised data, and apply cross-lingual contrastive learning on the distantly-supervised data to enhance the backbone PLMs.", "Experimental results show that by applying our framework, we can easily learn effective FGET models for low-resource languages, even without any language-specific human-labeled data.", "Our code is also available at https://github.com/thunlp/CrossET.", "Recently, various efforts have been devoted to exploring fine-grained entity typing (FGET) (Ling and Weld, 2012; Li et al., 2020), aiming to identify concrete fine-grained entity types for named entity mentions in sentences (Figure 1).", "Since the type information of named entity mentions is useful for understanding textual semantics, FGET is widely applied to enhance entity-related tasks, such Corresponding authors.", "as coreference resolution (Khosla and Rose, 2020), entity linking (Onoe and Durrett, 2020; Chen et al., 2020a), relation extraction (Ren et al., 2017; Zhou and Chen, 2021) and event extraction (Nguyen et al., 2016; Yang and Mitchell, 2016).", "Despite the success of FGET, the low-resource problem is always a challenge of FGET, since the complex type hierarchy makes it difficult to manually label data.", "To alleviate the low-resource problem, besides utilizing auto-labeled data (Ling and Weld, 2012; Gillick et al., 2014; Xin et al., 2018; Dai et al., 2021), manually building FGET datasets is the most effective approach (Sang and De Meulder, 2003; Hovy et al., 2006; Ling and Weld, 2012; Choi et al., 2018; Ding et al., 2021).", "However, existing FGET datasets are mainly in English.", "For datasets in specific languages other than English, such as Chinese (Lee et al., 2020), Japanese (Suzuki et al., 2016), Dutch and Spanish (van Erp and Vossen, 2017), their scale and quality are not comparable to those English datasets.", "In this paper, we introduce a cross-lingual framework to learn FGET models for low-resource languages, via utilizing the data in high-resource languages 2241 (e.g. 
utilizing English datasets).", "Transferring the typing knowledge from high-resource languages to low-resource languages is not easy.", "As different languages have quite different patterns, it is challenging to understand the semantics of both high-resource and low-resource languages at the same time.", "With only a few examples of low-resource languages and no parallel data, it is also hard to bridge different languages for knowledge transfer.", "To handle these issues: (1) we use multi-lingual pre-trained language models (PLMs) as the backbone.", "Multi-lingual PLMs such as M-BERT (Devlin et al., 2019) are pre-trained on large-scale multi-lingual corpora; taking one as the backbone can well encode data in different languages into the same semantic space (Han et al., 2021).", "(2) we apply heuristic rules and cross-lingual contrastive learning to bridge multiple languages.", "We design several entity-pair-oriented heuristic rules to obtain distant supervision, which can automatically annotate entity types by utilizing latent relations between entity pairs.", "Machine translation is used on the auto-labeled data to establish a connection between high-resource and low-resource languages.", "Finally, we apply contrastive learning to learn similarities between cross-lingual auto-labeled types, instead of using pseudo-labels to learn a classifier, which can enhance the type recognition ability and reduce the side effects of auto-labeled data.", "For convenience, we name our cross-lingual contrastive learning framework CROSS-C in the following sections.", "We conduct experiments on two popular FGET datasets: Open-Entity (Choi et al., 2018) and Few-NERD (Ding et al., 2021), and translate their test sets into non-English versions to evaluate the effectiveness of CROSS-C for low-resource languages.", "Quantitative experimental results show that applying CROSS-C can easily train effective FGET models for low-resource languages, even without any language-specific human-labeled data.", "Besides the quantitative experiments, we also provide some visualization of the feature spaces and conduct case studies for qualitative analysis to show how CROSS-C works.", "In this section, we will introduce our cross-lingual framework to learn FGET models for low-resource languages.", "We will first give some essential notations and definitions, and then elaborate on the details of our framework.", "As shown in Figure 1, given a sentence $x$ and one named entity mention $m$ in the sentence, our goal is to determine types from a fine-grained type set $\mathcal{T}$ for the mention $m$ according to the sentence context.", "Note that FGET is a multi-label classification problem, since multiple types can be assigned to a single named entity mention.", "For a high-resource language $h$, sufficient human-labeled data $\{X_h, Y_h\}$ exists, where $X_h = \{x_{h,1}, x_{h,2}, \ldots\}$ is the sentence set and $Y_h = \{y_{h,1}, y_{h,2}, \ldots
\}$ is the label set.", "Each sentence $x_{h,i} \in X_h$ contains a named entity mention $m_{h,i}$, and $y_{h,i} \subseteq \mathcal{T}$ is the fine-grained type set of the named entity mention $m_{h,i}$.", "Similarly, we define the dataset $\{X_l, Y_l\}$ for a low-resource language $l$, where $|X_l| \ll |X_h|$.", "In this paper, we use $\{X_h, Y_h\}$, $\{X_l, Y_l\}$ and large-scale unlabeled multi-lingual data to train an FGET model for the low-resource language $l$.", "We use multi-lingual BERT (M-BERT) (Devlin et al., 2019) as the framework backbone to encode the input.", "M-BERT has the same architecture as BERT, but is pre-trained on multi-lingual corpora in 104 languages.", "Therefore, M-BERT has a good ability to transfer knowledge across languages (Pires et al., 2019; Selvaraj et al., 2021), making it suit our setting well.", "Note that our framework does not depend on a specific PLM; any other multi-lingual PLM can also be used as the backbone to encode the input.", "Given a sentence $x = [w_1, \ldots, m, \ldots, w_n]$, where $m$ is the named entity mention, we additionally insert an entity marker [ENT] on each side of the mention $m$.", "By feeding the sentence with entity markers into M-BERT, we can get representations $[h_{w_1}, \ldots, h_{[\text{ENT}]}, h_m, h_{[\text{ENT}]}, \ldots, h_{w_n}]$ for all input tokens.", "The left entity marker representation $h_{[\text{ENT}]}$ is used to represent the named entity mention.", "For simplicity, we denote this process as $\mathbf{m} = \text{M-PLM}(x)$, where $\mathbf{m}$ is the entity mention representation and $x$ is the input sentence.", "Given each entity type $t \in \mathcal{T}$, the probability that the mention $m$ in the sentence $x$ can be classified as (Footnote 1: In our experiments, we focus on handling a difficult and extreme case, $|X_l| = 0$, i.e., there is no human-labeled data at all for the low-resource language $l$.)", "the type $t$ is given as $P(t|x) = \sigma\big(\mathbf{t} \cdot \text{M-PLM}(x)\big)$, (1) where $\sigma$ is the sigmoid function, $\mathbf{t}$ is the representation of the entity type $t$, and $\theta$ indicates all learnable model parameters.", "With the data $\{X_h, Y_h\}$ in the high-resource language $h$ and the data $\{X_l, Y_l\}$ in the low-resource language $l$, the overall optimization objective is $\arg\max_{\theta} \big[ L_{\text{high}}(\theta) + L_{\text{low}}(\theta) \big]$, (2) where $L_{\text{high}}(\theta)$ and $L_{\text{low}}(\theta)$ respectively indicate the loss functions for the high-resource language $h$ and the low-resource language $l$.", "These loss functions are defined as $L_{\text{high}}(\theta) = \frac{1}{|X_h|} \sum_{i=1}^{|X_h|} \sum_{t \in \mathcal{T}} \big[ \delta_{t \in y_{h,i}} \log P(t|x_{h,i}) + (1 - \delta_{t \in y_{h,i}}) \log(1 - P(t|x_{h,i})) \big]$ and $L_{\text{low}}(\theta) = \frac{1}{|X_l|} \sum_{i=1}^{|X_l|} \sum_{t \in \mathcal{T}} \big[ \delta_{t \in y_{l,i}} \log P(t|x_{l,i}) + (1 - \delta_{t \in y_{l,i}}) \log(1 - P(t|x_{l,i})) \big]$. (3) For the indicator function $\delta_c$, if the condition $c$ is satisfied, then $\delta_c = 1$; otherwise $\delta_c = 0$.", "As we mentioned before, there are only a few human-labeled examples in low-resource languages.", "Although multi-lingual PLMs can provide an effective backbone to understand multi-lingual semantics, more examples are still required to bridge different languages.", "The existing distantly-supervised methods annotate the mentions of the same entity in multiple sentences with the same pseudo label (Ling and Weld, 2012; Gillick et al., 2014; Xin et al., 2018).", "However, in Figure 2, the mention Mark Twain needs to be annotated with writer or miner according to the specific semantics.", "Hence, these single-entity-oriented heuristic rules inevitably bring much noise.", "To this end, we introduce entity-pair-oriented heuristic rules to automatically annotate data with less noise.", "Instead
of annotating specific entity types, we annotate whether two named entity mentions are of similar types.", "On the one hand, this strategy can consider the correlation and similarity between different types.", "On the other hand, this strategy is suitable for contrastive learning, which can reduce the side effects of data noise.", "In fact, in relation extraction, recent works have adopted similar strategies (Soares et al., 2019; Peng et al., 2020) and achieved promising results.", "More specifically, as shown in Figure 2, we adopt three rules to obtain distantly-supervised data: (1) Rules without knowledge bases.", "As shown in Figure", "2(a), without using knowledge bases, if one entity pair is mentioned by two sentences, the mentions of the same entity in these two sentences are considered to have similar types.", "(2) Rules with knowledge bases.", "As shown in Figure", "2(b), by using knowledge bases, if entity pairs in two sentences have the same relations in knowledge bases, and these pairs have shared entities, the mentions of the corresponding entities are considered to have similar types.", "(3) Building cross-lingual data with machine translation.", "As shown in Figure", "2(c), we use machine translation to translate the data from the high-resource language to the low-resource language.", "Owing to the translation, the above-mentioned auto-labeled examples and their translated versions constitute a cross-lingual distantly-supervised dataset.", "By taking full advantage of distant supervision and machine translation, we can greatly expand our dataset to bridge high-resource and low-resource languages, and further transfer the typing knowledge between these languages.", "To make FGET models pay more attention to textual contexts rather than merely focusing on entity names, we use the [MASK] token to mask named entity mentions with a probability of", "0.5.", "With all the above-mentioned heuristic rules in Section 2.3, we can get the distantly-supervised data $X_h = \{x_{h,1}, x_{h,2}, \ldots\}$ in the high-resource language $h$, the distantly-supervised data $X_l = \{x_{l,1}, x_{l,2}, \ldots\}$ in the low-resource language $l$, and the translated data $X_t = \{x_{t,1}, x_{t,2}, \ldots
}", "Given any two sentences x 1 , x 2 in these distantly-supervised datasets, we use the function s ( x 1 , x 2 ) to measure the similarity between the entity mentions of the two sentences.", "In practice, we take the cosine similarity with temperature as the function s ( x 1 , x 2 ) : s ( x 1 , x 2 ) = M-PLM ( x 1 ) M-PLM ( x 2 ) M-PLM ( x 1 ) M-PLM ( x 2 ) , (4) where M-PLM ( ) is the entity mention representation computed by multi-lingual PLMs.", "The cross-lingual contrastive learning consists of two important objectives.", "One is the mono-lingual objective for each language, and the other is the cross-lingual objective.", "For both the high-resource language h and the low-resource language l , their mono-lingual objectives are defined as follows, L mono-h ( ) = 1 | X h | | X h | (cid:88) i =1 (cid:2) log (cid:88) p P ( x h,i ) e s ( x h,i , p ) log( (cid:88) p P ( x h,i ) e s ( x h,i , p ) + (cid:88) n N ( x h,i ) e s ( x h,i , n ) ) (cid:3) , L mono-l ( ) = 1 | X l | | X l | (cid:88) i =1 (cid:2) log (cid:88) p P ( x l,i ) e s ( x l,i , p ) log( (cid:88) p P ( x l,i ) e s ( x l,i , p ) + (cid:88) n N ( x l,i ) e s ( x l,i , n ) ) (cid:3) , (5) where P ( x h,i ) X h and N ( x h,i ) X h are respectively the positive set and the negative set of the example x h,i .", "P ( x l,i ) and N ( x l,i ) are defined in a similar way for the example x l,i .", "To ensure that the model does not push the representations of different languages far away, so that the low-resource language l can benefit from the high-resource language h , we further use X h and its translated set X t to define the cross-lingual objective as follows, L cross ( ) = 1 | X t | | X t | (cid:88) i =1 (cid:2) log (cid:88) p P ( x t,i ) e s ( x t,i , p ) log( | X t | (cid:88) j =1 i = j e s ( x t,i , x t,j ) + | X h | (cid:88) j =1 e s ( x t,i , x h,j ) ) (cid:3) , (6) where P ( x t,i ) X h X t is the positive set of the example x t,i .", "arg max [ L cross ( ) + L mono-r ( ) + L mono-l ( )] .", "We divide the whole learning process into two stages: pre-training and fine-tuning.", "The pretraining stage is to use Eq.", "(7) to optimize parameters on the distantly-supervised data.", "Considering computational efficiency, every time we sample a batch of examples for contrastive learning, and then sample multiple positive examples for each example in the batch.", "After the pre-training stage, we use Eq.", "(2) to fine-tune parameters on human-labeled data to learn classifiers for FGET.", "In this section, we evaluate the effectiveness of our framework CROSS-C on two typical entity-related datasets: Open-Entity and Few-NERD.", "For each dataset, we conduct experiments in both low-resource (few-shot or zero-shot) and full-set settings.", "In addition to quantitative experiments, to further show how our method works, we also provide some visualization of feature spaces for qualitative analysis.", "Open-Entity (Choi et al., 2018) and Few-NERD (Ding et al., 2021) are both popular FGET datasets.", "Open-Entity includes 9 general types and 121 fine-grained types.", "Each example in Open-Entity may correspond to multiple entity types.", "Few-NERD includes 8 general types and 66 fine-grained types.", "Both of these two datasets have a clear type hierarchy, which is suitable for evaluating the model performance on the entity typing task.", "In our experiments, we require models to predict both general types and fine-grained types for each entity mention in sentences.", "In this paper, we select English as a high-resource language and 
Chinese as the low-resource language.", "We attempt to use human-labeled English data and large-scale unlabeled multi-lingual data for learning, to obtain an effective Chinese FGET model.", "This is very difficult, since no Chinese human-labeled data is used in this process.", "To obtain distantly-supervised data, we apply our heuristic rules to automatically annotate the English and Chinese Wikipedia pages.", "We then use machine translation (Klein et al., 2017; Tan et al., 2020) to translate the English distantly-supervised examples into corresponding Chinese versions for cross-lingual contrastive learning.", "All test sets of Open-Entity and Few-NERD are translated into Chinese for evaluation.", "Although test sets built by machine translation may contain some errors, the overall semantics of the translated examples can still support determining the types of entity mentions.", "Using human-labeled examples for evaluation would be better, yet large-scale human-annotated entity typing datasets are still lacking.", "The experiments are performed under three settings: Few-shot setting.", "This setting requires models to infer entity types with a few supervised examples.", "We randomly sample 2, 4, 8, or 16 examples for each entity type for training.", "Zero-shot setting.", "This setting requires models to infer entity types without any supervised training, i.e., no human-labeled example is used for training.", "Full-set setting.", "In this setting, all supervised examples in the datasets are used for training.", "We follow the widely-used setting of Ling and Weld (2012) and use the loose micro $F_1$ scores to evaluate the performance of models.", "model CROSS-C.", "We use F-T to denote directly using English human-labeled data to fine-tune M-BERT, whose effectiveness is demonstrated in Selvaraj et al.
(2021).", "We use M ONO C to denote only using mono-lingual contrastive learning objectives for pre-training, and then use English human-labeled data to fine-tune pre-trained parameters.", "All above-mentioned models are optimized by AdamW with the learning rate { 5e-6,1e-5,3e-5,5e-5 } .", "The batch size used for pre-training and fine-tuning is from { 8,16,32,64,128,256 } .", "For cross-lingual contrastive learning, we only traverse large-scale distantly-supervised data once.", "For fine-tuning models on human-labeled data, the epochs are from { 1,3,5,7,10 } .", "The temperature used for the cosine similarity is 0 .", "5 .", "(1) Using a multi-lingual PLM as the backbone can lead to an effective FGET model for those low-resource languages.", "All methods, including both the baseline models and our CROSS-C, can achieve non-trivial entity typing results on the Chinese test sets, without using any Chinese human-labeled examples for training models.", "(2) Using distantly-supervised data for contrastive learning can significantly improve the typing capabilities of the backbone PLMs.", "Compared with directly fine-tuning a multi-lingual PLM with human-labeled data in high-resource languages, conducting contrastive learning on multi-lingual distantly-supervised data can better bridge high-resource languages and low-resource languages, which is beneficial to obtain effective models in low-resource languages.", "(3) Compared with mono-lingual contrastive learning, our cross-lingual contrastive learning can better improve the transfer of typing knowledge from high-resource languages to low-resource languages.", "Our CROSS-C achieves the best results in all shot settings.", "And the improvements of CROSSC will gradually increase as the number of shots decreases.", "These results show that our method can effectively improve model performance for low-resource languages even without any high-quality supervised language-specific data.", "(1) In our low-resource settings, although there are no human-labeled Chinese data at all, there are still some high-quality English examples for each entity type.", "Therefore, the improvements of contrastive learning on the English test sets are not as obvious as on the Chinese test sets.", "However, compared with directly fine-tuning PLMs, contrastive learning methods still bring significant improvements, demonstrating the power of using distant supervision for data augmentation.", "(2) Owing to multi-lingual data, which makes models in multiple languages learn from each other, our cross-lingual contrastive learning further brings additional improvements over the mono-lingual contrastive learning.", "This proves the effectiveness of our cross-lingual contrastive framework.", "Table 3 shows the results of zero-shot entity typing on the Chinese test sets.", "In this table, we can see that: without a trained type classifier, our cross-lingual contrastive learning still brings the backbone PLM a strong type recognition ability in the pre-training stage.", "We show the model performance curve as the number of supervised examples increases in Figure", "3. 
Note that only the supervised examples of the high-resource language English are used for training models.", "There is still no human-labeled data for the low-resource language Chinese.", "The results in the figure show that: (1) For high-resource languages, as more supervised examples are used, the improvements brought by contrastive learning gradually decrease, which is in line with our intuition.", "But we should also notice that even in the full-set setting, contrastive learning methods achieve comparable or even slightly better results than fine-tuning PLMs.", "This means that contrastive learning can well reduce the impact of data noise while enhanc- [Figure 3: The model performance (%) curve as the number of supervised examples increases.]", "ing performance by making full use of distantly-supervised data.", "(2) In both low-resource and full-set settings, the results of our contrastive learning on the Chinese test sets are always significantly higher than those of the other baseline models.", "This shows that our framework can utilize the supervised data of high-resource languages and large-scale unlabeled multi-lingual data to handle FGET for low-resource languages.", "In order to show how our CROSS-C works more intuitively, we conduct comprehensive ablation experiments.", "The results of the ablation experiments are shown in the ablation table, where -cc means that we drop the cross-lingual contrastive objective for pre-training the backbone PLM, -zc means that we drop the mono-lingual contrastive objective on the Chinese distantly-supervised data, and -ec means that we drop the mono-lingual contrastive objective on the English distantly-supervised data.", "From the ablation results, we can find that both the mono-lingual contrastive objectives and the cross-lingual objective play an important role in enhancing the backbone PLM, and the combination of them can lead to greater improvements.", "This is also the reason that our cross-lingual contrastive learning includes both mono-lingual and cross-lingual contrastive objectives for pre-training the backbone.", "We also give the visualization of the model during the ablation experiments of CROSS-C in Figure", "4. From the visualization results, we can find that it is difficult to bridge high-resource languages and low-resource languages without using any contrastive learning.", "As we gradually increase the number of contrastive learning objectives, the distinction between entity types becomes more obvious, and the fusion of multi-lingual semantics also becomes better.", "As one of the most important tasks in the field of information extraction, FGET has been studied for a long time.", "Ling and Weld (2012); Yosef et al.
(2012) first propose to classify named entity mentions into various fine-grained entity types, instead of just a few coarse-grained types (Sang and De Meulder, 2003; Hovy et al., 2006).", "Since fine-grained types bring informative semantics for language understanding, these types are widely used to enhance entity-related NLP tasks, such as coreference resolution (Khosla and Rose, 2020), entity linking (Onoe and Durrett, 2020; Chen et al., 2020a), relation extraction (Ren et al., 2017; Zhou and Chen, 2021) and event extraction (Nguyen et al., 2016; Yang and Mitchell, 2016).", "Some recent efforts further incorporate entity types to learn [ablation table fragment: Model, Open-Entity (Chinese), 2-Shot / 4-Shot / 8-Shot / 16-Shot; CROSS-C: 48.1 / 50.1 / 55.5 / 64.6; -cc: 45.2 / ...]", "Distantly-supervised FGET methods.", "Since entity types have complex hierarchies, manually annotating FGET data is not easy, and thus the low-resource problem is one of the key challenges of FGET.", "To alleviate this issue, distantly-supervised methods have been widely explored for FGET.", "One typical distantly-supervised approach is using knowledge bases to automatically annotate entities mentioned in the text.", "Ling and Weld (2012); Gillick et al. (2014) collect anchors in Wikipedia pages that correspond to entities in knowledge bases, and then label these anchors with the entity types in the knowledge bases.", "This approach is then followed by a series of works (Ren et al., 2017; Xin et al., 2018; Choi et al., 2018; Dai et al., 2019; Jin et al., 2019; Lee et al., 2020) to obtain pseudo labels.", "Other approaches use various noun phrases in sentences as type pseudo labels (Dai et al., 2020, 2021), which can make full use of the recently proposed PLMs for data augmentation.", "Human-labeled FGET datasets.", "In addition to the distantly-supervised methods, the construction of FGET datasets is also advancing.", "CoNLL (Sang and De Meulder, 2003) and Ontonotes (Hovy et al., 2006) are the earliest datasets, although they only cover several coarse-grained types.", "Then, Ling and Weld (2012); Gillick et al. (2014); Ding et al. (2021) introduce about a hundred fine-grained types and annotate a large number of examples for each type.", "Choi et al. (2018) further extend FGET by introducing an ultra-fine set containing thousands of types.", "Since annotating FGET examples is time-consuming and labor-intensive, many of the ultra-fine types proposed by Choi et al. (2018) only have distantly-supervised examples.", "However, all these efforts only focus on English.", "There are also some efforts to build datasets in other languages, such as Chinese (Lee et al., 2020), Japanese (Suzuki et al., 2016), Dutch and Spanish (van Erp and Vossen, 2017), but the scale and quality of these non-English datasets are still not comparable with the English datasets, i.e., non-English human-labeled data are still scarce.", "Cross-lingual and contrastive learning for FGET.", "Although cross-lingual learning has been widely explored in entity linking (Sil et al., 2018; Upadhyay et al., 2018; Rijhwani et al., 2019) and named entity recognition (Pan et al., 2017; Xie et al., 2018; Rahimi et al., 2019; Zhou et al., 2019), cross-lingual entity typing has not yet been explored much (Selvaraj et al., 2021).", "For contrastive learning (Chen et al., 2020b; Oord et al., 2018), some preliminary works have explored it for extracting relations between entities (Soares et al., 2019) and achieved promising results.", "Peng et al.
(2020) further use contrastive learning to analyze the impact of entity information on relation extraction.", "Similar to cross-lingual learning, the exploration of contrastive learning for FGET is still in the preliminary stage.", "In this paper, to learn effective FGET models for low-resource languages, we propose an effective cross-lingual contrastive learning framework, CROSS-C, to transfer the typing knowledge from high-resource languages to low-resource languages.", "Specifically, the framework CROSS-C uses the multi-lingual PLM M-BERT as the backbone, which can simultaneously capture multi-lingual semantics in a unified feature space.", "Furthermore, to bridge the gap between high-resource languages and low-resource languages, we introduce entity-pair-oriented heuristic rules as well as machine translation to automatically obtain high-quality cross-lingual data, and then apply cross-lingual contrastive learning on this distantly-supervised data to enhance the backbone PLM.", "The experimental results show that by applying CROSS-C, the typing knowledge can be transferred from high-resource languages to low-resource languages, and we can learn effective FGET models without any language-specific human-labeled data for low-resource languages.", "Ning Ding, Guangwei Xu, Yulin Chen, Xiaobin Wang, Xu Han, Pengjun Xie, Hai-Tao Zheng, and Zhiyuan Liu.", "Dan Gillick, Nevena Lazic, Kuzman Ganchev, Jesse Kirchner, and David Huynh.", "2014.", "Context-dependent fine-grained entity type tagging.", "arXiv preprint arXiv:1412.1820.", "Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan Yao, Ao Zhang, Liang Zhang, et al. 2021.", "Pre-trained models: Past, present and future.", "AI Open, 2:225-250.", "Eduard Hovy, Mitch Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel.", "2006.", "OntoNotes: the 90% solution.", "In Proceedings of NAACL-HLT, pages 57-60.", "Sopan Khosla and Carolyn Rose.", "2020.", "Using type information to improve entity coreference resolution.", "In Proceedings of the Workshop of ACL, pages 20-31.", "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush.", "2017.", "OpenNMT: Open-source toolkit for neural machine translation.", "In Proceedings of ACL, System Demonstrations, pages 67-72.", "Chin Lee, Hongliang Dai, Yangqiu Song, and Xin Li.", "2020.", "A Chinese corpus for fine-grained entity typing.", "In Proceedings of LREC, pages 4451-4457.", "Jing Li, Aixin Sun, Jianglei Han, and Chenliang Li.", "2020.", "A survey on deep learning for named entity recognition.", "TKDE.", "Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang.", "2020.", "K-BERT: Enabling language representation with knowledge graph.", "In Proceedings of AAAI, pages 2901-2908.", "Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman.", "2016.", "Joint event extraction via recurrent neural networks.", "In Proceedings of NAACL-HLT, pages 300-309.", "Yasumasa Onoe and Greg Durrett.", "2020.", "Fine-grained entity typing for domain independent entity linking.", "In Proceedings of AAAI, pages 8576-8583.", "Aaron van den Oord, Yazhe Li, and Oriol Vinyals.", "2018.", "Representation learning with contrastive predictive coding.", "arXiv preprint arXiv:1807.03748.", "Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji.", "2017.", "Cross-lingual name tagging and linking for 282 languages.", "In Proceedings of ACL, pages 1946-1958.", "Hao Peng, Tianyu Gao, Xu Han,
Yankai Lin, Peng Li, Zhiyuan Liu, Maosong Sun, and Jie Zhou.", "(1) We will explore how to better utilize unsupervised data to deal with the low-resource problem of FGET, such as using better PLMs and more effective tuning methods.", "(2) We will also promote the construction of cross-lingual FGET datasets, which will advance the development of FGET in specific languages, especially for low-resource languages other than English.", "This work is supported by the National Key R&D Program of China (No. 2020AAA0106502), Institute Guo Qiang of Tsinghua University, Beijing Academy of Artificial Intelligence (BAAI), and the International Innovation Center of Tsinghua University.", "This work is also supported by Tencent AI Platform Department." ]
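A small sketch may help make Eqs. (4) and (5) above concrete: the temperature-scaled cosine similarity and the mono-lingual contrastive term for a single anchor mention. This is an illustrative reconstruction under stated assumptions, not the authors' code; the function names are hypothetical, and the mention embeddings (the M-PLM outputs) are assumed to be precomputed NumPy vectors.

```python
import numpy as np

def temp_cosine(u, v, tau=0.5):
    # Eq. (4) sketch: cosine similarity between two mention embeddings,
    # scaled by the temperature tau (the text reports tau = 0.5).
    return float(u @ v) / (tau * np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def mono_contrastive_term(anchor, positives, negatives, tau=0.5):
    # One summand of Eq. (5): log of the share of similarity mass the anchor
    # assigns to its positive set versus positives plus negatives.
    # Averaged over the corpus, this is the quantity to be maximized.
    pos = sum(np.exp(temp_cosine(anchor, p, tau)) for p in positives)
    neg = sum(np.exp(temp_cosine(anchor, n, tau)) for n in negatives)
    return np.log(pos) - np.log(pos + neg)
```

The cross-lingual term of Eq. (6) has the same shape, with the denominator ranging over all other translated examples and all high-resource examples instead of an explicit negative set.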
[ "abstain", "abstain", "abstain", "objective", "method", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "result", "abstain", "method", "method", "method", "abstain", "result", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "other", "other" ]
[ "Dense retrieval has achieved impressive advances in first-stage retrieval from a large-scale document collection, which is built on bi-encoder architecture to produce single vector representation of query and document.", "However, a document can usually answer multiple potential queries from different views.", "So the single vector representation of a document is hard to match with multi-view queries, and faces a semantic mismatch problem.", "This paper proposes a multi-view document representation learning framework, aiming to produce multiview embeddings to represent documents and enforce them to align with different queries.", "First, we propose a simple yet effective method of generating multiple embeddings through viewers.", "Second, to prevent multi-view embeddings from collapsing to the same one, we further propose a global-local loss with annealed temperature to encourage the multiple viewers to better align with different potential queries.", "Experiments show our method outperforms recent works and achieves state-of-the-art results.", "Over the past few years, with the advancements in pre-trained language models (Devlin et al., 2019; Liu et al., 2019), dense retrieval has become an important and effective approach in open-domain text retrieval (Karpukhin et al., 2020; Lee et al., 2019; Qu et al., 2021; Xiong et al., 2020).", "A typical dense retriever usually adopts a bi-encoder (Huang et al., 2013; Reimers and Gurevych, 2019) architecture to encode input query and document into a single low-dimensional vector (usually CLS to-ken), and computes the relevance scores between their representations.", "In real-world applications, the embedding vectors of all the documents are pre Work done during internship at Microsoft Research Asia.", "Title: IPod Document: Beginning in mid-2007, four major airlines, United, Continental, Delta, and Emirates, reached agreements to install iPod seat connections.", "The free service will allow passengers to power and charge an iPod, and view video and music libraries on individual seat-back displays.", "Originally KLM and Air France were reported to be part of the deal with Apple, but they later released statements explaining that they were only contemplating the possibility of incorporating such systems.", "The iPod line can play several audio file formats including MP3, AAC/M4A, Protected AAC, AIFF, WAV, Audible audiobook, and Apple Lossless.", "The iPod Photo introduced the ability to display JPEG, BMP, GIF,TIFF, and PNG image file formats.", "-computed in advance, and the retrieval process can be efficiently boosted by the approximate nearest neighbor (ANN) technique (Johnson et al., 2019).", "To enhance bi-encoder's capacity, recent studies carefully design sophisticated methods to train it effectively, including constructing more challenging hard negatives (Zhan et al., 2021; Xiong et al., 2020; Qu et al., 2021) and continually pre-train the 5990 language models (Gao and Callan, 2021a; Oguz et al., 2021) for a better transfer.", "However, being limited to the single vector representation, bi-encoder faces the upper boundary of representation capacity according to theoretical analysis in Luan et al. 
(2021).", "In a real example from the SQuAD dev dataset, we also find that a single vector representation can't match well to multi-view queries, as shown in Figure 1.", "The document corresponds to four different questions from different views, and each of them matches different sentences and answers.", "In the traditional bi-encoder, the document is represented as a single vector while it should be recalled by multiple diverse queries, which limits the capacity of the bi-encoder.", "As for multi-vector models, cross-encoder architectures perform better by computing deeply-contextualized representations of query-document pairs, but are computationally expensive and impractical for first-stage large-scale retrieval (Reimers and Gurevych, 2019; Humeau et al., 2020).", "Some recent studies try to borrow from the cross-encoder and extend the bi-encoder by employing more delicate structures, which allow multiple vector representations and dense interaction between query and document embeddings,", "including late interaction (Khattab and Zaharia, 2020) and attention-based aggregators (Humeau et al., 2020; Tang et al., 2021).", "However, most of them contain softmax or sum operators that can't be decomposed into a max over inner products, and so fast ANN retrieval can't be directly applied.", "Based on these observations, we propose a Multi-View document Representation learning framework, MVR in short.", "MVR originates from our observation that a document commonly has several semantic units, and can answer multiple potential queries which contain individual semantic content.", "It is just as if, given a specified document, different askers raise different questions from diverse views.", "Therefore, we propose a simple yet effective method to generate multi-view representations through viewers, optimized by a global-local loss with annealed temperature to improve the representation space.", "Prior work has found the [CLS] token tends to aggregate the overall meaning of the whole input segment (Kovaleva et al., 2019; Clark et al., 2019), which is inconsistent with our goal of generating multi-view embeddings.", "So we first modify the bi-encoder architecture, abandon the [CLS] token and add multiple [Viewer] tokens to the document input.", "The representations of the viewers in the last layer are then used as the multi-view representations.", "To encourage the multiple viewers to better align with different potential queries, we propose a global-local loss equipped with an annealed temperature.", "In previous work, the contrastive loss between positive and negative samples is widely applied (Karpukhin et al., 2020).", "Apart from the global contrastive loss, we formulate a local uniformity loss between multi-view document embeddings, to better keep the uniformity among multiple viewers and prevent them from collapsing into the same one.", "In addition, we adopt an annealed temperature which gradually sharpens the distribution of viewers, to help multiple viewers better match different potential queries, which is also validated in our experiments.", "The contributions of this paper are as follows: We propose a simple yet effective method to generate multi-view document representations through multiple viewers.", "To optimize the training of multiple viewers, we propose a global-local loss with annealed temperature to make multiple viewers better align with different semantic views.", "Experimental results on open-domain retrieval datasets show that our approach achieves state-of-the-art retrieval
performance.", "Further analysis proves the effectiveness of our method.", "With the development of deep language models (Devlin et al., 2019), fine-tuned deep pre-trained BERT achieves advanced re-ranking performance (Dai and Callan, 2019; Nogueira and Cho, 2019).", "The initial approach is the cross-encoder based re-ranker, as shown in", "Figure 2(a).", "It feeds the concatenation of query and document text to BERT and outputs the [CLS] token's embedding to produce a relevance score.", "Benefiting from deeply-contextualized representations of query-document pairs, the deep LM helps bridge both vocabulary mismatch and semantic mismatch.", "However, cross-encoder based rankers need computationally expensive cross-attention operations (Khattab and Zaharia, 2020; Gao and Callan, 2021a), so it is [Figure 2: (a) cross-encoder re-ranker, (b) bi-encoder, (c) late interaction, (d) attention-based aggregator]", "impractical for large-scale first-stage retrieval and is usually deployed in second-stage re-ranking.", "As for first-stage retrieval, the bi-encoder is the most adopted architecture (Karpukhin et al., 2020), for it can be easily and efficiently employed with support from approximate nearest neighbor (ANN) search (Johnson et al., 2019).", "As illustrated in", "Figure 2(b), it feeds the query and document to individual encoders to generate single vector representations, and the relevance score is measured by the similarity of their embeddings.", "Equipped with deep LMs, bi-encoder based retrievers have achieved promising performance (Karpukhin et al., 2020).", "And later studies have further improved its performance through different carefully designed methods, which will be introduced in Sec. 2.2. Besides the typical bi-encoder, there are some variants (Gao et al., 2020; Chen et al., 2020; Mehri and Eric, 2021) proposing to employ dense interactions based on the bi-encoder.", "As shown in", "Fig. 2(c), ColBERT (Khattab and Zaharia, 2020) adopts the late interaction paradigm, which computes token-wise dot scores between all the terms' vectors, sequentially followed by a max-pooler and a sum-pooler to produce a relevance score.", "Later on, Gao et al.
(2021) improve it by scoring only on overlapping token vectors with inverted lists.", "Another variant is the attention-based aggregator, as shown in", "Fig. 2(d).", "It utilizes the attention mechanism to compress the document embeddings to interact with the query vector for a final relevance score.", "There are several works (Humeau et al., 2020; Luan et al., 2021; Tang et al., 2021) built on this paradigm.", "Specifically, Poly-Encoder (learnt-k) (Humeau et al., 2020) sets k learnable codes to attend over the document embeddings.", "DRPQ (Tang et al., 2021) achieves better results by iterative K-means clustering on the document vectors to generate multiple embeddings, followed by attention-based interaction with the query.", "However, the dense interaction methods can't be directly deployed with ANN, because neither the sum-pooler nor the attention operator can be decomposed into a max over inner products, so fast ANN search cannot be applied.", "So they usually first approximately recall a set of candidates and then refine them by exhaustive re-ranking, while MVR can be directly applied in first-stage retrieval.", "Another related method is ME-BERT (Luan et al., 2021), which adopts the first k document token embeddings as the document representation.", "However, only adopting the first k embeddings may lose beneficial information in the latter part of the document, while our viewer tokens can extract from the whole document.", "In Sec. 5.2, we also find the multiple embeddings in ME-BERT collapse to the same [CLS], while our global-local loss can address this problem.", "In addition to the aforementioned work focusing on architecture design, there is a large body of work proposing to improve the effectiveness of dense retrieval.", "Existing approaches to learning a dense passage retriever can be divided into two categories: (1) pre-training for retrieval (Chang et al., 2020; Lee et al., 2019; Guu et al., 2020) and (2) fine-tuning pre-trained language models (PLMs) on labeled data (Karpukhin et al., 2020; Xiong et al., 2020; Qu et al., 2021).", "In the first category, Lee et al. (2019) and Chang et al. (2020) propose different pre-training tasks and demonstrate the effectiveness of pre-training in dense retrievers.", "Recently, DPR-PAQ (Oguz et al., 2021) proposes domain-matched pre-training, while Condenser (Gao and Callan, 2021a,b) enforces the model to produce an information-rich CLS representation with continual pre-training.", "As for the second class, recent work (Karpukhin et al., 2020; Xiong et al., 2020; Qu et al., 2021; Zhan et al., 2021) shows the key to fine-tuning an effective dense retriever revolves around hard negatives.", "DPR (Karpukhin et al., 2020) adopts in-batch negatives and BM25 hard negatives.", "ANCE (Xiong et al., 2020) proposes to construct hard negatives dynamically during training.", "RocketQA (Qu et al., 2021; Ren et al., 2021b) shows the cross-encoder can filter and mine higher-quality hard negatives.", "Li et al. (2021) and Ren et al.
(2021a) demonstrate that passage-centric and query-centric negatives can make the training more robust.", "It is worth mentioning that distilling the knowledge from a cross-encoder-based re-ranker into a bi-encoder-based retriever (Sachan et al., 2021; Izacard and Grave, 2021; Ren et al., 2021a,b; Zhang et al., 2021) can improve the bi-encoder's performance.", "Most of these works are built upon the bi-encoder and naturally inherit its limit of a single vector representation, while our work modifies the bi-encoder to produce multi-view embeddings, and is also orthogonal to these strategies.", "We start with a bi-encoder using BERT as its backbone neural network, as shown in Figure", "2(b).", "A typical bi-encoder adopts a dual-encoder architecture which separately maps the query and document to single real-valued vectors.", "Given a query q and a document collection D = {d_1, ..., d_j, ..., d_n}, dense retrievers leverage the same BERT encoder to get the representations of queries and documents.", "Then the similarity score f(q, d) of query q and document d can be calculated from their dense representations: f(q, d) = sim(E_Q(q), E_D(d)) (Eq. 1), where sim(·) is the similarity function used to estimate the relevance between two embeddings, e.g., cosine distance, Euclidean distance, etc.", "The inner product on the [CLS] representations is a widely adopted setting (Karpukhin et al., 2020; Xiong et al., 2020).", "A conventional contrastive-learning loss is widely applied for training query and passage encoders, supervised by the target task's training set.", "For a given query q, it computes the negative log likelihood of a positive document d+ against a set of negatives {d_1-, d_2-,", "..., d_l-}: L = -log( exp(f(q, d+)/τ) / (exp(f(q, d+)/τ) + Σ_l exp(f(q, d_l-)/τ)) ) (Eq. 2).", "Here τ is the temperature hyper-parameter (a scaling factor), and an appropriate temperature can help in better optimization (Sachan et al., 2021; Li et al., 2021).", "Limited to a single vector representation, the typical bi-encoder faces the challenge that a document contains multiple semantics and can be asked different potential queries from multiple views.", "Though some previous studies incorporate dense interaction to allow multiple representations and somewhat improve effectiveness, they usually lead to additional expensive computation and a complicated structure.", "Therefore, we propose a simple yet effective method to produce multi-view representations through multiple viewers, which we describe in detail.", "As pre-trained BERT has benefited a wide range of downstream tasks including sentence-level ones, some work has found [CLS] tends to aggregate the overall meaning of the whole sentence (Kovaleva et al., 2019; Clark et al., 2019).", "However, our model aims to capture more fine-grained semantic units in a document, so we introduce multiple viewers.", "Rather than use the latent representation of the [CLS] token, we adopt newly added multiple viewer tokens [VIE] to replace [CLS], which are randomly initialized.", "For the document input, we add the different [VIE_i] (i = 1, 2, ..., n) at the beginning of the sentence tokens.", "To avoid affecting the positional encoding of the original input sentences, we set all the position ids of [VIE_i] to 0, and the document sentence tokens start from 1 as in the original.", "Then we leverage the dual encoder to get the representations of queries and documents: E(q) = Enc_q([VIE] ∘ q ∘ [SEP]) and E(d) = Enc_d([VIE_1] ∘ ... ∘ [VIE_n] ∘ d ∘ [SEP]) (Eq. 3), where ∘ is the concatenation operation.", "[VIE] and [SEP] are special tokens in BERT.",
"Enc q and Enc d mean query and document encoder.", "We use the last layer hidden states as the query and document embeddings.", "The representations of the [VIE] tokens are used as representations of query q and document d , which are denoted as E 0 ( q ) and E i ( d )( i = 0 , 1 , ..., k 1) , respectively.", "As the query is much shorter than the document and usually represents one concrete meaning, we retain the typical setting to produce only one embedding for the query.", "Then the similarity score f ( q, d ) of query q and document d can be calculated with their dense representations.", "As shown in Figure.3, we first compute the Individual Scores between the single query embedding and document's multi-view embeddings, in which we adopt the inner-product.", "The resulted score corresponding to [ V IE i ] is denoted as f i ( q, d )( i = 0 , 1 , ..., k 1) .", "The we adopt a max-pooler to aggregate individual score to the Aggregate Score f ( q, d ) as the similarity score for the given query and document pairs: f ( q, d ) = Max i { f i ( q, d ) } = Max i { sim ( E 0 ( q ) , E i ( d )) } (4) 3.3 Global-Local Loss In order to encourage the multiple viewers to better align to different potential queries, we introduce a Global-Local Loss to optimize the training of multi-view architecture.", "It combines the global contrastive loss and the local uniformity loss.", "The global contrastive loss is inherited from the traditional bi-encoder.", "Given the query and a positive document d + against a set of negatives { d 1 , d 2 ,", "..d l } , It is computed as follows: L global = log e f ( q,d + ) / e f ( q,d + ) / + (cid:80) l e f ( q,d l ) / (6) To improve the uniformity of multi-view embedding space, we propose applying Local Uniformity Loss among the different viewers in Eq.7.", "For a specific query, one of the multi-view document representations will be matched by max-score in Eq.4.", "The local uniformity loss enforces the selected viewer to more closely align with the query and distinguish from other viewers.", "To further encourage more different viewers to be activated, we adopt an annealed temperature in Eq.8, to gradually tune the sharpness of viewers' softmax distribution.", "In the start stage of training with a high temperature, the softmax values tend to have a uniform distribution over the viewers, to make every viewer fair to be selected and get back gradient from train data.", "As the training process goes, the temperature decreases to make optimization more stable.", "Where is a hyper-parameter to control the annealing speed, t denotes the training epochs, and the temperature updates every epoch.", "To simplify the settings, we use the same annealed temperature in L local and L global and our experiments validate the annealed temperature works mainly in conjunction with L local through multiple viewers.", "During inference, we build the index of all the reviewer embeddings of documents, and then our model directly retrieves from the built index leveraging approximate nearest neighbor (ANN) technique.", "However, both Poly-Encoder (Humeau et al., 2020) and DRPQ (Tang et al., 2021) adopt attention-based aggregator containing softmax or sum operator so that the fast ANN can't be directly applied.", "Though DRPQ proposes to approximate softmax to max operation, it still needs to first recall a set of candidates then rerank them using the complex aggregator, leading to expensive computation and complicated procedure.", "In contrast, MVR 5994 Method SQuAD Natural Question Trivia QA 
[Table 1: Retrieval performance (R@5/R@20/R@100) on the SQuAD dev set and the Natural Questions and TriviaQA test sets for BM25, DPR, ANCE, RocketQA, Condenser, DPR-PAQ, DRPQ, coCondenser (reported and reproduced) and MVR. MVR scores 76.4/84.2/89.8 on SQuAD, 76.2/84.8/89.3 on NQ and 77.1/83.4/87.4 on TriviaQA, versus 73.2/81.8/88.7, 75.4/84.1/88.8 and 76.4/82.7/86.8 for the reproduced coCondenser.]", "can be directly applied in first-stage retrieval without a post-computing process like theirs.", "Though the size of the index grows with the number of viewers k, the time complexity can be sublinear in index size (Andoni et al., 2018) due to the efficiency of the ANN technique (Johnson et al., 2019).", "Natural Questions (Kwiatkowski et al., 2019) is a popular open-domain retrieval dataset, in which the questions are real Google search queries and the answers were manually annotated from Wikipedia.", "TriviaQA (Joshi et al., 2017) contains a set of trivia questions with answers that were originally scraped from the Web.", "SQuAD Open (Rajpurkar et al., 2016) contains questions and answers originating from a reading comprehension dataset, and it has been widely used for open-domain retrieval research.", "We follow the same procedure as Karpukhin et al. (2020) to preprocess and extract the passage candidate set from the English Wikipedia dump, resulting in about two million passages that are non-overlapping chunks of 100 words.", "Both NQ and TQA have about 60K training examples after processing, and SQuAD has 70K.", "Currently, the authors have released all the datasets for NQ and TQA.", "For SQuAD, only the development set is available.", "So we conduct experiments on these three datasets, and evaluate top-5/20/100 accuracy on the SQuAD dev set and the test sets of NQ and TQA.", "We counted how many queries correspond to one and the same document: the average numbers of queries per document for SQuAD, Natural Questions and TriviaQA are 2.7, 1.5 and 1.2, which indicates the multi-view problem is common in open-domain retrieval.", "We train the MVR model following the hyper-parameter settings of DPR (Karpukhin et al., 2020).", "All models are trained for 40 epochs on 8 Tesla V100 32GB GPUs.", "We tune different viewer numbers on the SQuAD dataset, find the best to be 8, and then adopt it on all the datasets.", "To make a fair comparison, we follow coCondenser (Gao and Callan, 2021b) in adopting mined hard negatives and warm-up pre-training strategies, which are also adopted in recent works (Oguz et al., 2021; Gao and Callan, 2021a) and show improvements.", "Note that we only adopt these strategies when comparing to those works, while in the ablation studies our models are built only on the raw DPR model.", "During inference, we apply the passage encoder to encode all the passages and index them using the Faiss IndexFlatIP index (Johnson et al., 2019).", "We compare our MVR model with previous state-of-the-art methods.", "Among these methods, DRPQ (Tang et al., 2021) achieved the best results among multiple-embedding methods and is the main baseline we compare our model against.", "In addition, we also compare to recent strong dense retrievers, including ANCE (Xiong et al., 2020), RocketQA (Qu et al., 2021), Condenser (Gao and Callan, 2021a), DPR-PAQ (Oguz et al., 2021) and
coCondenser (Gao and Callan, 2021b).", "For coCondenser, we reproduced its results and found them a little lower than the reported ones, possibly because its repository and tricks are not public.", "Overall, these methods mainly focus on mining hard negative samples, knowledge distillation or pre-training strategies to [Table 2: Performance (R@5/R@20/R@100 on SQuAD dev) for different numbers of viewers in MVR and compared models: DPR (k=1) 66.2/76.8/85.2; ME-BERT (k=4) 66.8/77.6/85.5; ME-BERT (k=8) 67.3/77.9/86.1; MVR (k=4) 68.5/78.5/85.8; MVR (k=6) 72.3/80.3/86.4; MVR (k=8) 75.5/83.2/87.9; MVR (k=12) 74.8/82.9/87.4.]", "improve dense retrieval.", "Our MVR framework is orthogonal to them and can be combined with them for further improvement.", "As shown in Table 1, we can see that our proposed MVR performs better than the other models.", "Compared to DRPQ, which performs best among the previous multi-vector models, MVR outperforms it by a large margin, further confirming the superiority of our multi-view representation.", "It's worth noting that our model improves more on the SQuAD dataset, maybe because that dataset contains more documents that correspond to multiple queries, as we state in Sec. 4.1.", "It indicates that MVR can address the multi-view problem better than other models.", "Impact of Viewers' Number: We conduct ablation studies on the development set of SQuAD Open.", "For a fair comparison, we build all the models mentioned in the following on the DPR toolkit, including ME-BERT and MVR.", "The results are shown in Table 2, and the first block shows the results of DPR and ME-BERT, which adopts the first k token embeddings.", "Compared to DPR and ME-BERT, our model shows strong performance, which indicates the multi-view representation is effective and useful.", "Then, we analyze how different numbers of viewers (k = 4, 6, 8, 12) affect performance in [Table 4: Time cost of online and offline computing in the SQuAD retrieval task (per-document encoding / per-query retrieval): DPR 2.5ms/10ms; ColBERT 2.5ms/320ms; ME-BERT 2.5ms/25ms; DRPQ 5.3ms/45ms; MVR 2.5ms/25ms.]", "MVR.", "We find that the model achieves the best performance when k = 8.", "When k increases to 12 or larger, it leads to a small decrease in performance, maybe because there are not that many distinct views in a document.", "Analysis of the Global-Local Loss: In this part, we conduct a more detailed ablation study and analysis of our proposed global-local loss.", "As shown in Table 3, we gradually remove the strategies adopted in our model.", "We find that removing either the local uniformity loss (LC loss in the table) or the annealed temperature damages performance, and performance decreases further without both of them.", "We also provide more experimental results to show the effectiveness of the annealed temperature.", "We first tune a fixed temperature and find that values between 0.3 and 1 are beneficial.", "We then adopt a temperature annealed gradually from 1 to 0.3 as in Eq. 8, and find that a suitable speed (α = 0.1) can better help with optimization during training.", "Note that the model w/o Multiple Viewers can be seen as DPR with annealed τ, just a little higher than raw DPR in Table 2, while the annealed τ improves more when using multiple viewers.", "It indicates our annealing strategy plays a more important role in multi-view learning.", "Efficiency Analysis: We test the efficiency of our model on 4 Nvidia Tesla V100 GPUs on the SQuAD dev set, as shown in Table", "4.
We record the encoding time per document and the retrieval time per query, and don't include the query encoding time since it is equal for all the models.", "To compare our approach with other models, we also record the retrieval time of the related models.", "We can see that our model spends the same encoding time as DPR, while DRPQ needs additional time to run K-means clustering.", "With the support of Faiss, the retrieval time of MVR is close to DPR's and less than that of ColBERT (Khattab and Zaharia, 2020) and DRPQ (Tang et al., 2021), which need additional post-computation as we state in Sec. 2.1.", "To analyze the difference between MVR and sentence-level retrieval, which is another way to produce multiple embeddings, we design several models as shown in Table", "5. Sentence-level means that we split all the passages into individual sentences with the NLTK toolkit (www.nltk.org), resulting in an average of 5.5 sentences per passage.", "The new positives are the sentences containing answers in the original positives, and the new negatives are all the split sentences of the original negatives.", "K-equal-splits means that DPR's original 100-word passages are split into k equally long sequences, with training positives and negatives constructed as in the Sentence-level method.", "Compared to MVR, Sentence-level drops a lot, even below DPR, maybe because the split sentences lose the contextual information of the passages.", "It also indicates that the multi-view embeddings of MVR do not just correspond to sentence embeddings, but capture semantic meanings from different views.", "For even a single sentence may contain diverse information that can answer different potential queries (as in Fig. 1).", "The k-equal-splits methods perform much worse, for they further lose the sentence structure and may contain more noise.", "To further show the effectiveness of our proposed MVR framework, we evaluate the distribution of the multi-view document representations.", "We conduct evaluations on a randomly sampled subset of the SQuAD development set, which contains 1.5K query-doc pairs, where each document has an average of 4.8 corresponding questions.", "We adopt two metrics, Local Variation and Perplexity (Brown et al., 1992) (denoted as LV and PPL), to illustrate the effect of the different methods.", "We first compute the normalized scores between the document's multi-view embeddings and the query embedding as in Eq. 4, [Table 6: Analysis of multi-view embeddings produced by different methods (PPL / LV): ME-BERT 1.02/0.159; MVR 3.19/0.126; MVR w/o LC loss 3.23/0.052; MVR w/o annealed temperature 2.95/0.118.]", "and record the scores f_i(q, d) of all the viewers.", "Then the Local Variation of a query-doc pair can be computed as follows, and we average it over all the pairs.", "The Local Variation measures the distance of the max score to the average of the others, which reflects the uniformity of the different viewers.", "The higher it is, the more diversely the multi-view embeddings are distributed.", "Then we collect the index of the viewer having the max score, and group the indexes of the different queries corresponding to the same documents.", "Next, we can get the distribution over the different indexes, denoted as p_i.", "The Perplexity can then be computed as PPL = exp(-Σ_i p_i log p_i) (Eq. 10). If the different viewers are matched to totally different queries, p_i tends to be a uniform distribution and the PPL goes up.", "The comparison results are shown in Table", "6.
When evaluating ME-BERT, we find its multiple embeddings collapse into the same [CLS] embedding rather than using the different token embeddings.", "So its PPL is near one and its Local Variation is very high.", "For the MVR model, we find that without the local uniformity loss (LC loss for short), the Local Variation drops rapidly, indicating our proposed LC loss can improve the uniformity of the different viewers.", "In addition, MVR w/o the annealed temperature damages the PPL, which also confirms it does help activate more viewers and align them better with different queries.", "As shown in Table 7, there are two examples retrieved by DPR and MVR for qualitative analysis.", "The top scoring passages retrieved by DPR can't give a clear answer to the queries, though they [Table 7: example questions, e.g., What continent ranged over the majority of the southern hemisphere of Earth in the Cambrian?, alongside the passages retrieved by DPR and by MVR]", "seem to have a similar topic to the queries.", "In contrast, our MVR is able to return the correct answers when the passages contain rich information and diverse semantics.", "Take the second sample as an example: the passage retrieved by DPR is about the Ordovician mentioned in the question, but there are no further details answering the question.", "In comparison, MVR mines more fine-grained information in the passage and returns the correct answer 485.4 ± 1.9 Ma (Ma means million years ago).", "It indicates that DPR can only capture the rough meaning of a passage from a general view, while our MVR is able to dive into the passage and capture more fine-grained semantic information from multiple views.", "In this paper, we propose a novel Multi-View Representation Learning framework.", "Specifically, we present a simple yet effective method to generate multi-view document representations through multiple viewers.", "To optimize the training of the multiple viewers, we propose a global-local loss with annealed temperature to enable the multiple viewers to better align with different semantic views.", "We conduct experiments on three open-domain retrieval datasets, and achieve state-of-the-art retrieval performance.", "Our further analysis proves the effectiveness of the different components of our method.", "We thank Yuan Chai, Junhe Zhao, Yimin Fan, Junjie Huang and Hang Zhang for their discussions and suggestions on writing this paper." ]
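The multi-view scoring and global-local loss described in the paper above reduce to a few tensor operations. The following is a minimal PyTorch sketch, not the authors' released implementation: the balancing weight lam, the convention that the positive document sits at index 0 of each candidate list, and the exponential form of the per-epoch annealing schedule are illustrative assumptions (the paper only fixes the 1-to-0.3 range and the speed of 0.1).

```python
# Minimal sketch of MVR's max-over-viewers scoring (Eq. 4) and
# global-local loss (Eq. 6 plus the local uniformity term).
# Assumptions: positive document at index 0; `lam` and the exact
# annealing formula are illustrative, not taken from the paper.
import math
import torch
import torch.nn.functional as F

def multi_view_scores(q_emb: torch.Tensor, d_embs: torch.Tensor) -> torch.Tensor:
    """Individual scores f_i(q, d): inner product of the query embedding
    E_0(q) with each of the k viewer embeddings E_i(d).
    q_emb: [B, H]; d_embs: [B, N, k, H] -> returns [B, N, k]."""
    return torch.einsum("bh,bnkh->bnk", q_emb, d_embs)

def aggregate_score(scores: torch.Tensor) -> torch.Tensor:
    """Aggregate score f(q, d) = max_i f_i(q, d) (Eq. 4); ANN-friendly,
    since the max decomposes over plain inner products."""
    return scores.max(dim=-1).values  # [B, N]

def global_local_loss(q_emb, d_embs, tau, lam=1.0):
    """Global contrastive loss over one positive and N-1 negative docs,
    plus a local uniformity loss among the positive doc's viewers."""
    scores = multi_view_scores(q_emb, d_embs)          # [B, N, k]
    agg = aggregate_score(scores) / tau                # [B, N]

    # Global (Eq. 6): softmax over the positive and the negatives.
    labels = torch.zeros(agg.size(0), dtype=torch.long, device=agg.device)
    l_global = F.cross_entropy(agg, labels)

    # Local: the max-scored viewer of the positive document should stand
    # out against the other k-1 viewers, preventing viewer collapse.
    pos_scores = scores[:, 0, :] / tau                 # [B, k]
    selected = pos_scores.argmax(dim=-1).detach()
    l_local = F.cross_entropy(pos_scores, selected)

    return l_global + lam * l_local

def annealed_temperature(epoch: int, tau_max=1.0, tau_min=0.3, speed=0.1) -> float:
    """Per-epoch temperature, annealed from tau_max toward tau_min."""
    return max(tau_min, tau_max * math.exp(-speed * epoch))
```

At inference time no loss is involved: each of a document's k viewer embeddings is simply indexed as an independent vector (the paper uses Faiss IndexFlatIP), so the max over viewers in Eq. 4 falls out of ordinary nearest-neighbor search.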
[ "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "objective", "abstain", "method", "method", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "abstain", "other" ]
[ "Humor is an important social phenomenon, serving complex social and psychological functions.", "However, despite being studied for millennia humor is computationally not well understood, often considered an AI-complete problem.", "In this work, we introduce a novel setting in humor mining: automatically detecting funny and unusual scientific papers.", "We are inspired by the Ig Nobel prize, a satirical prize awarded annually to celebrate funny scientific achievements (example past winner: Are cows more likely to lie down the longer they stand?).", "This challenging task has unique characteristics that make it particularly suitable for automatic learning.", "We construct a dataset containing thousands of funny papers and use it to learn classifiers, combining findings from psychology and linguistics with recent advances in NLP.", "We use our models to identify potentially funny papers in a large dataset of over 630 , 000 articles.", "The results demonstrate the potential of our methods, and more broadly the utility of integrating state-of-the-art NLP methods with insights from more traditional disciplines.", "Humor is an important aspect of the way we interact with each other, serving complex social functions (Martineau, 1972).", "Humor can function either as a lubricant or as an abrasive: it can be used as a key for improving interpersonal relations and building trust (Wanzer et al., 1996; Wen et al., 2015), or help us work through difficult topics.", "It can also aid in breaking taboos and holding power to account.", "Enhancing the humor capabilities of computers has tremendous potential to better understand interactions between people, as well as build more natural human-computer interfaces.", "Nevertheless, computational humor remains a long-standing challenge in AI; It requires complex language understanding, manipulation capabilities, creativity, common sense, and empathy.", "Some even claim that computational humor is an AI-complete problem (Stock and Strapparava, 2002).", "As humor is a broad phenomenon, most works on computational humor focus on specific humor types, such as knock-knock jokes or one-liners (Mi-halcea and Strapparava, 2006; Taylor and Mazlack, 2004).", "In this work, we present a novel humor recognition task: identifying quirky, funny scientific contributions.", "We are inspired by the Ig Nobel prize 1 , a satiric prize awarded annually to ten scientific achievements that first make people laugh, and then think.", "Past Ig Nobel winners include Chickens prefer beautiful humans and Beauty is in the eye of the beer holder: People who think they are drunk also think they are attractive.", "Automatically identifying candidates for the Ig Nobel prize provides a unique perspective on humor.", "Unlike most humor recognition tasks, the humor involved is sophisticated, and requires common sense, as well as specialized knowledge and understanding of the scientific culture.", "On the other hand, this task has several characteristics rendering it attractive: the funniness of the paper can often be recognized from its title alone, which is short, with simple syntax and no complex narrative structure (as opposed to longer jokes).", "Thus, this is a relatively clean setting to explore our methods.", "We believe humor in science is also particularly interesting to explore, as humor is strongly tied to creativity.", "Quirky contributions could sometimes indicate fresh perspectives and pioneering attempts to expand the frontiers of science.", "For example, Andre Geim won an Ig 
Nobel in 2000 for levitating a frog using magnets, and a Nobel Prize in Physics in 2010.", "Our contributions are: We formulate a novel humor recognition task in the scientific domain.", "We construct a dataset containing thousands of funny scientific papers.", "We develop multiple classifiers, combining findings from psychology and linguistics with recent NLP advances.", "We evaluate them both on our dataset and in a real-world setting, identifying potential Ig Nobel candidates in a large corpus of over", "0.6M papers.", "We devise a rigorous, data-driven way to aggregate crowd workers' annotations for subjective questions.", "We release data and code (github.com/nadavborenstein/Iggy).", "Beyond the tongue-in-cheek nature of our application, we more broadly wish to promote combining data-driven research with more-traditional works in areas such as psychology.", "We believe insights from such fields could complement machine learning models, improving performance as well as enriching our understanding of the problem.", "Humor in the Humanities.", "A large body of theoretical work on humor stems from linguistics and psychology.", "Ruch (1992) divided humor into three categories: incongruity, sexual, and nonsense (and created a three-dimensional humor test to account for them).", "Since our task is to detect humor in scientific contributions, we believe that the third category can be neglected under the assumption that no nonsense article would (or at least, should) be published (notable exception: the Sokal hoax (Sokal, 1996)).", "The first category, incongruity, was first fully conceptualized by Kant in the eighteenth century (Shaw, 2010).", "The well-agreed extensions to incongruity theory are the linguistic incongruity-resolution model and the semantic script theory of humor (Suls, 1972; Raskin, 1985).", "Both state that if a situation ended in a manner that contradicted our prediction (in our case, the title contains an unexpected term) and there exists a different, less likely rule to explain it, the result is a humorous experience.", "Simply put, the source of humor lies in violation of expectations.", "Example Ig Nobel winners include: Will humans swim faster or slower in syrup? and Coordination modes in the multisegmental dynamics of hula hooping.", "The second category, sex-related humor, is also common among Ig Nobel winning papers.", "Examples include: Effect of different types of textiles on sexual activity.
Experimental study and Magnetic resonance imaging of male and female genitals during coitus and female sexual arousal.", "Humor Detection in AI.", "Most computational humor detection work done in the context of AI relies on supervised or semi-supervised methods and focuses on specific, narrow types of jokes or humor.", "Humor detection is usually formulated as a binary text classification problem.", "Example domains include knock-knock jokes (Taylor and Mazlack, 2004), one-liners (Miller et al., 2017; Simpson et al., 2019; Liu et al., 2018; Mihalcea and Strapparava, 2005; Blinov et al., 2019; Mihalcea and Strapparava, 2006), humorous tweets (Maronikolakis et al., 2020; Donahue et al., 2017; Ortega-Bueno et al., 2018; Zhang and Liu, 2014), humorous product reviews (Ziser et al., 2020; Reyes and Rosso, 2012), TV sitcoms (Bertero and Fung, 2016), short stories (Wilmot and Keller, 2020), cartoon captions (Shahaf et al., 2015), and even That's what she said jokes (Hossain et al., 2017; Kiddon and Brun, 2011).", "Related tasks such as irony, sarcasm and satire have also been explored in similarly narrow domains (Davidov et al., 2010; Reyes et al., 2012; Ptacek et al., 2014).", "Our goal in this paper is to automatically identify candidates for the Ig Nobel prize.", "More precisely, to automatically detect humor in scientific papers.", "First, we consider the question of input to our algorithm.", "Sagi and Yechiam (2008) found a strong correlation between a funny title and a humorous subject in scientific papers.", "Motivated by this correlation, we manually inspected a subset of Ig Nobel winners.", "For the vast majority of them, reading the title was enough to determine whether it is funny; very rarely did we need to read the abstract, let alone the full paper.", "Typical past winners' titles include Why do old men have big ears? and If you drop it, should you eat it?
Scientists weigh in on the 5-second rule.", "An example of a non-informative title is Pouring flows, a paper calculating the optimal way to dunk a biscuit in a cup of tea.", "Based on this observation, we decided to focus on the papers' titles.", "More formally: Given a title t of an article, our goal is to learn a binary function f(t) ∈ {0, 1}, reflecting whether the paper is humorous, or 'Ig Nobel-worthy'.", "The main challenge, of course, lies in the construction of f.", "To take a data-driven approach to this problem, we crafted a first-of-its-kind dataset containing titles of funny scientific papers.", "We started from the 211 Ig Nobel winners.", "Next, we manually collected humorous papers from online forums and blogs (e.g., reddit.com/r/ScienceHumour, popsci.com/read/funny-science-blog, goodsciencewriting.wordpress.com), resulting in 1,707 papers.", "We manually verified all of these papers can be used as positive examples.", "In Section 6 we give further indication that these papers are indeed useful for our task.", "For negative examples, we randomly sampled 1,707 titles from Semantic Scholar (api.semanticscholar.org/corpus/) to obtain a balanced dataset.", "We then classify each paper into one of the following scientific fields: neuroscience, medicine, biology, or exact sciences (using scimagojr.com to map venues to fields).", "We balanced the dataset in a per-field manner.", "While some of these randomly sampled papers could, in principle, be funny, the vast majority of scientific papers are not (we validated this assumption through sampling).", "In deep learning, architecture engineering largely took the place of feature engineering.", "One of the goals of our work is to evaluate the value of features inspired by domain experts.", "In this section, we describe and formalize 127 features implementing insights from the humor literature.", "To validate the predictive power of the features that require training, we divide our data into train and test sets (80%/20%).", "We now describe the four major feature families.", "Research suggests that surprise is an important source of humor (Raskin, 1985; Suls, 1972).", "Indeed, we notice that titles of Ig Nobel winners often include an unexpected term or unusual language, e.g.: On the rheology of cats, Effect of coke on sperm motility and Pigeons' discrimination of paintings by Monet and Picasso.", "To quantify unexpectedness, we create several different language models (LMs):", "N-gram Based LMs.", "We train simple N-gram LMs with n ∈ {1, 2, 3} on two corpora: 630,000 titles from Semantic Scholar, and 231,600 one-line jokes (Moudgil, 2016).", "Syntax-Based LMs.", "Here we test the hypothesis that humorous text has a more surprising grammatical structure (Oaks, 1994).", "We replace each word in our Semantic Scholar corpus with its corresponding part-of-speech (POS) tag.", "We then trained N-gram based LMs (n ∈ {1, 2, 3}) on this corpus.", "Transformer-Based LMs.", "We use three different Transformer-based (Vaswani et al., 2017) models:", "1) BERT (Devlin et al., 2018) (pre-trained on Wikipedia and the BookCorpus),", "2) SciBERT (Beltagy et al., 2019), a variant of BERT optimized on scientific text from Semantic Scholar, and", "3) GPT-2 (Radford et al., 2019), a large Transformer-based LM, trained on a dataset of 8M web pages.", "We fine-tuned GPT-2 on our Semantic Scholar corpora (details in Appendix C.1).", "Using the LMs.", "For each word in a title, we compute the word's perplexity.", "For the N-gram
LMs and GPT-2, we compute the probability of seeing the word given the previous words in the sentence (the n-1 previous words in the case of the N-gram models and all the previous words in the case of GPT-2).", "For the BERT-based models, we compute the masked loss of the word given the sentence.", "For each title, we computed the mean, maximum, and variance of the perplexity across all words in the title.", "Inspired by previous findings (Ruch, 1992; Gultchin et al., 2019), we hypothesize that titles of funny papers tend to be simpler (e.g., the past Ig Nobel winners: Chickens prefer beautiful humans and Walking with coffee: Why does it spill?).", "We utilize several simplicity measures: Length.", "Short titles and titles containing many short words tend to be simpler.", "We compute title length and word lengths (mean, maximum, and variance of word lengths in the title).", "Readability.", "We use the automated readability index (Smith and Senter, 1967).", "Age of Acquisition (AoA).", "A well-established measure of word difficulty in psychology (Brysbaert and Biemiller, 2017), denoting a word's difficulty by the age at which a child acquires it.", "We compute the mean, maximum and variance of AoA.", "AoA and Perplexity.", "Many basic words can be found in serious titles (e.g., 'water' in a hydraulics paper).", "Funny titles, however, contain simple words which are also unexpected.", "Thus, we combine AoA with perplexity.", "We compute word perplexity using the Semantic Scholar N-gram LMs and divide it by AoA.", "Higher values correspond to simple and unexpected words.", "We compute the mean, maximum, minimum, and variance.", "According to relief theory, crude and scatological connotations are often considered humorous (Shurcliff, 1968) (e.g., the Ig Nobel winners Duration of urination does not change with body size and Acute management of the zipper-entrapped penis).", "We trained a Naive Bayes SVM (Wang and Manning, 2012) classifier on a dataset of toxic and rude Wikipedia comments (Zafar, 2018), and compute a title's probability of being crude.", "Similar to the AoA feature, we believe that crude words should also be unexpected to be considered funny.", "As before, we divide perplexity by the word's probability of being benign.", "Higher values correspond to crude and unexpected words.", "We compute the mean, maximum, minimum, and variance.", "Some words (e.g., nincompoop, razzmatazz) are inherently funnier than others (due to various reasons surveyed by Gultchin et al.
(2019)).", "It is reasonable to assume that the funniness of a title is correlated with the funniness of its words.", "We measure funniness using the model of Westbury and Hollis (2019), quantifying noun funniness based on humor theories and human ratings.", "We measure the funniness of each noun in a title.", "We also multiply perplexity and funniness (for funny and unexpected) and use the mean, maximum, minimum, and variance.", "As a first reality check, we plotted the distribution of our features between funny and not-funny papers (see Appendix A.1 for representative examples).", "For example, we hypothesized that titles of funny papers might be linguistically similar to one-liners, and indeed we saw that the one-liner LM assigns lower perplexity to funny papers.", "Similarly, we saw a difference between the readability scores.", "We verified the differences using a statistical significance test (see Table 1).", "Interestingly, all feature families include useful features.", "Combining perplexity with other features (e.g., surprising and simple words) was especially prominent.", "In the next sections, we describe how we use those features to train models for detecting Ig Nobel-worthy papers.", "We can now create models to automatically detect scientific humor.", "As mentioned in Section 4, one of our goals in this paper is to compare the huge-model SOTA NLP approach with the literature-inspired approach.", "Thus, we trained a binary multi-layer perceptron (MLP) classifier using our dataset (described in Section 3; see reproducibility details in Appendix C.2), receiving as input the 127 features from Section 4.", "We named this classifier 'Iggy', after the Ig Nobel prize.", "As baselines representing the contemporary NLP approach (requiring huge compute and training data), we used BERT (Devlin et al., 2018) and SciBERT (Beltagy et al., 2019), which is a BERT variant optimized on scientific corpora, rendering it potentially more relevant for our task.", "We fine-tuned SciBERT and BERT for Ig Nobel classification using our dataset (see Appendix C.3 for implementation details).", "We also experimented with two models combining BERT/SciBERT with our features (see Figure 6 in Appendix C.4), denoted as BERT_f/SciBERT_f.", "In the spirit of the original BERT paper, we added two linear layers on top of the models and used a standard cross-entropy loss.", "The input to this final MLP is the concatenation of two vectors: our features' embedding and the last hidden vector from BERT/SciBERT ([CLS]).", "See Appendix C.4 for implementation details.", "For the sake of completeness, we note that we also conducted exploratory experiments with simple syntactic baselines (title length, maximal word length, title containing a question, title containing a colon) as well as BERT trained on sarcasm detection.", "None of these baselines was strong enough on its own.", "We note that the colon baseline tended to catch smart-aleck titles, but the topic was not necessarily funny.", "The sarcasm baseline achieved near guess-level accuracy
(0.482), emphasizing the distinction between the two humor tasks.", "We first evaluate the five models (Iggy, SciBERT, BERT, SciBERT_f and BERT_f) on our labeled dataset in terms of general accuracy and Ig Nobel retrieval ability.", "As naive baselines, we added two bag-of-words (BoW) based classifiers: random forest (RF) and logistic regression (LR).", "Accuracy.", "We randomly split the dataset into train, development, and test sets (80%/10%/10%), and used the development set to tune hyper-parameters (e.g., learning rate, number of training epochs).", "Table 2 summarizes the results.", "We note that all five models achieve very high accuracy scores and that the simple BoW models fall behind.", "This gives some indication of the inherent difficulty of the task.", "Both the feature-based Iggy and the BERT-based models outperform the simple baselines.", "SciBERT_f outperforms the other models across all measures.", "Ig Nobel Winners Retrieval.", "Our positive examples consist of 211 Ig Nobel winners and an additional 1,496 humorous papers found on the web.", "Thus, the portion of real Ig Nobel winning papers in our data is relatively small.", "We now measure whether our web-originated papers serve as a good proxy for Ig Nobel winners.", "Thus, we split the dataset differently: the test set consists of the 211 Ig Nobel winners, plus a random sample of 211 negative titles (slightly increasing the test set size to 12%).", "The train set consists of the remaining 2,992 papers.", "This experiment follows our initial inspiration of finding Ig Nobel-worthy papers, as we test our models' ability to retrieve only the real winners.", "Table 3 demonstrates that our web-based funny papers are indeed a good proxy for Ig Nobel winners.", "As in the previous experiment, the combination of SOTA pretrained models with literature-based features is superior.", "We test our models in a more realistic setting: we run them on a large sample of scientific papers, ranking each paper according to the model's certainty in the label ('humorous'), and identifying promising candidates.", "We use the same dataset of 630K papers from Semantic Scholar used for training the LMs (Section 4).", "We compute funniness according to our models (excluding random forest and logistic regression, which performed poorly).", "Table 4 shows examples of top-rated titles.", "We use the Amazon Mechanical Turk (MTurk) crowdsourcing platform to assess the models' performance.", "In an exploratory study, we asked people to rate the funniness of titles on a Likert scale of 1 to 5.", "We noted that people tended to confuse a funny research topic with a funny title.", "For example, titles like Are you certain about SIRT? or NASH may be trash received high funniness scores, even though the research topic is not even clear from the title.", "To mitigate this problem, we redesigned the study to include two 5-point Likert scale questions:", "1) whether the title is funny, and", "2) whether the research topic is funny.", "This addition seems to indeed help workers understand the task better.", "Example papers rated as serious title, funny topic include Hat-wearing patterns in spectators attending baseball games: a 10-year retrospective comparison.", "Funny title, serious topic examples include Slicing the psychoanalytic pie: or, shall we bake a new one?
Commentary on Greenberg.", "Unless stated otherwise, the evaluation in the remainder of the paper was done on the funny topic Likert scale.", "We paid crowd workers", "$0.04 per title.", "As this task is challenging, we created a qualification test with 4 titles (8 questions), allowing for one mistake.", "The code for the task and test can be found in the repository.", "We also required workers to have completed at least 1,000 approved HITs with at least a 97% success rate.", "All algorithms classified and ranked (according to certainty) all 630K papers.", "However, in any reasonable use case, only the top of the ranked list will ever be examined.", "There is a large body of work, both in academia and industry, studying how people interact with ranked lists (in particular, search result pages) (Kelly and Azzopardi, 2015; Beus, 2020).", "Many information retrieval algorithms assume that the likelihood of the user examining a result decreases exponentially with rank.", "The conventional wisdom is that users rarely venture into the second page of search results.", "Thus, we posit that in our scenario of Ig Nobel recommendations, users will be willing to read only several tens of results.", "We choose to evaluate the top-300 titles for each of our five models, to study (in addition to the performance at the top of the list) how performance decays.", "We also included a baseline of 300 randomly sampled titles from Semantic Scholar.", "Altogether we evaluated 1,375 titles (due to overlap).", "Each title was rated by five crowd workers.", "Overall, 13 different workers passed our test.", "Seven workers annotated fewer than 300 titles, while four annotated over 1,300 each.", "Decision rule.", "Each title was rated by five different crowd workers on a 1 to 5 scale.", "There are several reasonable ways to aggregate these five continuous scores into a binary decision.", "A commonly-used aggregation method is the majority vote.", "The majority vote should return the clear-cut humorous titles.", "However, we stress that humor is very subjective [Table 5: decision rules with their thresholds and expert correlation]", "(and in the case of scientific humor, quite subtle).", "Indeed, annotators had low agreement on the topic question (average pairwise Spearman ρ = 0.27).", "Thus, we explored more aggregation methods.", "Our hypothesis class is of the general form at least k annotators gave a score of at least m.", "To pick the best rule, we conducted two exploratory experiments: In the first one, we recruited an expert scientist and thoroughly trained him on the problem.", "He then rated 90 titles and we measured the correlation of different aggregations with his ratings.", "Results are summarized in Table 5: the highest-correlation aggregation is when at least one annotator crossed the 3 threshold
(Spearman ρ = 0.7).", "In the second experiment, we used the exact same experimental setup as the original task, but with labeled data.", "We used 100 Ig Nobel winners as positives and a random sample of 100 papers as negatives.", "The idea was to see how crowd workers rate papers that we know are funny (or not).", "Table 5 shows the accuracy of each aggregation method.", "Interestingly, the highest accuracy is achieved with the same rule as in the first experiment (at least one annotator crossing 3).", "Thus, we chose this aggregation rule.", "We believe the method outlined in this section could be more broadly applicable to the aggregation of crowd-sourced annotations for subjective questions.", "Results.", "Figure 1 shows precision at k for the top-rated 300 titles according to each model.", "The random baseline is", "0.03.", "Upon closer inspection, these seem to be false positives of the annotation.", "For completeness, see Figure 3 in Appendix A.2.", "There is a long-running debate about whether it is valid to average Likert scores.", "We believe we cannot treat the ratings in this study as interval data.", "In this range, Iggy slightly outperforms the other four models (BERT is particularly bad, as it picks up on short, non-informative titles).", "For larger k values, SciBERT and BERT_f take the lead.", "We note that even at k = 300, all models still achieve considerable (absolute) precision.", "We obtain similar results using normalized discounted cumulative gain (nDCG), a common measure of ranking quality (see Table 6 for nDCG scores for the top 50 and the top 300 papers).", "Overall, these relatively high scores suggest that our models are able to identify funny papers.", "We stress that Iggy is a small and simple network (33K parameters), compared to the pretrained 110-million-parameter BERT-based models.", "Yet despite its simplicity, Iggy's performance is roughly comparable to the BERT-based methods.", "We believe this demonstrates the power of implementing insights from domain experts.", "We hypothesize that if the fine-tuning dataset were larger, BERT_f and SciBERT_f would outperform the other models.", "Taking a closer look at the actual papers in the experiment of Section 7, the overlap between the three feature-based models is 26-56% (for 1 < k < 50) and 39-62% (for 1 < k < 300).", "BERT had very low overlaps with all other models (0% in the top 50, 10% in all 300).", "SciBERT had almost no overlap in the top 50 (maximum 2%), and 10-40% in all 300 (see full details in Appendix A.3).", "We believe this implies that the features were indeed important and informative for both BERT_f and SciBERT_f.", "We have seen Iggy performs surprisingly well, given its relative simplicity.", "[Figure 1: Precision at k for our chosen decision rule.] In this section, we", "wish to better understand the reasons.", "We chose to analyze Iggy with Shapley additive explanations (SHAP) (Lundberg and Lee, 2017).", "SHAP is a feature attribution method to explain the output of any black-box model, shown to be superior to more traditional feature importance methods.", "Importantly, SHAP provides insights both globally and locally (i.e., for specific data points).", "Global interpretability.", "We compute feature importance globally.", "Among the top contributing features we see multiple features corresponding to incongruity (both alone and combined with funniness) and to word/sentence simplicity.", "Interestingly, features based on the one-liner jokes seem to play an important role (see Figure 4 in Appendix A.4).", "Local
interpretability.", "To understand how Iggy errs, we examined the SHAP decision plots for false positives and false negatives (see Figure 5 in Appendix A.4).", "These show the contribution of each feature to the final prediction for a given title, and thus can help in debugging the model.", "Looking at false negatives, it appears that various perplexity features misled Iggy, while funniness and joke LM steered it in the right direction.", "We see a contrary trend in false positives: perplexity helped, and joke LM confused the classifier.", "We also observe that the model learned that a long title is an indication of a serious paper.", "We expected our rudeness classifier to play a bigger role in some of the titles (e.g., Adaptive interpopulation differences in blue tit life-history traits on Corsica), but the signal was inconclusive, perhaps indicating our rudeness classifier is lacking.", "We now take a more qualitative approach to understanding the models.", "First, we set out to explore whether the models confuse funny titles and funny topics.", "Using the crowdsourced annotations from Section 7, we measure the portion of this mistake in the top-rated 300 titles of all five models.", "That is, we check in how many cases our models classify a title as Ig Nobel-worthy while the workers have classified it as a 'funny title' with a 'non-funny topic'.", "Iggy had the highest degree of such confusion (0.28).", "Similarly, BERT_f and SciBERT_f exhibit more confusion than the versions without features (0.24 and 0.19, compared to 0.13 and 0.08).", "The random baseline is 0.02.", "Examples of this kind of error include A victim of the Occam's razor., While waiting to buy a Ferrari, do not leave your current car in the garage!, and Reinforcement learning: The good, the bad and the ugly?.", "All were classified as Ig Nobel-worthy, although their topic is serious (or even unclear from the title).", "Looking closer at the data, we observe that a high portion of these are editorials with catchy titles.", "As our dataset does not differentiate between editorials and real research contributions, filtering editorials is not straightforward.", "Interestingly, the portion of editorials is also greater in the area of lowest annotator agreement, hinting that this confusion also occurs in humans.", "In addition to editorials, we notice another category of papers causing the same type of confusion.", "There are papers dealing with disturbing or unfortunate topics (violence, death, sexual abuse), whose titles include literary devices used to lighten the mood.", "Censored (for the readers' own wellbeing) examples include Licorice for hepatitis C: yum-yum or just ho-hum? and The song of the siren: Dealing with masochistic thoughts and behaviors.", "A note on scientific disciplines.", "Another observation we make concerns the portion of Ig Nobel-worthiness across the different scientific disciplines.", "We notice that most papers classified by our models as funny belong to the social sciences (Dogs can discriminate human smiling faces from blank expressions) or medicine (What, if anything, can monkeys tell us about human amnesia when they can't say anything at all?), compared to the exact sciences (The kinematics of eating with a spoon: bringing the food to the mouth, or the mouth to the food?).", "We believe this might be the case since, quite often, social sciences and medicine papers study topics that are more familiar to the layperson.", "We also note that although our models performed about the same across the
different disciplines, they were slightly better in psychology.", "In this work, we presented a novel task in humor recognition: detecting funny and unusual scientific papers, which represents a subtle and sophisticated humor type.", "It has important characteristics (short, simple syntax, stand-alone) making it a (relatively) clean setting to explore computational humor.", "We created a dataset of funny papers and constructed models, distilling the humor literature into features as well as harnessing state-of-the-art advances in NLP.", "We conducted experiments both on our dataset and in a real-world setting, identifying funny papers in a corpus of over 0.6M papers.", "All models were able to identify funny papers, achieving high nDCG scores.", "Interestingly, despite the simplicity of the literature-based Iggy, its performance was overall comparable to complex, BERT-based models.", "Our dataset can be further used for various humor-related tasks.", "For example, it is possible to use it to create an aligned corpus, pairing every funny paper title with a nearly identical but serious title, using methods similar to West and Horvitz (2019).", "This would allow us to understand why a paper is funny at a finer granularity, by identifying the exact words that make the difference.", "This technique will also allow exploring different types of funny.", "Another possible use of our dataset is to collect additional metadata about the papers (e.g., citations, author information) to explore questions about whether funny science achieves disproportionate attention and engagement, and who tends to produce it (and at which career stage), with implications for the science of science and science communication.", "Another interesting direction is to expand beyond paper titles and consider the paper abstract, or even the full text.", "This could be useful in examples such as the Ig Nobel winner Cure for a Headache, which takes inspiration from woodpeckers to help cure headaches in humans.", "Finally, we believe multi-task learning is a direction worth pursuing towards creating a more holistic and robust humor classifier.", "In multi-task learning, the learner is challenged to solve multiple problems at the same time, often resulting in better generalization and better performance on each individual task (Ruder, 2017).", "As multi-task learning enables unraveling cross-task similarities, we believe it might be particularly fruitful to apply to tasks highlighting different aspects of humor.", "We believe our dataset, combined with other task-specific humor datasets, could assist in pursuing such a direction.", "Despite the tongue-in-cheek nature of our task, we believe that computational humor has tremendous potential to create personable interactions, and can greatly contribute to a range of NLP applications, from chatbots to educational tutors.", "We also wish to promote complementing data-driven research with insights from more traditional fields.", "We believe combining such insights could, in addition to improving performance, enrich our understanding of core aspects of being human.", "We thank the reviewers for their insightful comments.", "We thank Omri Abend, Michael Doron, Meirav Segal, Ronen Tamari, and Moran Mizrahi for their help, and Shuki Cohen for preliminary discussions.", "This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant no.
852686, SIAM), US National Science Foundation, US-Israel Binational Science Foundation (NSF-BSF) grant no. 2017741, and Amazon Research Awards." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "method", "objective", "method", "objective", "abstain", "abstain", "method", "result", "other", "other", "other", "objective", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "objective", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "objective", "abstain", "abstain", "method", "abstain", "result", "method", "objective", "abstain", "result", "other", "other", "other" ]
[ "Podcasts have shown a recent rise in popularity.", "Summarization of podcasts is of practical benefit to both content providers and consumers.", "It helps people quickly decide whether they will listen to a podcast and/or reduces the cognitive load of content providers to write summaries.", "Nevertheless, podcast summarization faces significant challenges including factual inconsistencies of summaries with respect to the inputs.", "The problem is exacerbated by speech disfluen-cies and recognition errors in transcripts of spoken language.", "In this paper, we explore a novel abstractive summarization method to alleviate these issues.", "Our approach learns to produce an abstractive summary while grounding summary segments in specific regions of the transcript to allow for full inspection of summary details.", "We conduct a series of analyses of the proposed approach on a large podcast dataset and show that the approach can achieve promising results.", "Grounded summaries bring clear benefits in locating the summary and transcript segments that contain inconsistent information, and hence improve summarization quality in terms of automatic and human evaluation.", "Podcasts are one of the most popular forms of new media.", "As of today, over 155 million people listen to a podcast every week (Christian, 2021).", "With the growing interest, there is an increased demand for textual summaries that foretell the content of podcasts.", "Those summaries help people decide, in a few seconds, if they will listen to a podcast or subscribe to the channel.", "They are helpful for users who want to find podcasts previously listened to.", "Furthermore, they can be re-purposed for social media posts or email marketing campaigns, enabling content creators to make their podcasts accessible to a larger audience.", "It is desirable to generate grounded summaries from podcast transcripts, where spans of summary text are closely tethered to the original audio.", "Figure 1 provides an example of a grounded abstractive summary.", "When a user clicks on a summary segment, she will be directed to an audio clip that gives further detail of the conversational context.", "Grounded summaries give us a preview of notable podcast clips (Shalom, 2019) and they may further release summarization service providers from potential legal claims by directing users to the original audio.", "This is because, speech recognizers induce transcription errors and abstractive summarization models may hallucinate facts that are not entailed by the original (Kryscinski et al., 2020), both can cause podcast summaries to contain misleading or inaccurate information.", "With grounded summaries, users are able to frame, interpret, and place into context any system-generated summaries, thus reducing the barriers to deploy podcast summarization technology.", "One may attempt to align summary text and podcast transcripts in a post-processing step to generate grounded summaries.", "Unfortunately, hallucinations do not allow for proper alignments as they are not found in the transcripts (Maynez et al., 2020).", "Hierarchical attention models may seem promising for this task (Liu and Lapata, 2019).", "However, the excessive length of the transcripts makes it difficult to produce attention distributions over the entire transcripts.", "Recent evidence suggests that attention weights are not reliable indicators of the relative importance of inputs (Jain and Wallace, 2019), thus it remains an open question whether attention can be used to find 
alignments between transcripts and summary segments.", "In this paper, we seek to generate grounded summaries from podcast transcripts by exploring an on-demand abstractive summarizer.", "It mimics how a human might approach a lengthy transcript: the expert would identify a portion of the transcript that is deemed most important and relevant to the existing summary, use it as a ground to produce a new piece of the summary, and that process is repeated until the summary is finished.", "Our summarizer employs a novel regularization technique that enables it to visit portions of the transcript in chronological order, while allowing zigzags in order to produce a coherent summary.", "This has another implication.", "It implies that we may estimate what percentage of a podcast transcript is covered by the summary and thus adjust that when necessary.", "Distinguishing our work from earlier research on extract-then-abstract methods (Hsu et al., 2018; Chen and Bansal, 2018; Gehrmann et al., 2018; Lebanoff et al., 2019; Jin et al., 2020; Pilault et al., 2020), we require selected transcript chunks to have high salience, but also that salient content must appear at the beginning of the selected chunks, so that the corresponding audio clips can provide good jump-in points for users to start listening.", "Our experiments are performed on a large podcast summarization dataset containing over 100,000 English podcasts (Clifton et al., 2020).", "We show that our proposed grounded summarizer can perform competitively with or better than state-of-the-art methods, including recent methods that leverage large, pretrained models (Lewis et al., 2020; Beltagy et al., 2020), as judged by automatic metrics and human evaluation.", "Our contributions in this paper are as follows.", "We address the problem of podcast summarization by investigating an on-demand summarizer that produces grounded abstracts.", "The abstracts help users quickly decide if they will listen to the podcasts and offer a sampler of salient podcast clips.", "The on-demand summarizer does not need to encode the entire transcript, and hence substantially reduces the GPU memory footprint.", "We conduct a series of analyses to gain insights into the impact of specific design decisions.", "They include how a transcript chunk should be defined, whether those transcript chunks overlap, to what extent the summary content is taken verbatim from selected chunks, and how the summary may be extended to cover more information.", "Through extensive experiments on a benchmark podcast dataset, we demonstrate the effectiveness of our proposed approach and show results that are comparable to human writer performance.", "The approach opens an avenue towards generating a new kind of abstractive summary that allows users to verify the information consistency of summary parts against the original audio clips.", "With the rapid rise of podcasts comes the need for automatic summarization of podcast transcriptions.", "While comparatively understudied, recent work has shown great progress.", "Clifton et al. (2020) present the Spotify dataset that was adopted in TREC 2020 for the podcast summarization task.", "Our participating system in TREC 2020 focuses on identifying salient segments from transcripts and using them as input to an abstractive summarizer (Song et al., 2020).", "Reddy et al.
(2021) develop classifiers to detect and eliminate extraneous marketing materials in podcasts to aid summarization.", "In this paper, we explore techniques that generate grounded podcast summaries where pieces of summary text are tied to short podcast clips.", "One of the most serious problems of neural abstractive summarization is that the summaries can contain factually incorrect information and hallucinations (Falke et al., 2019; Kryscinski et al., 2020; Maynez et al., 2020; Lebanoff et al., 2020).", "Without grounded summarization, users have to listen to the full episodes to find connections between details of the summaries and the original podcasts.", "If successful, grounded summaries will benefit a number of summarization tasks where the input involves lengthy transcripts, including meetings (Li et al., 2019; Koay et al., 2020, 2021; Zhong et al., 2021), medical conversations (Liu and Chen, 2019), interviews (Zhu et al., 2021), livestreams (Cho et al., 2021) and more.", "An extract-then-abstract strategy could be used to produce grounded abstractive summaries (Chen and Bansal, 2018; Gehrmann et al., 2018; Hsu et al., 2018; Jin et al., 2020; Pilault et al., 2020).", "Most of these approaches are tailored to written documents, e.g., news, Wikipedia, and scholarly articles.", "They extract sentences from the documents and use them as input to an abstractive summarization model to produce a summary.", "Nevertheless, transcripts of spoken language lack essential document structure such as sentence, paragraph and section boundaries, making it unclear how these approaches will perform on podcasts.", "Attention provides another mechanism for aligning the summary and transcript segments.", "The use of sparse attention allows a summarization model to potentially scale to longer documents (Beltagy et al., 2020; Kitaev et al., 2020; Huang et al., 2021).", "The Hierarchical Transformer encodes multiple paragraphs in a hierarchical manner to allow them to exchange information (Liu and Lapata, 2019; Fabbri et al., 2019; Chen and Yang, 2020).", "However, it has been shown that attention weights are not reliable indicators of the relative importance of inputs, as alternative attention distributions would have yielded similar results (Jain and Wallace, 2019).", "Our approach in this paper is to better align summary segments with chunks of the transcripts to allow easy tracing of inconsistent information.", "It features a generator that writes a summary from beginning to end, and a savvy selector that knows when to switch to a new transcript chunk and where to switch to.", "Differing from PG networks (See et al., 2017) and retrieval-augmented generation (Guu et al., 2020; Lewis et al., 2021), our selector places heavy emphasis on the modeling and selection of transcript chunks.", "A desirable chunk is expected to be about 2 minutes long and to place important information at the beginning to enable easy user verification.", "In the following section, we present details of the model implementation.", "A major challenge facing podcast summarization is the dramatic length difference between source and target sequences.", "At a speaking rate of 122 words per minute for spontaneous speech (Polifroni et al., 1991), the full transcript of a 1-hour-long episode contains roughly 7,000 words, and that of a 1.5-hour-long episode could reach 10,000 words.", "In contrast, a podcast summary is short, containing on average 61 words according to Manakul and Gales (2020).", "The ratio of their lengths could reach as high as 100-to-1,
and this motivates our study of abstractive grounded summarization, where summary segments are grounded to selected chunks of the transcript as a way of combating the inevitable errors that occur in podcast summarization.", "Let x be the sequence of tokens in the source transcript and y be the sequence of tokens in the summary.", "These tokens share the same vocabulary V.", "We use x_C to denote a chunk of the transcript, where C gives the indices of the tokens that belong to the chunk.", "The full transcript can be decomposed into a sequence of chunks, denoted by {C_1, ..., C_M}.", "The chunks may have varying sizes and overlap with each other; they are the grounds for generating a podcast summary.", "Our assumption is twofold.", "Firstly, we assume a summary segment is produced by conditioning on the previously generated tokens (y_{<j}) and a specific chunk of the transcript.", "Secondly, there exists a function G(x, y_{<j}) (Eq. (1)) that determines the most appropriate grounding chunk for generating all tokens of the segment.", "Particularly, when the entire transcript is treated as a single chunk, it reduces to the standard conditional generation model p(y_j | y_{<j}, x).", "Thus, the crucial point is a coarse segmentation of the source transcript and an alignment between the transcript chunks and summary segments.", "In this work we use a sliding window to produce transcript chunks, with window size W and stride size S.", "The sizes can be measured in terms of tokens.", "E.g., W=256 and S=128 tokens will produce a series of fixed-length chunks that overlap with each other.", "The rationale for using overlapping chunks is to find those that serve both as grounds for summary generation and as good jump-in points for user verification.", "The sizes can also be measured by the number of sentences.", "E.g., W=20 and S=20 sentences produce a set of varying-length, non-overlapping chunks.", "In spoken language, a series of consecutive short sentences often indicates the content is relatively unimportant (Marge et al., 2010).", "Given a summary segment ỹ, we designate x_C as a grounding chunk if it attains the highest score S(x_C, ỹ) (Eq. (2)).", "This position-biased coverage score favors the transcript chunk that covers summary bigrams and puts summary content at the beginning, to aid humans in performing content verification.", "It measures the percentage of unique summary bigrams B(ỹ) covered by a chunk x_C.", "Particularly, I[b_k ∈ x_C] is an indicator that returns 1 if the bigram b_k appears in x_C and 0 otherwise.", "Each bigram b_k has an associated weight w_k (Eq. (3)).", "If it appears in the first position of x_C (pos_k = 0), it receives a weight of one.", "Otherwise, the weight is decayed according to the relative position of the bigram's first occurrence in the chunk (pos_k), where γ is a coefficient for the decay.", "S(x_C, ỹ) = (1/|B(ỹ)|) Σ_{b_k ∈ B(ỹ)} w_k · I[b_k ∈ x_C] (2); w_k = 1 - γ · pos_k/|C|, γ ∈ [0, 1] (3).", "We proceed by training a neural encoder-decoder model to generate an abstractive summary from the grounding transcript chunks.", "Each segment of the summary (= sentence) is generated conditioned on its grounding chunk x_C and all the previously generated tokens y_{<j}.", "The process starts from the first chunk of the transcript, x_{C_1}.", "Discourse segmentation is beyond the scope of this work.", "There is little to no data available to build a discourse segmentation tool and little existing work on discourse
analysis of podcasts.", "We refer the reader to Joty et al. (2019) for recent advances in discourse processing research.", "If a summary segment cannot be mapped to a chunk using Eqs. (2)-(3), we perform the following: ỹ is assigned to the first chunk C_1 if it is the first segment of the summary.", "Otherwise, ỹ is assigned to the same chunk as the previous summary segment to improve coherence.", "We require x_C and ỹ to have a minimum of four shared bigrams (stopword-only bigrams are excluded).", "Future work may consider aligning transcripts and summaries based on propositions (Ernst et al., 2020).", "We use sentences as summary segments; other sentence-like segments are possible in future work.", "The encoder converts this grounding chunk into a sequence of hidden vectors [h_{C,1}, ..., h_{C,m}] (Eq. (4)).", "The decoder predicts the next summary token y_j (Eq. (5)) and continues to do so until a switch point is detected.", "At this point the current summary segment is finished and the decoder is poised to select the next transcript chunk x_{C_new} and generate a new summary segment from it.", "The decoding process finishes when a special symbol ([sep]) is predicted that indicates the end of the summary.", "[h_{C,1}, ..., h_{C,m}] = Encode(x_C) (4); y_j = Decode(y_{<j}, [h_{C,1}, ..., h_{C,m}]) (5); G(x, y_{<j}) = x_{C_1} if j = 1; x_{C_new} if j > 1 and a switch is predicted; G(y_{<j-1}) if j > 1 and no switch is predicted (1).", "There is a notable difference between our approach and most extract-then-abstract approaches, which select important sentences from the document and provide them to the abstractor all at once.", "As illustrated in Figure 2, strong position bias causes the abstractor to use only content at the beginning of the input to generate a summary.", "By exposing the chunks progressively, our approach naturally makes use of this characteristic to consolidate information from multiple source chunks.", "It reduces the amount of computation necessary to train the encoder-decoder model, as only the selected transcript chunks are encoded, whose number is equal to the number of summary segments.", "Moreover, it is possible to encourage the summary to have good coverage of the source content by specifying a minimal set of grounding chunks to be used for generation.", "Regularizing Chunk Selection.", "Learning the function G(x, y_{<j}) that predicts a transcript chunk x_C to switch to is crucial for success at inference time.", "Let there be M transcript chunks and N summary segments in a training instance.", "We define p_j^c to be the model probability that the c-th chunk is predicted as the ground for generating the j-th summary segment; c* is the gold chunk obtained using Eqs. (2)-(3).", "Our learning objective is a cross-entropy loss against the gold labels with a novel regularizing term R, to enable chunks to be selected as per their original order in the transcript while allowing zigzags to produce a coherent summary (Eqs. (6)-(7)).", "L(θ) = -Σ_{j=1}^{N} log p_j^{c*} + λ · R (6); R = (1/N) Σ_{j=1}^{N} Σ_{c=1}^{M} max(0, s_{j+1}^c - s_j^c) (7).", "Particularly, s_j^c = Σ_{c'=1}^{c} p_j^{c'} denotes the sum of the probability assigned to all chunks up to the c-th position, in order to generate the j-th summary segment.", "We encourage Σ_{c=1}^{M} max(0, s_{j+1}^c - s_j^c) to be small so that if a chunk (up to the c-th position) is assigned to the j-th summary segment, it is unlikely to be assigned to the (j+1)-th segment.", "R is designed to
regularize the loss and penalize violations; λ is its coefficient, which is tuned on the validation set.", "Given a partial summary y_{<j}, selecting the next transcript chunk depends on two factors.", "Firstly, it should be a chunk that contains salient content at its beginning.", "We use I(x_C) to denote the importance of the chunk.", "It is obtained by encoding the chunk into a vector h_{x_C} using RoBERTa (Liu et al., 2019), then applying a feedforward network to it to estimate the importance (Eq. (9)).", "The parameters of FFN_1 are pretrained on an extraction task that favors chunks that contain summary content at the beginning.", "For each chunk, we compute its position-biased coverage score (Eq. (2)) against the entire summary.", "1/4 of the chunks that yield the highest coverage scores are designated as positive instances; the remaining are negative instances.", "FFN_1 is thus pretrained as a binary classifier.", "p_j^c ∝ exp(I(x_C) + R(x_C, y_{<j})) (8); I(x_C) = FFN_1(h_{x_C}) (9); R(x_C, y_{<j}) = FFN_2([h_{x_C} || h_{y_{<j}}]) + LowRank(h_{x_C} W h_{y_{<j}}) (10).", "Secondly, the chunk may be relevant to the partial summary y_{<j}.", "We define the relevance score R(x_C, y_{<j}) to capture two levels of interaction between the candidate chunk, represented by h_{x_C}, and the last hidden state of the partial summary, represented by h_{y_{<j}}.", "Their linear interaction is captured by a feedforward network (FFN_2), and the bilinear interaction is modelled by h_{x_C} W h_{y_{<j}}, where a low-rank approximation is used: LowRank(p W q) = (p U)(V q).", "The score p_j^c is the likelihood that the c-th chunk is assigned to the j-th summary segment, considering saliency and content relevancy.", "Switch Point.", "A skilled writer pauses after writing down a sentence.", "We borrow that intuition to inform the construction of a switch-point predictor.", "The model combines the last hidden state of the summary sequence h_{y_{<j}} and the embedding of the anticipated token E(y_j), and uses a feedforward network FFN_3 to predict if the j-th decoding step corresponds to a switch point (Eq. (11)).", "During training, the last token of each summary sentence is a ground-truth switch point.", "At inference time, the model predicts a switch point if p(switch) exceeds a threshold, at which point we compute p_j^c to decide the next chunk.", "Note that the model may choose to use the same transcript chunk after switching.", "With over 100,000 podcast episodes, the Spotify dataset (Clifton et al., 2020) is one of the largest corpora available for podcast search and summarization.", "It encompasses a wide range of topics: travel, business, sports, book reviews, mysteries, guided meditations, nutrition and weight loss, among others.", "Each episode is accompanied by an audio file, an automatic transcript generated by Google's Speech-to-Text API, and metadata provided by the podcast creator.", "We do not use the audio data in this paper.", "Our summarizer takes as input a transcript and uses the creator-provided episode description as the reference summary.", "Data Filtering.", "Episode descriptions provided by podcast creators show wide variations in quality.", "When noisy descriptions are used as reference summaries, they can cause a summarizer to hallucinate content.", "We conduct aggressive filtering of the training data to remove low-quality creator descriptions, so as to maintain a balance between the amount of training examples available and the quality of those examples.", "We clean up reference
summaries at the token, sentence, and summary level.", "Table 1: Grounded abstractive summaries (GrndAbs-*) demonstrate a high level of specificity compared to summaries without grounding.", "...and specific products we love!", "hk_uu_podcast1: In this episode, Jessica and Natalie go head-to-head in the Great Exfoliation Debate!", "They each advocate their own type of exfoliator and try each other's products to see if they're worth the price difference.", "They also do a wine pairing and talk about the pros and cons of each of the products they tried.", "UCF_NLP2: In this weeks episode, Jessica and Natalie go head-to-head in the great exfoliation debate.", "They each advocate for their own type of exfoliator, and then try each other's products for 10 minutes to see what they think.", "We also talk about the pros and cons of each type of product and recommend a wine to pair with this episode.", "Santa Julia Winemakers Reserve Mountain Blend.", "cued_speechUniv2: In this episode of the Great Exfoliation Debate, Jessica and Natalie talk about their favorite types of exfoliators and the pros and cons of each of their favorite products.", "We also do a wine pairing and talk about the benefits and drawbacks of different types of chemical and physical exfoliation products.", "GrndAbs-to: This week we are talking about what we like to call the Great Exfoliation Debate.", "Because we've got two different points of view and we are going to Duke it out mano a mano this week.", "We will also of course do our wine pairing because we are your Somali A's and this week we're going with something a little bit more aggressive...a little bit bold.", "GrndAbs-sn: In this episode, Natalie and Jessica debate the pros and cons of exfoliation.", "Exfoliation is this step in your skincare routine that is taking off all the dead skin cells on your face.", "And the point of Exfoliating is to reveal brighter, healthier skin while reducing the size of your pores.", "In this week's episode, we'll be discussing the pros, cons, and what we think is the best way to exfoliate your skin.", "GrndAbs-so: Natalie and Jessica are back to debate the merits of exfoliation.", "This week, they are going mano a mano and will be debating the pros and cons of using exfoliating on your face.", "We will also do our wine pairing because we are your Somali A's and this week we're going with something a little bit more aggressive.", "We would like to recommend Santa Julia Winemakers' Reserve Mountain Blend.", "That is a Malbec and Cab Franc blend from 2016.", "It's just a bit of a middle of the road wine but super super tasty.", "Tokens that correspond to URLs, email addresses, @mentions, #hashtags, and excessively long tokens (>25 characters) are directly removed from the summaries.", "Each sentence in the summary is given a salience score that is the sum of the IDF scores of its words.", "A low score (<10) indicates the sentence contains few informative words, and it is thus removed from the summary.", "Finally, if, after sentence removal, the reference summary is too short or cannot be properly aligned to transcript chunks (Section 3), the instance is removed from the dataset.", "A summary is required to contain a minimum of 10 BPE tokens and have >2 shared bigrams with all of its grounding chunks.", "This process filters out a substantial amount of low-quality reference summaries, yielding 40,302 episodes in the training set.", "The Spotify dataset has a standard test set of 1,027 episodes, and 179 of them are set aside for human evaluation.", "Baselines.", "Our baselines consist of three of the best performing systems in the TREC 2020 competition on podcast summarization.", "These
systems were judged the best performing by both automatic metrics and human evaluation performed by NIST assessors.", "All systems make use of the BART-large model (Lewis et al., 2020).", "The model is tuned first on a news summarization dataset, i.e., CNN/DM or XSum, then fine-tuned on the podcast dataset.", "Due to the long length of the transcripts, Karlbom and Clifton (2020) describe a combined Longformer-BART model that replaces the BART attention layers with the attentions of Longformer (Beltagy et al., 2020); their system is named hk_uu_podcast1.", "Song et al. (2020) develop an extractive module to select segments from transcripts, then integrate the extractor with a BART abstractor to generate summaries (UCF_NLP2).", "Their baseline (UCF_NLP1) directly truncates the transcript to the first 1,024 tokens.", "Manakul and Gales (2020) develop a similar baseline (cued_speechUniv3) using the first 1,024 tokens.", "Further, they perform sentence filtering using a hierarchical attention model (cued_speechUniv1/2/4) and ensembles of models from different data shuffles and checkpoints (cued_speechUniv1/2).", "In this paper, our system is called GrndAbs, for generating grounded abstracts.", "It has 4 options: -to, -tn, -so, -sn, indicating whether the sliding window is defined in terms of tokens (-t) or sentences (-s), and whether the windows are overlapping (-o) or non-overlapping (-n).", "We obtain outputs from these competitive baselines and our system to examine both the successes and failures of these attempts.", "Experimental Settings.", "Our encoder-decoder model uses BART-large as the base model before fine-tuning it on the podcast dataset.", "We use the AdamW (Loshchilov and Hutter, 2017) optimizer, where the momentum parameters are set to 0.9 and 0.999.", "The regularizing coefficient λ is tuned on the validation set in the range of {0, 0.01, 0.1, 1}.", "For summary decoding, we use beam search with a beam size K=4 and a length penalty p=2.", "Table 2: Results on the standard test set containing 1,027 episodes (Run ID: R-1(%), R-2(%), R-L(%), BertS(%), BLEURT, SummL): cued_speechUniv1: 30.54, 11.25, 21.05, 84.17, -0.7434, 58.16; cued_speechUniv2: 30.52, 11.36, 21.16, 84.20, -0.7491, 56.93; cued_speechUniv3: 28.44, 9.55, 19.52, 83.77, -0.7897, 55.58; cued_speechUniv4: 29.00, 10.42, 19.95, 83.99, -0.7781, 51.75; UCF_NLP1: 30.09, 12.07, 21.75, 84.16, -0.7508, 57.35; UCF_NLP2: 30.44, 11.99, 21.67, 84.14, -0.7382, 57.85; hk_uu_podcast1: 29.02, 10.70, 20.66, 84.21, -0.7992, 44.63; GrndAbs-so: 25.42, 7.95, 16.93, 82.62, -0.8164, 80.44; GrndAbs-sn: 25.58, 8.27, 16.99, 82.64, -0.8220, 78.80; GrndAbs-to: 25.79, 8.38, 17.15, 82.67, -0.8028, 82.98; GrndAbs-tn: 25.79, 8.25, 17.20, 82.71, -0.8130, 79.90.", "Our sliding window, measured in terms of tokens or sentences, only contains whole sentences.", "We use the Byte-Pair Encoding (BPE) tokenizer with a vocabulary size V=50,265.", "For transcripts and reference summaries, we use the SpaCy tool to segment them into sentences (model en_core_web_lg 2.2.5).", "Example Summaries.", "In Table 1, we provide a direct comparison of system summaries.", "This podcast is hosted by Natalie and Jessica, who call themselves Skincare Sommeliers.
The episode is named The Great Exfoliation Debate.", "We find that grounded abstractive summaries (GrndAbs-*) have a higher level of specificity compared to summaries without grounding.", "Segments of grounded summaries are tied to specific transcript chunks.", "If a listener finds a summary segment interesting, they can tap to hear the selected summary segment in context.", "Our baselines are highly competitive.", "Their summaries tend to contain more generic content.", "The description provided by podcast creators is relatively short, and at times it does not directly summarize the episode.", "There are clear benefits in automatic summarization of podcasts, which can reduce the cognitive load and the time it takes for podcast creators to write the summary.", "Automatic Metrics.", "In Table 2, we report results on the standard test set containing 1,027 podcast episodes.", "The metrics include ROUGE (Lin, 2004) variants that compare system summaries with creator descriptions based on n-gram overlap.", "Further, we experiment with recently developed metrics: BertScore (Zhang et al., 2020) and BLEURT (Sellam et al., 2020), which draw on deep neural representations to evaluate generated text.", "Our approach does not outperform the baselines in ROUGE evaluation against creator descriptions.", "However, the gap is substantially reduced when more advanced metrics (BertScore and BLEURT) are considered.", "There are two possible explanations.", "First, grounded summaries are about 50% longer than plain abstractive summaries.", "Their average length is about 80 words per summary, yielding low precision scores.", "Second, the quality of creator descriptions can be poor.", "Jones et al. (2020) report that only 40% of such descriptions are of Good or Excellent quality, indicating that future work may consider creating high-quality ground-truth summaries.", "Among the four variants of our approach, we observe that their difference is not prominent.", "The token-based, non-overlapping windows (-tn) variant outperforms the others in terms of R-1 and R-L.", "This system is used in subsequent experiments and analyses.", "Human Evaluation.", "It is imperative to perform human evaluation, given that creator-provided descriptions are of poor quality and ground-truth summaries are nonexistent.", "We follow the TREC guidelines and ask human evaluators to assign each summary to one of four grades: Excellent, Good, Fair, and Poor.", "An excellent summary accurately conveys the most important content of the episode (topical content, genre, and participants).", "It should contain almost no redundant material, be coherent and comprehensible, and have no grammatical errors (Jones et al., 2020).", "We also asked the human evaluators to answer 8 yes/no questions regarding the quality of the summary, as Jones et al. (2020) suggested; those questions are shown in Table 5.", "Table 4: Average scores per human judgment of 179 testing summaries on 8 yes/no questions (System: Q1 People Names, Q2 People Add Info, Q3 Main Topics, Q4 Podcast Format, Q5 Title Context, Q6 Summ Redund, Q7 Good English, Q8 Start/End Points): creator_description: 60.08, 50.19, 80.81, 59.61, 57.00, 16.28, 88.76, 60.16; hk_uu_podcast1: 64.15, 47.29, 85.63, 57.62, 58.95, 10.85, 94.76, 70.35; UCF_NLP2: 67.38, 51.55, 87.02, 63.57, 62.52, 14.40, 95.15, 71.71; cued_speechUniv2: 69.12, 50.67, 87.98, 64.73, 63.62, 12.87, 94.93, 77.00; GrndAbs-tn: 75.15, 64.47, 89.73, 69.51, 66.15, 17.09, 94.55, 73.35.", "Q1: Does the summary include names of the main people (hosts, guests, characters) involved or mentioned in the podcast?", "Q2: Does the summary give any additional information about the people mentioned
(such as their job titles, biographies, personal background, etc.)?", "Q3: Does the summary include the main topic(s) of the podcast?", "Q4: Does the summary tell you anything about the format of the podcast; e.g., whether it's an interview, a chat between friends, a monologue, etc.?", "Q5: Does the summary give you more context on the title of the podcast?", "Q6: Does the summary contain redundant information?", "Q7: Is the summary written in good English?", "Q8: Are the start and end of the summary good sentence and paragraph start and end points?", "We conduct these experiments on the test set containing 179 podcast episodes, as Jones et al. (2020) did, where each summary is evaluated by five Master workers recruited on Amazon Mechanical Turk.", "As shown in Table 3, we find that humans prefer the lengthier grounded abstractive summaries, which substantially outperform all baselines.", "25% of grounded abstractive summaries are rated as Excellent, and 76% of them receive a rating of either Excellent or Good.", "Table 4 shows the results of the 8 questions.", "Compared to the previous best systems, our grounded abstractive summaries show a significant performance gain in retrieving important information, including People Names (+6.03%), People Additional Information (+12.92%), Main Topics (+1.75%), Podcast Format (+4.78%), and Title-related Context (+2.47%), with only slight redundancy.", "We are curious to know how well our system performs on predicting grounding chunks: G(x, y_{<j}).", "In this study, we assume switch points are known and report results on the validation set.", "Our decoder starts from the first transcript chunk and predicts the next chunk at each switch point.", "We find that it achieves an accuracy of 86.02% on identifying ground-truth chunks.", "Next, we examine the performance of switch point prediction.", "On the validation set, we observe that the predictor achieves 98.75%, 84.95% and 91.33%, respectively, for precision, recall and F-score.", "Moreover, each summary has an average of 3.67 switch points.", "A majority of the time (92.42%), the model decides to use the current chunk to continue to decode the next summary segment.", "A small percentage of the time (7.58%), the model decides to switch to a new grounding chunk.", "We find 1.24 unique grounding chunks per summary.", "These statistics suggest that identifying grounding chunks is crucial for summary generation.", "Grounded Summaries.", "In Table 7, we measure the percentage of summary n-grams that appear in the transcripts (for all baselines) or in the grounding chunks (for our approach).", "While the distributions of unigrams are largely similar, we observe that grounded abstractive summaries tend to reuse more bigrams and trigrams of their grounding chunks.", "Moreover, for trigrams that are found in the grounding chunks, we find that 70% of them tend to appear at the beginning, i.e., the front half, of the chunks.", "These results suggest that the grounding chunks identified by our approach can provide effective support for summary generation.", "We manually analyze a large number of transcripts and their creator descriptions to identify the challenging points of podcast summarization in Table 8: Substantial lexical mismatch exists between the spoken and written forms of descriptions.", "Speech recognition errors are abundant.", "E.g., by Hans Christian Andersen has been misrecognized into
buy homes Christian Andersen.", "The creator descriptions are sometimes highly abstractive, do not always summarize the episode, and contain teasers.", "E.g., A male perspective podcast to start a conversation... and Ever wondered how Ed Sheeran became famous.", "The transcripts contain advertising inserts, e.g., I need to tell you about our sponsor..., and the same description is used for different episodes, which causes confusion to the model, e.g., The goal of Daily Fortnite is to build a community...", "In this paper, we investigate podcast summarization to produce textual summaries for podcast episodes that help listeners understand why they might want to play those podcasts.", "We present a new kind of podcast summary where spans of summary text are tethered to the original audio to allow users to interpret system-generated abstracts in context.", "The authors would like to thank all anonymous reviewers for their insightful comments, which helped improve this paper.", "This research was supported in part by the National Science Foundation (NSF) Grant #2143792." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "abstain", "objective", "objective", "objective", "objective", "method", "objective", "objective", "objective", "objective", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "other", "other" ]
[ "We propose a novel large-scale referring expression recognition dataset, Refer360, consisting of 17,137 instruction sequences and ground-truth actions for completing these instructions in 360 scenes.", "Refer360 differs from existing related datasets in three ways.", "First, we propose a more realistic scenario where instructors and the followers have partial, yet dynamic, views of the scene followers continuously modify their field-of-view (FoV) while interpreting instructions that specify a final target location.", "Second, instructions to find the target location consist of multiple steps for followers who will start at random FoVs.", "As a result, intermediate instructions are strongly grounded in object references and followers must identify intermediate FoVs to find the final target location correctly.", "Third, the target locations are neither restricted to predefined objects nor chosen by annotators; instead, they are distributed randomly across scenes.", "This point anywhere approach leads to more linguistically complex instructions, as shown in our analyses.", "Our examination of the dataset shows that Refer360 manifests linguistically rich phenomena in a language grounding task that poses novel challenges for computational modeling of language, vision, and navigation.", "Imagine a scenario in which you are asked to retrieve medication from a bathroom.", "First, face the sink, then find the second drawer in the cabinet to your left.", "The pills should be inside that drawer behind the", "toothbrush. Interpreting instruction sequences in order to locate targets in novel environments is challenging for AI systems (e.g. personal robots and self-driving cars).", "First, the system needs to ground the instructions into visual perception (Anderson et al., 2018b; Hu et al., look towards the door leading outside the cafe.", "2019).", "This often requires identification of the mentioned object (Plummer et al., 2015) through physical relationships with surrounding objects (Hu et al., 2017b; Cirik et al., 2018a).", "Second, since human visual perception has limited field-of-view, instructions are often sequential: First, the correct FoV should be identified before searching for the final target.", "In many situations, the target location is not visually unique (e.g. in the middle of a plain wall), and several intermediate instructions are required.", "To study these challenges, we introduce a novel dataset, named Refer360 1 , for the task of localizing a target in 360 scenes given a sequence of instructions.", "Figure 1 presents an example scenario 1 The annotations, learning simulator, and annotation setup are publicly available for further research https: //github.com/volkancirik/refer360 .", "An example scene from the Refer360 dataset.", "Note that both annotators and systems cannot observe the shaded area.", "They only observe a partial field of view which can be updated dynamically.", "(b) An example scene from Touchdown-SDR where the bullseye is pointing to the target location.", "Instructions for this instance are a black doorway with red brick to the right of it, and green brick to the left of it. 
it has a light just above the doorway, and on that light is where you will find touchdown.", "For this scenario, finding the target location requires first finding the door leading outside, then looking at the coffee pot, and finally finding the trash can, which is the nearest object to the target.", "Here, instructions are given from the perspective of a partial field of view (FoV) of the scene, and these FoVs can be dynamically changed.", "Thus, the correct interpretation of the sequence of instructions will require reasoning about what is currently visible in the FoV (e.g., grounding of objects) but also about what is not visible yet.", "These scenarios will often require adjusting the FoV based on intermediate instructions.", "An important feature of the Refer360 dataset is that the target location is not an object; instead, it can be any point in the scene, which makes the grounding task more challenging since it is harder to describe a location when we cannot readily refer to it with the name of an object.", "Refer360 consists of 17,137 instruction sequences with ground-truth actions to complete these instructions in 360° scenes.", "Refer360 has some unique characteristics which differentiate it from prior work.", "First, Refer360 allows the scene to be viewed through a partial FoV that can be dynamically changed as instructions are followed.", "This is in contrast with existing 360° scene-based datasets such as Touchdown-SDR (Chen et al., 2018) and 2D image-based referring expression datasets (Kazemzadeh et al., 2014; Hu et al., 2016; Mao et al., 2016), where the visual input is either fixed, corresponding to a holistic, oracle-like view, or consists of fixed, cardinal FoVs.", "The partial and dynamic FoV in Refer360 poses new challenges for language grounding (see Figures 2a, 2b, and 2c for an illustrative comparison).", "For instance, the mentioned objects may not be visible in the current FoV, and language may refer to the FoV itself.", "Further, since our annotators generate instructions while observing a partial and dynamic FoV, and do so for a follower whose first FoV will be initially located at random, the instruction following task is strongly sequential.", "To interpret the sequence of instructions to find the target correctly, a follower must reason about the sequence of FoVs referenced by the instructor.", "Second, unlike other datasets, the target locations in Refer360 are randomly distributed and thus may occur anywhere, not just on predetermined objects.", "As a result, target locations are less prone to bias (Devlin et al., 2015; Agrawal et al., 2016; Jabri et al., 2016; Goyal et al., 2016; Cirik et al., 2018b).", "These random locations lead to more linguistically complex instructions, as shown in our analyses: when annotators instead choose the target location, they are likely to be biased towards locations that are more easily described (e.g. on top of a named object).", "Table 1 shows a comparison of similar datasets.", "In the following section, we motivate the Refer360 dataset in more detail.", "The vision behind Refer360 is to build systems that perform localization of any point in 3D space, bringing us closer to human-like reasoning.", "This is an important milestone towards better collaboration between AI systems (e.g.
personal robots) and humans, allowing them to act within the same space.", "It might also pave the way for AI agents interacting with virtual worlds.", "The Refer360 dataset was designed to address three technical challenges towards this vision.", "First, the learning environments we create need to reflect the characteristics of human perception of 3D space.", "In such an environment, the agent only observes a partial FoV of the scene.", "This requires adjusting the FoV in accordance with instructions so that the current view and the instructions are aligned.", "The agent's FoV can be changed in a continuous manner, moving smoothly left, right, up, and down.", "This is analogous to a real-world robot performing motor actions to change its camera position, or a human changing their head's pitch and yaw.", "Further, real scenes are 3D, but the FoV is represented in 2D in our task.", "Thus, interpreting some instructions will require inferences about depth.", "Second, the paradigm of 360° scenes with a partial FoV will almost always necessitate instructions that consist of multiple intermediate steps.", "As the first intermediate step, the follower and instructor need to find a common referential FoV.", "Then, the instructor can continue giving guidance towards the target location, often by identifying objects that are physically related to the target location.", "This multistep process can serve as a natural benchmark for measuring whether systems achieve localization through a human-like process of progressively getting closer to the target location by interpreting intermediate steps.", "In other words, this setup may help researchers make sure that our systems are arriving at the referred location for the right reasons.", "Third, since any point in the scene could be of interest, instructions will be more complex: many points in the scene will not correspond to easily named objects, and thus, when such points are allowed as targets, more sophisticated instructions will be required to unambiguously refer to them.", "The instructor may rely on descriptions of physical relationships with the closest easily named locations in the scene (Nagaraja et al., 2016; Hu et al., 2017b; Cirik et al., 2018b).", "For instance, in Figure 2a, the target location is on the side of a trash bin, which is difficult to uniquely describe with a single word or a short phrase.", "In this case, the instructor may use the distance to the floor or to another object in the scene in order to describe the exact location of the target.", "This will additionally introduce descriptions of degree (e.g. 'slightly above', 'a few inches away from') rather than more discrete spatial relationships (e.g. 'on top of the desk').", "Referring expression recognition.", "Grounding a short phrase or a sentence into a visual modality such as video (Khoreva et al., 2018; Anayurt et al., 2019) or imagery (Kong et al., 2014; Plummer et al., 2015, 2018; Yu et al., 2018a) is a well-studied problem in intelligent user interfaces (Chai et al., 2004), human-robot interaction (Fang et al., 2012; Chai et al., 2014; Williams et al., 2016), and situated dialogue (Kennington and Schlangen, 2017).", "Kazemzadeh et al. (2014), Hu et al. (2017a), and Mao et al. (2016) introduce two benchmark datasets for real-world 2D images.", "Nagaraja et al. (2016) propose a model where the target and supporting objects (i.e. objects that are mentioned in order to disambiguate the target object) are identified and scored jointly.", "Hu et al.
"Similarly, Cirik et al. (2018a) propose a type of neural modular network (Andreas et al., 2016) where the grounding of a referring expression depends on the parse tree of the input expression, learning to ground an unconstrained number of supporting objects.", "360° Scenes.", "Although 360° scenes are well studied in the computer vision domain (Xiao et al., 2012; Su et al., 2016; Wijmans and Furukawa, 2017; Yang and Zhang, 2016; Xu et al., 2018; Yang et al., 2018; Yu et al., 2018b), few studies explore the challenges of 360° scenes in the context of language grounding.", "Chou et al. (2018) introduce a dataset where 360° videos are narrated.", "They address the task of predicting the field of view for a given narration.", "Anderson et al. (2018b) introduce the vision-and-language navigation task for simulated indoor environments, where an agent is placed at a location in a house and follows instructions to go to a target location.", "Here the agent observes a discretized view of the current location (i.e., the 360° scene is split into a fixed number of fields of view).", "The most related work to Refer360 is Touchdown (Chen et al., 2018), which introduces two tasks: a vision-and-language navigation task and a spatial description resolution (SDR) task (i.e., a referring expression recognition task for a simulated outdoor environment).", "In contrast with Touchdown, in our setup instructors, followers, and learning systems observe a partial FoV of the scene, but they can change the FoV continuously to explore the scene.", "This approach yields instructions with stronger sequential dependencies and with stronger reference to the FoV itself.", "We demonstrate some of these differences in our analysis in Section 5.", "Concurrent work studies visual question answering (Chou et al., 2020a) and object detection (Chou et al., 2020b) for 360° scenes.", "Another concurrent study (Qi et al., 2020) combines vision-and-language navigation and referring expression recognition into one task where the system is asked to localize the referred object after navigating to another point in real images of buildings.", "In this section, we describe the details of the Refer360 dataset, a vision-and-language benchmark for localizing a target point in a panoramic image.", "Refer360 consists of 17,137 instruction sequences that describe randomly distributed target locations in 2,000 panoramic scenes from the SUN360 (Xiao et al., 2012) dataset.", "We first explain the annotation procedure for collecting and validating the instruction sequences.", "Later, we discuss the statistics of the Refer360 dataset.", "Annotation of the Refer360 dataset was carried out in three stages on Amazon Mechanical Turk with two tasks, namely a description task and a finding task.", "First, we describe the two tasks in more detail.", "Description Task.", "Our main goal is to collect instructions for finding any point in a 360° image.", "Annotators started this task looking at the ceiling of the 360° image with a random yaw (we wanted to avoid introducing bias by beginning at the same position each time for each scene).", "We asked them to find the target location, which we mark with an icon of Waldo (https://en.wikipedia.org/wiki/Where%27s_Wally%3F).", "Target locations are chosen randomly; we discuss the details of this design choice in Section 4.2.", "The target can be at any longitude and can have a latitude within a range of 45 degrees from the top and bottom of the 360° image.",
"This restriction in latitude is made for two reasons: (1) visual distortions happen at extreme points, and (2) during the finding task, the starting point is the ceiling of the 360° image.", "Annotators were asked to give instructions for finding the target location using at least three instructions (see Figure 5 in the Appendix for a screenshot of the user interface we built for this task).", "Finding Task.", "We designed this task to verify the quality of the instruction sequences provided by annotators in the description task.", "We asked annotators to complete the instruction sequences sentence by sentence.", "The initial field of view of annotators always points at the ceiling of the 360° image with a random yaw.", "We asked annotators to change the FoV after each instruction so that the center of the FoV points to the location the intermediate instruction is describing.", "After moving the FoV to the correct position, annotators clicked a button to read the next instruction.", "We recorded the spherical coordinates of the center of the FoV after each instruction.", "As a result, our annotations include aligned intermediate steps that lead to the target location.", "After the final instruction, the annotators predicted the target location by changing the center of the FoV or clicking on the FoV.", "We collected and verified the quality of our data in three stages using the description and finding tasks.", "In the first stage, we sought a pool of annotators providing high-quality annotations.", "In the second, we aimed to collect a large number of annotations and verify their quality.", "In the third stage, we further verified instruction sequences that were not verified in the second stage.", "Stage I.", "In this stage, we asked annotators to complete the finding task for four different scenes.", "We wrote the instruction sequences for this stage's finding task ourselves to give annotators an example of instruction sequences describing a target location.", "Then, annotators completed the description task for four different scenes.", "A total of 256 annotators participated in this first stage.", "We manually inspected each instruction sequence provided by these annotators for the quality of its description of the target location and reduced the pool of annotators to 86.", "Stage II.", "In this stage, for each annotation session, we asked annotators first to find the target location for four different scenes and later to describe the target location four times for different scenes (annotators never observed their own instruction sequences while doing the finding tasks).", "We used the finding task to verify the quality of the instruction sequences.", "If an annotator predicted the target location within a radius of 11 degrees in spherical coordinates, which is roughly equal to the size of the Waldo icon we used, we counted that instance as verified.", "Stage III.", "After the second stage, we had some instructions for which the annotators could not find the target accurately.", "This could mean either that the instructions are unclear or that it is genuinely harder to find the target location with these instruction sequences.", "In the third stage, we ran another round of the finding tasks to verify these harder instruction sequences.", "After these three stages, we have a total of 17,137 instruction sequences in which at least one annotator was able to find the target location accurately.",
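The 11-degree verification criterion above can be made concrete with a short sketch. This is a minimal illustration, not the authors' released code; the function names and the (latitude, longitude) convention for the recorded spherical coordinates are our assumptions.

```python
import math

def angular_distance(lat1, lon1, lat2, lon2):
    """Great-circle angle in degrees between two points given as
    (latitude, longitude) pairs in degrees."""
    phi1, lam1, phi2, lam2 = map(math.radians, (lat1, lon1, lat2, lon2))
    # Spherical law of cosines; clamp to guard against floating-point drift.
    cos_angle = (math.sin(phi1) * math.sin(phi2)
                 + math.cos(phi1) * math.cos(phi2) * math.cos(lam1 - lam2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

def is_verified(pred, target, threshold_deg=11.0):
    """A finding-task prediction counts as verified when it falls within
    the threshold radius of the target (roughly the Waldo icon's size)."""
    return angular_distance(*pred, *target) <= threshold_deg

# A prediction 5 degrees of longitude away at the equator is verified ...
print(is_verified((0.0, 42.0), (0.0, 47.0)))   # True
# ... while one 20 degrees away is not.
print(is_verified((0.0, 42.0), (0.0, 62.0)))   # False
```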
"Statistics for data collection in these stages and the payment structure are in the Appendix.", "We split our presentation of dataset statistics into two parts: scene statistics and language statistics.", "Scene Statistics: To investigate the challenges in localizing a target location for both indoor and outdoor scenes, as well as for different kinds of indoor and outdoor scene categories, we use seven scene categories from the SUN360 (Xiao et al., 2012) dataset.", "We use a total of 2,000 scenes.", "Table 2 shows the distribution of scene categories that comprise the Refer360 dataset.", "We want to analyze the richness of the scenes in the Refer360 dataset and compare it with Touchdown-SDR.", "The domain of the scenes affects the instructions one needs to use to describe a target location.", "More specifically, when annotators give instructions, they use supporting objects as anchor points to help guide the attention of the follower.", "Thus, the availability of a rich set of objects is essential for describing the target location.", "Since the annotation of objects in 360° images is a laborious task in itself, we use an off-the-shelf object detection method (Anderson et al., 2018a) to annotate scenes with objects.", "We split 360° images into 12 different 2D images covering the 360° view (we fixed the confidence threshold for the detection of objects to 0.5 and the maximum number of objects to 20).", "This provides us with a proxy for analyzing the kinds of objects usually observed in the 360° images of Touchdown-SDR and Refer360.", "Table 3 shows the average number of objects and the perplexity of the distribution of detected objects per 360° scene for the Refer360 and Touchdown-SDR datasets (in the appendix, Figure 6 shows the most frequently detected objects for both datasets).", "As expected, the average number of detected objects in Touchdown-SDR scenes is higher than in Refer360 because all of its scenes depict outdoor settings from Google's StreetView API.", "However, this analysis shows that Refer360 has a much larger diversity of object types and therefore will likely have greater lexical diversity in its instructions.", "Language Statistics: Refer360 contains a total of 17,137 instruction sequences (8.57 per scene) describing target locations.", "Table 4 shows language statistics for Refer360 and other referring expression recognition datasets.", "Refer360 is bigger than Touchdown-SDR, yet smaller than other datasets.", "This is because annotating and validating 360° images is a more time-intensive and costly process than working with 2D images.", "Figure 3 shows the distribution of text length for the instructions.", "Compared to other referring expression recognition and image captioning datasets, Refer360 contains the longest instructions on average.", "This is a result of two differences from previous tasks.", "First, previous datasets use the entire scene as a single field of view.", "Thus, there is less need to describe how to find the target location sequentially.", "In Touchdown-SDR, the recognition system or human annotator needs to find an FoV that includes the target location.", "In Refer360, the finding task is carried out sequentially; thus, each instruction needs to be completed accurately in order to find the target location.", "Second, in Refer360, the target locations are randomly distributed in the scenes.", "As seen in Table 6, when the target location is randomly selected, it is on average further from other objects (we discuss this in more detail in Section 5.1).",
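The perplexity statistic used for Table 3 (and again for the target-location analysis in Section 5.1) can be computed directly from raw label counts. A minimal sketch follows; we assume perplexity here means the exponentiated Shannon entropy of the empirical label distribution, since the text does not spell out the formula.

```python
from collections import Counter
import math

def perplexity(labels):
    """Exponentiated Shannon entropy of the empirical distribution of
    labels; higher values mean the labels spread over a wider, more
    uniform set of categories."""
    counts = Counter(labels)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return math.exp(entropy)

# A scene dominated by one object type has low perplexity ...
print(perplexity(["car"] * 18 + ["tree", "sign"]))              # ~1.48
# ... while an even mix over five types has perplexity 5.
print(perplexity(["car", "tree", "sign", "bench", "dog"] * 4))  # 5.0
```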
"Dataset Splits: We use a train, validation, and test split strategy similar to the Room-to-Room dataset (Anderson et al., 2018b).", "We reserve a subset of images from each scene category for validation and test splits for unseen-scene evaluation (i.e., these scenes are not observed in the training split), in order to study the generalization capabilities of models.", "The remaining scenes are pooled together to form training, validation, and test splits for seen-scene evaluation.", "Table 5 shows statistics for the splits.", "Following previous studies, the ground-truth annotations for the test splits will not be released.", "Instead, we will provide an evaluation server where model predictions may be uploaded for scoring.", "We conduct four analyses of the Refer360 dataset.", "First, we investigate whether the random selection process for target locations can mitigate possible bias issues.", "Recent studies (Devlin et al., 2015; Agrawal et al., 2016; Jabri et al., 2016; Goyal et al., 2016) show that design decisions made when collecting annotations may introduce bias into datasets.", "High-capacity machine learning models can exploit these issues, which hinders meaningful progress towards real language understanding (Zhou et al., 2015; Cirik et al., 2018b).", "Second, we study whether each instruction in an instruction sequence is critical to finding the target location, or whether some instruction sequences are overcomplete.", "It may well be the case that, just by understanding the last instruction, one can easily locate the target.", "Third, we perform a qualitative analysis of Refer360 to characterize the types of linguistic reasoning required to find the target location accurately.", "Finally, we analyze the performance of the state of the art on Refer360.", "The selection method for the target location plays a crucial role in the kind of language one needs to use to describe that location.", "Earlier referring expression recognition datasets (Kazemzadeh et al., 2014; Hu et al., 2016; Mao et al., 2016; Strub et al., 2017) select target locations from object boxes annotated by humans.", "In Touchdown-SDR (Chen et al., 2018), annotators instead decide the location of the target rather than choosing from a pre-defined list of object boxes (in our initial iterations of data collection, we followed this procedure; however, we observed that in many cases annotators chose the most salient or unique object or region in the image; Figure 7 in the appendix compares the distribution of instruction sequence lengths for random and manual selection of targets).", "This could introduce a location bias into the dataset: if annotators get to select the target location, they may choose targets that are easy to describe, sometimes leading to trivial or uninteresting examples, and more broadly to artificially simple language overall.",
"For instance, if there is only one pink object in the scene, annotators usually preferred describing that region rather than some other obscure location in the scene.", "Instead of letting annotators decide where to place targets in the scene, we randomly picked a target location and asked them to describe how to find it.", "As a result, our instruction sequences are complex, as we show next.", "To measure the differences in instructions for randomly versus manually chosen targets, we compute three quantities.", "First, we compute the variety of objects that the target is located on, using the perplexity of object frequencies.", "Similarly, we also compute the variety of objects closest to the target.", "Since instructors use objects near the target location as anchor points, this is another useful metric.", "The higher the perplexity of both metrics, the harder it is to predict the target location using just the object type or the closest object.", "Third, we measure the average distance between the target location and the nearest object.", "The closer the target location is to another object, the easier it is to describe using the closest object as an anchor point.", "Table 6 shows statistics for target locations in Touchdown-SDR and Refer360.", "For both perplexity metrics, we observe that the target is located near or inside a wider variety of objects in Refer360.", "Also, on average, the target location is further away from other objects in Refer360.", "These statistics show that randomly choosing the target location helps us address a possible bias towards simple instructions and makes recognition more challenging.", "While collecting instructions, we asked annotators to describe the target location using at least three and at most five sentences.", "Table 8: Linguistic analysis of 100 randomly sampled examples from Refer360 (per phenomenon: count c, a second statistic, and an example): Coreference (96; 1.6; 'on the very upper left corner of the blue part of that window'); Comparison (15; 0.1; 'the smaller building to the right of the spire'); Sequencing (13; 0.1; 'go right just a smidge and then go up above'); Counting (30; 0.3; 'shaped like a football and has 3 silver legs'); Allocentric Spatial Mention (46; 0.6; 'find the shelves with books nearest to you'); Egocentric Spatial Mention (35; 0.5; 'waldo is sitting on the right side of the window'); Direction (92; 1.6; 'look at the knife on the wall to the left'); Temporal Condition (13; 0.1; 'turn right until you see a mirror on the wall'); 3D Understanding (22; 0.2; 'counter with the two bar stools sitting in front of it'); Inexact/Approximate Language (28; 0.2; 'in front of the white strip at the bottom slightly off center'); More than 2 Supporting Objects (47; 0.5; 'now look on the floor in between the table and the chair').", "It might be possible to find the target location using only the last instruction, which would make the first sentences unnecessary.", "Such redundancy makes it harder to study the core challenges of grounding instructions in visual perception and actions.",
"Thus, we conducted an ablation study with the same pool of annotators using 1K instructions from the dataset.", "Here we check whether Refer360 has strong dependencies between instructions.", "We ran two ablation studies to examine the necessity of using all instruction sentences.", "For the first study, we ran a finding task with the same pool of annotators in which we provided only the final instruction.", "For the second study, we similarly ran another finding task in which we provided only the penultimate and final instructions.", "We compare the average euclidean distance between the predicted locations and the target location, as well as the accuracy, i.e., the percentage of the time the distance between the predicted location and the target location is less than 11 degrees.", "Table 7 shows the results of our ablation analysis.", "Annotators' performance dropped significantly when they could only read the last instruction.", "They could find the target only 37% of the time.", "Using the penultimate instruction helped them considerably, raising their accuracy to 63%.", "The best performance is achieved when they observe the full instructions.", "These results show that each instruction is necessary for accurately finding the target location.", "Before designing a system to address a language-related task, it is important to understand the different kinds of linguistic phenomena observed in the task.", "We follow the procedure described in Touchdown-SDR (Chen et al., 2018) and add a few novel phenomena, including 3D understanding, inexact language, and the use of more than two supporting objects.", "Table 8 shows the results of our analysis of 100 randomly sampled instances.", "Refer360 requires reasoning over a rich set of linguistic phenomena, including the resolution of coreference chains, counting objects, and a rich set of spatial language phenomena such as mentions of multiple supporting objects and 3D scene understanding.", "Our analyses in the previous subsections suggest that Refer360 poses several challenges.", "In Section 5.1, we show that since the target locations are randomly chosen, it is harder to exploit a possible location bias.", "In Section 5.2, we show that it is essential to model the sequential nature of the instructions.", "Section 5.3 shows that a wide range of interesting linguistic phenomena is observed in Refer360.", "We want to verify these claims by training the state-of-the-art model and measuring its performance on our Refer360 dataset.", "We use the same experimental setup as Touchdown-SDR, using the scenes provided in concurrent work (Mehta et al., 2020), where we slice the 360° scene into 8 FoVs covering the scene.", "We pass each of these FoVs to a pre-trained model (He et al., 2016) and extract features from the fourth-to-last layer, before classification, to get a feature map representation of the FoVs.", "We concatenate the 8 FoV slices into a single tensor to represent the 360° scene.", "We use the LingUNet model (Chen et al., 2018; Misra et al., 2018; Blukis et al., 2018), which achieves state-of-the-art results on the Touchdown-SDR dataset.", "LingUNet is an image-to-image encoder-decoder model in which language and image representations are fused to predict a probability distribution over the input image.", "Instructions are fed to a bi-directional Long Short-Term Memory (LSTM) recurrent neural network to induce a language representation.", "To induce fused image-text representations, the input image tensor is passed to a convolutional neural network conditioned on the text representations.", "The fused representation is then fed to deconvolution layers to predict the location of the target.",
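The visual pipeline described above can be sketched as follows. This is an illustrative approximation rather than the released implementation: we use ResNet-18 for concreteness (the text does not specify the depth), and the truncation point, input resolution, and torchvision API usage are our assumptions.

```python
import torch
import torchvision.models as models

# Truncated ResNet: keep the convolutional trunk and drop the pooling and
# classification layers so each FoV slice yields a spatial feature map.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
trunk = torch.nn.Sequential(*list(resnet.children())[:-2]).eval()

def scene_features(fov_slices):
    """fov_slices: tensor of shape (8, 3, H, W) holding the 8 FoV crops
    that cover the 360 scene. Returns one tensor for the whole scene,
    with the slice feature maps concatenated along the width axis."""
    with torch.no_grad():
        feats = trunk(fov_slices)           # (8, C, h, w)
    return torch.cat(list(feats), dim=-1)   # (C, h, 8 * w)

dummy = torch.randn(8, 3, 224, 224)         # stand-in for real crops
print(scene_features(dummy).shape)          # torch.Size([512, 7, 56])
```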
"We use the same accuracy and distance metrics described in Section 5.2.", "As we can see in Table 9, LingUNet performs significantly worse on Refer360 (we used the publicly available code provided by the authors to run the experiments; we could not replicate the exact numbers reported in the paper, yet we use exactly the same setup for both Refer360 and Touchdown-SDR for a fair comparison).", "This might be due to the differences we highlighted in earlier sections.", "First and foremost, instructions must be completed sequentially.", "However, LingUNet does not model the sequential nature of the task for Refer360; rather, it uses the whole instruction sequence and an oracle view of the 360° scene.", "Second, the scenes in Touchdown-SDR are from a single domain, but in Refer360 we have a richer set of scenes, both indoor and outdoor.", "We designed Refer360 to study 3D spatial language understanding for real scenes.", "We collected a fine-grained set of annotations that supports study at many levels of language grounding.", "Refer360 is a versatile dataset and enables investigation along three axes.", "Language: Refer360 enables modeling tasks that study a single instruction, multiple instructions, or interactive language where the next instruction is revealed only after reaching an intermediate milestone.", "Vision: Refer360 enables modeling tasks that try to predict targets at different granularities: at the object level, if trying to identify the closest object to the target; at the region level, in a style similar to Touchdown-SDR; and finally, at the pixel level.", "Action: Refer360 enables modeling tasks where the action space is static, with the whole 360° image given upfront; where the action space consists of a sequence of discrete choices between fixed views; and where the action space is continuous, consisting of angles of rotation.", "In our experiments, we presented one of these scenarios (single instruction, static, and pixel-level) since it was the closest to the pre-existing Touchdown-SDR system.", "However, one can also study a much larger number of scenarios and modeling tasks using Refer360.", "We are thankful to the anonymous ACL conference reviewers for providing valuable feedback.", "We thank the members of the MultiComp Lab at CMU and the Berg Lab at UCSD for useful discussions.", "We thank Howard Chen for helping us replicate their experiments.", "This material is partially supported by Siemens and the National Science Foundation.", "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of Siemens or the National Science Foundation, and no official endorsement should be inferred." ]
[ "objective", "abstain", "objective", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "objective", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "method", "objective", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other", "other", "other" ]
[ "Automatic personalized corrective feedback can help language learners from different backgrounds better acquire a new language.", "This paper introduces a learner English dataset in which learner errors are accompanied by information about possible error sources.", "This dataset contains manually annotated error causes for learner writing errors.", "These causes tie learner mistakes to structures from their first languages, when the rules in English and in the first language diverge.", "This new dataset will enable second language acquisition researchers to computationally analyze a large quantity of learner errors that are related to language transfer from the learners' first language.", "The dataset can also be applied in personalizing grammatical error correction systems according to the learners' first language and in providing feedback that is informed by the cause of an error.", "English has become an international language.", "It is the lingua-franca that unites native speakers of other languages around the world (Lysandrou and Lysandrou, 2003).", "For that reason, it is not hard to believe that the teaching of English as a Second Language 1 has caught a lot of attention from the research community (Caine, 2008).", "Over the years, computational linguistics researchers have collected corpora containing text written by language learners.", "These corpora have made possible several advances in language teaching, such as automatic writing assessment (Rahimi et al., 2017; Yannakoudakis et al., 2011) and automatic error detection and correction (Chollampatt et al., 2016; Nadejde and Tetreault, 2019; Omelianchuk et al., 2020).", "1 Throughout this manuscript, we use the term second language to refer to any additional language beyond the mother tongue, whether the speaker is in a second or foreign language learning context.", "Although learner corpora are used to model grammatical error correction systems, they are not as often employed in the enhancement of learner feedback.", "Language learners benefit from direct corrective feedback (Sheen, 2007).", "Moreover, feedback that makes them reflect upon their errors and distinguish a cause for their mistakes correlates to increased performance (Demmans Epp and McCalla, 2011; Sheen, 2007; Shintani and Ellis, 2013; Karim and Nassaji, 2020).", "In this paper, we introduce a learner English dataset enhanced with error cause information and concrete examples of learner errors that relate to the learners' first language.", "It has the potential to help create computational models that provide personalized feedback to English language learners based on the learners' native languages.", "This new dataset can be accessed by following the instructions described in our research group's repository 2 .", "The dataset presented in this paper contains supplementary explanations for errors made by Chinese native speakers when writing in English.", "Chinese learners represent a growing share of the English as a Second Language market.", "A nationwide language survey from the Chinese government reports that at the beginning of 2001 at least one third of China's population was learning a new language and out of those, 93% were learning English (Wei and Su, 2012).", "These numbers have only seemed to increase in recent years.", "The latest survey of international students in the US that was conducted by the Institute of International Education (2020) shows that 35% of these students come from China.", "With that in mind, it is reasonable to say that this 
"One computational task that can benefit from the contrast between the first language (L1) and the second language (L2) is Grammatical Error Correction (GEC).", "In this task, the objective is to find and correct grammatical errors in learner text (Ng et al., 2013, 2014; Bryant et al., 2019).", "Since the GEC task was introduced in 2013, many types of grammatical errors have been added.", "The BEA-2019 Shared Task upgraded the task's error pool by adding new test sets containing essays written by learners from a more diverse set of nationalities.", "This update is meaningful as it exposes GEC models to a more general set of error types.", "In the previous tasks, the essays analyzed were written by South-East Asian students, and due to that, the distribution of grammatical error types in the dataset was skewed towards that group's most common mistakes (Bryant et al., 2019).", "Grammatical error correction research shows that GEC systems benefit from L1-specific learner data.", "Rozovskaya and Roth (2011) used L1-specific learner data to adapt a Naive Bayes GEC system.", "They applied priors extracted from L1-specific learner error distributions to improve the correction of preposition replacement errors.", "Chollampatt et al. (2016) used L1-specific data from Russian, Spanish, and Chinese learners to adapt a general GEC model.", "The resulting adapted models outperformed their general counterpart.", "Nadejde and Tetreault (2019) expand on this topic by adapting general GEC models to L1-specific and proficiency-specific learner data.", "Their experimental setup covered twelve different L1s and five proficiency levels.", "Both the L1 and proficiency adaptations outperformed the baseline, and the models which achieved the best performance were the ones adapted to both features at the same time.", "Direct corrective feedback, such as grammatical error correction, helps language learners improve their writing proficiency (Liaqat et al., 2020; Sheen, 2007).", "In addition, feedback that contrasts erroneous utterances with correct ones facilitates the acquisition of accurate language structures.", "This facilitation occurs both when the feedback is applied to L1-transfer errors and to non-transfer errors (Tomasello and Herron, 1989).", "Fine-tuning error feedback by contrasting the L1 and L2 has been shown to increase learners' language understanding and awareness (Kupferberg, 1999; Han, 2001).", "Advances in learner data annotation foster language transfer research by providing details that can be used to inform the contrast between learners' L1s and L2s and possibly further explain incorrect learner utterances.", "Highlighting this contrast is beneficial to learners as it has the potential to increase their metalinguistic awareness.", "That is, it can improve the learners' capacity to think about language as an object.", "It supports their ability to recognise the mismatch between their L1 and L2, as well as their ability to refrain from incorrectly using L1 rules in L2 utterances (Wanderley and Demmans Epp, 2020).", "Considering the importance of feedback for learners, Nagata (2019) introduced the task of feedback comment generation.", "In this task, the objective is to automatically generate feedback for learner essays.", "Along with this new task, the author introduced a dataset that contains learner essays and their respective annotated feedback.", "The annotation available in this new dataset contains feedback regarding preposition usage errors and text organization.",
"It also contains annotation samples in which the feedback praises the learners' writing.", "While our annotation procedure focused on annotating Chinese L1 learner errors, with a special focus on whether those errors were related to negative language transfer, our datasets may complement the one described by Nagata (2019), as, ultimately, both efforts aim to provide more personalized feedback to language learners.", "The differences between L1s and English can provide valuable features that help identify learners' L1s.", "Information about learner errors and their association with the learners' L1s can be useful in tasks such as native language identification.", "This task takes advantage of latent signals in non-native written data to identify the authors' L1s (Tetreault et al., 2013).", "Wong and Dras (2009) apply the contrastive analysis hypothesis (Lado, 1957), which correlates the learner's more common errors with the divergences between the L1 and L2, in a native language identification task.", "They analyzed three types of syntactic errors and found evidence that the contrastive analysis hypothesis can aid in L1 detection.", "The distribution of learner errors alone can also be employed in native language identification.", "Flanagan et al. (2015) showed that writing error patterns performed well as features in the prediction of learners' native languages.", "The correlation between L1s and writing error patterns happens because language learners, sometimes unknowingly, use certain strategies when they are learning how to communicate in a new language.", "One of those strategies is called language transfer.", "Language transfer, or cross-linguistic effects, is a subject that has been studied since 1957, when Robert Lado defined the phenomenon and its effects on second language acquisition (Lado, 1957).", "According to Lado, second language learners rely on their first languages when forming utterances in the second language.", "They tend to transfer morphological, syntactical, and semantic paradigms that they are accustomed to from their L1 when attempting to communicate in the L2.", "When learners transfer patterns from their L1 and those patterns are not valid in the L2, the result is negative language transfer.", "Since Lado's book was published, language transfer evidence has consistently been reported by language teachers, linguists, and second language acquisition researchers (Swan and Smith, 2001).", "This body of evidence supports the theory that learners' L1s influence their L2 learning.", "English learner data is amply available online, especially due to endeavours like the aforementioned native language identification and grammatical error correction tasks.", "However, it is considerably more difficult to find learner data that highlights the differences between the learners' L1s and English, and how these differences influence learners' mistakes.", "Learner English lacks large and accessible corpora like the MERLIN corpus, a dataset of Italian, German, and Czech learner essays in which errors are annotated with several characteristics of learner language and their potential causes (Boyd et al., 2014).", "This corpus contains features derived from sources such as language teachers, reference textbooks, and second language acquisition research.", "Some of these features (e.g., capitalization errors by German native speakers and negation errors in Czech) can be associated with the learner's L1 (Boyd et al., 2014).",
"There have been efforts to enhance English learner data.", "Meaningful work that provides syntactic analyses for learner English was introduced by Berzak et al. (2016); they created a manually annotated syntactic treebank for learner English.", "The Treebank of Learner English they created aims to facilitate second language acquisition research and research on the processing of ungrammatical language.", "It contains part-of-speech tags and dependency parse trees for erroneous learner English sentences, as well as the same features for their corrected counterparts.", "Although computational tasks and previous research on learner English have shared several learner datasets, these datasets do not contain information about linguistic phenomena such as negative language transfer.", "It is well known that learner error patterns and distinctions between the L1 and English can aid both computational tasks and language learning, e.g., Nadejde and Tetreault (2019); Flanagan et al. (2015); Karim and Nassaji (2020).", "In the present paper, we introduce an enhanced learner English dataset, manually annotated with error cause features that highlight the differences between English and the learners' L1, Chinese.", "The goal of this dataset is to inform learner error feedback with metalinguistic details that can aid learning and to support computational linguistics tasks that take into account native language influence on learner English.", "The negative language transfer annotation proposed in this paper builds on the collection of error annotated learner essays described by Yannakoudakis et al. (2011).", "These essays were written by English as a Second Language learners while taking the First Certificate in English (FCE) test, an upper-intermediate level English certification.", "The dataset contains essays written between the years 2000 and 2001 by 1244 distinct learners.", "Each essay in the dataset contains the answers to two FCE writing tasks.", "There is one essay script per learner, amounting to 1244 essays in the dataset.", "Each script has on average 409 words (SD = 96).", "In total, the essays contain more than 500K words.", "Each essay in the FCE dataset was manually annotated with the learners' errors.", "These errors are categorized with error types that follow the error coding described by Nicholls (2003).", "Most errors in the dataset are also accompanied by corrections suggested by the annotators.", "The few errors that are not accompanied by corrections are discussed further below (Table 1, referenced below, pairs incorrect utterances with their corrected counterparts and a negative language transfer label).", "Along with the error annotation, the dataset includes metadata such as the learners' L1, age range, essay score, and overall exam score.", "Sixteen different L1s are represented in the FCE dataset.", "There are 66 essays written by 66 distinct Chinese native speakers in the FCE dataset.", "These essays amount to a total of 30K words.", "Each essay contains on average 468 words (SD = 101).", "We enhanced the essays written by Chinese native speakers in the FCE dataset by adding information that associates the learners' L1 rules with the annotated writing errors.", "Each error in this subset of FCE essays is classified as being related to language transfer or not.", "For an error to be categorized as negative language transfer, there has to be concrete evidence that English and Chinese rules diverge for that specific sentence structure.", "The categorization of an error as negative language transfer is an indicator that the error was the learner's attempt, conscious or not, to apply one or more L1 rules while writing in English.",
"Along with the binary negative language transfer classification, each error in this dataset is annotated with a possible reason for its occurrence.", "Whether that reason is related to language transfer or not, all errors are accompanied by a short sentence describing one of their possible causes.", "Table 1 presents examples of learner errors, their negative language transfer label, and possible error causes.", "The FCE dataset augmented with error cause annotations as described above is complemented by a new learner English dataset.", "This dataset catalogues the error cause categories and provides more substantial descriptions for each error cause, as well as exemplar sentences in English and in the learner's L1 that highlight the different language rules possibly related to the mistake.", "The error cause categories used in this dataset are the same as the ones used in the FCE dataset error cause annotations.", "Maintaining this link means that, if an error cause appears in the annotated FCE data, its fuller description and exemplar sentences can be looked up in the new dataset.", "Table 3: Descriptive statistics for the negative language transfer dataset (accompanied by corrections / not accompanied by corrections / total): negative language transfer errors, 1797 / 94 / 1891; not negative language transfer errors, 1276 / 113 / 1389; spelling errors, 292 / 0 / 292; omitted errors, 12 / 0 / 12; combined, 3377 / 207 / 3584.", "Table 4: Distribution of negative language transfer errors across the most frequent error categories (error type: description, total, negative language transfer, not negative language transfer): RP: replace punctuation, 336, 228 (67.86%), 108 (32.14%); TV: incorrect tense of verb, 267, 185 (69.29%), 82 (30.71%); RV: replace verb, 230, 81 (35.22%), 149 (64.78%); MD: missing determiner, 209, 206 (98.56%), 3 (1.44%); RT: replace preposition, 209, 118 (56.46%), 91 (43.54%).", "Table 2 provides error cause exemplars and their respective descriptions in the new dataset.", "In total, 269 possible error causes have been identified for the errors made by Chinese native speakers.", "Each possible error cause in the dataset occurs on average 11 times (SD = 26); 110 of the error causes were found only once.", "The most common negative language transfer error cause was 'Chinese uses commas to mark the end of a complete thought'.", "This error cause occurs 270 times and refers to the disparity in punctuation usage patterns between English and Chinese, an example of negative language transfer.", "The most frequent non-negative language transfer error cause in the dataset is 'Overcorrection', found 186 times.", "This possible error cause indicates that learners may have used known English patterns where they were not necessary, in a failed attempt to conform to English grammatical rules.", "Table 3 presents the statistics of the negative language transfer dataset.", "There are 3584 errors in the Chinese L1 dataset.", "Of those errors, 52.76% are tagged as negative language transfer and 38.76% are tagged as non-transfer errors.", "The remaining 8.48% were left unlabelled in the dataset for one of two reasons: they were spelling errors, or they were omitted because, for example, the proposed correction was not enough to amend the error, or the error was tagged as incorrect because of an English variety divergence (e.g., the learner sentence was correct according to American English rules but not according to British English rules).", "Among the learner errors that received a negative language transfer annotation, it is important to make a distinction between errors that are accompanied by corrections and errors that are not.",
"The FCE dataset annotation scheme allowed annotators to highlight errors by enclosing them in <i> and </i> tags.", "It also instructed that the suggested corrections for those errors be enclosed in <c> and </c> tags.", "In some situations, the FCE annotators were unsure about the appropriate correction for an error and, hence, did not suggest edits (Bryant, 2019).", "In these situations, the annotators simply highlighted the errors using <NS> and </NS> tags.", "Although these errors are annotated with negative language transfer and error cause information in our dataset, they are kept separate from the other errors because they contain no information about error correction.", "There are 207 errors made by Chinese native speakers that are not accompanied by edits in the FCE dataset.", "Of those, 94 are related to negative language transfer and 113 are not.", "Table 4 shows negative language transfer statistics across the most common error types in the dataset.", "By investigating these types, it is possible to detect recognizable patterns in the language that Chinese learners of English use.", "One of the most problematic grammatical structures for Chinese native speakers writing in English is the placement of determiners before noun phrases.", "As the Chinese language does not have determiners, Chinese learners have trouble deciding when to use determiners and when to refrain from using them in their writing (Han et al., 2006).", "This fact is reflected in the proportion of missing determiner (MD) errors that are labelled as negative language transfer in the dataset.", "Out of the 209 MD errors, 206 (98.56%) are labelled as transfer related errors.", "An example of a non-negative transfer MD error is one where the learner omits a determiner which specifies the subject, for instance 'I want to ask for [my] money back', where the word my is the missing possessive determiner.", "Generally speaking, in Chinese, the word my (Pinyin: wo de) is also used in formal settings.", "However, it can be omitted to shorten sentences in informal settings.", "Therefore, this error is not classified as a negative transfer error.", "On the other hand, there are error types in the dataset that are rarely associated with negative language transfer.", "Errors involving the unnecessary usage of determiners, for example, are not related to negative language transfer.", "They are a result of learners overusing an L2 grammatical structure by placing it where it is not needed (Smith, 1982).", "Replacement errors, i.e., errors in which the erroneous word needs to be replaced by another word from the same category, tend to be distributed more evenly between negative language transfer and not negative language transfer.", "These errors are labelled as not negative language transfer when the erroneous structure used by the learner has no parallel in Chinese.", "That is, it is not possible that the learner is reusing an L1 structure, because the structure used only occurs in English.", "The FCE dataset errors were grouped by learner L1, and each error was annotated by one annotator.", "The annotator of the Chinese errors is a native speaker of Mandarin Chinese and English who teaches Chinese as a foreign language.", "She is also able to read and write in both languages, with higher proficiency writing in English.", "She speaks multiple dialects of Mandarin originating from South-East China.", "Furthermore, she has taken linguistics courses on English syntax.", "The annotator had access to a dataset containing all the errors made by Chinese native speakers.",
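Summary statistics in the style of Table 4 can be recomputed directly from the released annotations. The following is a hypothetical sketch; the column names error_type and negative_transfer are ours and may not match the dataset's actual schema.

```python
import pandas as pd

# Hypothetical rows mirroring the annotation schema; real data would be
# loaded from the released dataset files instead.
df = pd.DataFrame({
    "error_type": ["MD", "MD", "RP", "RV", "TV", "RP"],
    "negative_transfer": [True, True, True, False, True, False],
})

# Per error type: total count, transfer-related count, and percentage.
summary = df.groupby("error_type")["negative_transfer"].agg(
    total="count", transfer="sum")
summary["transfer_pct"] = 100 * summary["transfer"] / summary["total"]
print(summary)
```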
"To facilitate the annotation process, the errors in the dataset were further grouped by error type.", "The annotator then worked on one error category at a time.", "For example, she analyzed and annotated all the wrong verb tense errors in the dataset before moving on to another error category.", "This procedure helped keep the annotator focused on a small number of grammatical structures at a time, which aided the recognition of common error patterns.", "In fact, this structured use of error types is one of the reasons presented by Nicholls (2003) for the addition of error type features to learner English datasets.", "Beyond the grammatical errors and their types, the annotator had access to more information about the errors, such as the context surrounding the erroneous utterance and the Extensible Markup Language (XML) data extracted from the FCE dataset.", "These two features proved useful for elucidating semantic errors.", "A semantic error initially looks like an annotation error, since the utterance's grammatical structure is not problematic.", "However, when the annotator checked the context around the error, she would often find its cause to be context-related.", "In the sentence 'I have never been to in my life.', the word never does not seem incorrect, although it is tagged as such.", "By looking at the context surrounding this error, 'It was the worst show and theatre I have never been to in my life.', it is possible to see that the word never should indeed be replaced with the word ever.", "During the annotation procedure, ambiguous cases were discussed and reviewed by the annotator and the research group in weekly meetings.", "The annotator highlighted entries that she found hard to label, and those were discussed within the group.", "Such cases ranged from entries that were deemed erroneous by the FCE annotators due to language variety (e.g., British or American idioms), to entries that did not have an equivalent structure in the learners' L1 (e.g., hyphenated words, which do not exist in Chinese), to semantic errors (e.g., errors in which the grammatical structure is not incorrect, but the utterance does not fit the overall essay context).", "Table 5: Ambiguous errors from the FCE dataset (ambiguity type: incorrect utterance; correct utterance; ambiguity; number of cases): British vs American English varieties: 'We all would like to go there.'; 'We would all like to go there.'; the incorrect version of the sentence is more commonly used in the American variety of English and is not incorrect in that variety; 18 cases. Chinese does not have an equivalent structure: 'I'm standing on your left hand side.'; 'I'm standing on your left-hand side.'; hyphens do not have a parallel structure in Chinese; 17 cases. Semantic errors tagged as structural errors: 'You could find a restaurant.'; 'You can find a restaurant.'; although the verb could is in the past tense, some learners may choose to use it to indicate respect; 10 cases.",
"Table 5 presents examples of errors that were discussed during the annotation process.", "These errors are considered ambiguous with regards to whether they should be labelled as transfer related.", "The annotation scheme was designed to highlight the relationship between the learner error and the learner's L1.", "Beyond the boolean label representing whether an error is related to negative language transfer, each entry carries information about the possible reason behind that learner mistake.", "Even when the relationship between the error and the learner's L1 is not apparent, the annotation scheme provides a possible cause for the error.", "In such cases, the cause is not related to language transfer.", "The error cause feature was heavily influenced by language teacher guides, books that aim to make teachers aware of the learner errors they may encounter in the classroom, e.g., Learner English: A Teacher's Guide to Interference and other Problems by Swan and Smith (2001).", "Guides like these have been written based on years of in-classroom experience and contain information about error causes along with potential learner feedback.", "These guides were used as a baseline for negative language transfer detection during the annotation process.", "Other important sources of guidance for the error cause feature annotation were Chinese and English grammar books and guides (Faigley, 2015; Li and Thompson, 1989; https://www.grammarly.com/blog/category/handbook/).", "These sources allowed the direct contrast of erroneous utterances with language rules, and this contrast enabled the derivation of possible causes for learner mistakes.", "Second language acquisition researchers and language teachers are well acquainted with learner errors that are related to the learners' L1s.", "These communities have produced comprehensive guides to learner errors and their causes.", "Learner language guides allow learners and teachers to identify the reasons behind certain error types and, with that, better understand and prevent those mistakes.", "In some of these guides, the reader will find information that connects learners' L1s with common error types committed by native speakers of that language.", "Our new dataset enables the use of other indicators, such as linguistic features and proficiency levels, to identify errors related to negative language transfer.", "To understand the effect of linguistic features on negative language transfer prediction, we built classification models that predict when a learner error is related to negative language transfer.", "We wanted to explore the relationship between negative language transfer and the linguistic features of errors, such as part-of-speech (POS) tags and dependency labels, since these features are made available by this new dataset.", "In this experiment, we used the new negative language transfer dataset to compare the predictive power of classification models for negative transfer.", "These models are trained on error features from the new negative language transfer dataset.", "The models output whether the errors are related to negative transfer.", "This is a binary classification problem in which most of the available features are categorical.", "For this reason, we converted the categorical features, such as error types, into one-hot-encoding columns and binary vectors.", "The one-hot-encoding conversion creates one new column in the dataset for each unique categorical value.", "The binary vector conversion creates one new column in the dataset containing a binary number in which the position of the digit one corresponds to the category of the entry.",
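The two conversions described above can be illustrated as follows. This is a minimal sketch with made-up error type values; the dataset's real loading code and category inventory will differ.

```python
import pandas as pd

errors = pd.DataFrame({"error_type": ["MD", "RP", "TV", "RP"]})

# One-hot encoding: one new column per unique categorical value.
one_hot = pd.get_dummies(errors["error_type"], prefix="type")

# Binary vector: a single new column holding a binary number whose
# one-digit position marks the category of the entry.
cats = errors["error_type"].astype("category")
n_categories = len(cats.cat.categories)
errors["binary_vector"] = [
    "".join("1" if i == code else "0" for i in range(n_categories))
    for code in cats.cat.codes
]
print(pd.concat([errors, one_hot], axis=1))
```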
"The conversion of categorical features into one-hot-encoding columns increased the number of dimensions in our data.", "Hence, we decided to experiment with a random forest classifier, a classification model that is known to perform well with high-dimensional data (Xu et al., 2012).", "For our baseline model, we decided to use a logistic regression model trained only on the error type features.", "This choice was based on the parallel that can be drawn between the dataset's error type information and the teacher guide descriptions of connections between L1s and specific error patterns.", "A strong baseline for the experiment relies solely on error types to predict negative language transfer.", "Both classifiers were trained using the models available in the Python library scikit-learn.", "Since the new dataset contains actual learner writing, it is possible to extract a wide range of linguistic features from the sentences in the dataset.", "We used the Python library spaCy to extract dependency labels, Universal Dependencies POS tags (Nivre et al., 2016), and Penn Treebank POS tags (Marcus et al., 1993) from the erroneous utterances and their surrounding tokens.", "These features were then converted into one-hot-encoding columns and binary vectors, as described above.", "Using the features extracted from the dataset, we performed an initial step of feature selection to determine the most relevant features for predicting the negative language transfer label.", "The feature selection process consisted of performing 10-fold cross validation with 90% of the dataset as training data.", "The remaining 10% was held out for testing.", "We performed the cross validation on all feature set combinations, training a random forest classifier on nine folds and testing it on the remaining one.", "The mean score for each feature set combination was used to select the best performing set.", "The best performing model in the feature selection process was trained with three features: the error length (the number of words in the error), the error type (described in Nicholls (2003)), and the Penn Treebank POS tags of the erroneous utterance plus the POS tags of the error's two subsequent words.", "Table 6 presents an example of the features selected.", "The columns 'Error type' and 'POS tags trigram' were converted into one-hot-encoding columns during the feature selection, training, and testing processes but are presented here as categorical data for intelligibility.", "After feature selection, a random forest classifier was trained on the three best performing features using 90% of the dataset, i.e., 2952 error instances.", "This model accurately classified 78.04% of the test samples as negative language transfer or not.", "The baseline model, a logistic regression model trained on the error type features, achieved 72.56% accuracy on the test set.", "Table 7 presents the accuracy, precision, and recall scores yielded by both the baseline and random forest models on the test set, which contained 328 error instances.", "The random forest model's advantage over the baseline suggests that more information about learners' incorrect utterances captures more of the language transfer phenomenon.",
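The experimental protocol above (a held-out 10% test split, 10-fold cross validation over feature-set combinations with a random forest, and a logistic regression baseline restricted to error type features) can be sketched with scikit-learn. The feature blocks below are random stand-ins for the real one-hot matrices, so the printed scores are meaningless; only the procedure is illustrative.

```python
from itertools import combinations

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)
# Random stand-ins for the real one-hot feature blocks and labels
# (3280 = the number of errors with a transfer / non-transfer label).
blocks = {"error_length": rng.integers(1, 6, (3280, 1)),
          "error_type": rng.integers(0, 2, (3280, 75)),
          "pos_trigram": rng.integers(0, 2, (3280, 120))}
y = rng.integers(0, 2, 3280)

train_idx, test_idx = train_test_split(np.arange(len(y)), test_size=0.1,
                                       random_state=0)

def matrix(names):
    return np.hstack([blocks[n] for n in sorted(names)])

# 10-fold cross validation on the training split for every combination
# of feature blocks; keep the combination with the best mean score.
candidates = [c for r in range(1, len(blocks) + 1)
              for c in combinations(blocks, r)]
best = max(candidates, key=lambda c: cross_val_score(
    RandomForestClassifier(random_state=0),
    matrix(c)[train_idx], y[train_idx], cv=10).mean())

forest = RandomForestClassifier(random_state=0).fit(
    matrix(best)[train_idx], y[train_idx])
baseline = LogisticRegression(max_iter=1000).fit(
    blocks["error_type"][train_idx], y[train_idx])

print("random forest:", forest.score(matrix(best)[test_idx], y[test_idx]))
print("baseline:", baseline.score(blocks["error_type"][test_idx], y[test_idx]))
```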
"The model misclassified 30% of the replace punctuation errors.", "Although Chinese learners are known to replace periods with commas incorrectly (Liu, 2011), the error category by itself is not enough to make an accurate classification.", "In contrast, the random forest classifier mislabelled 16% of the replace punctuation errors.", "The random forest approach misclassified entries as both negative transfer and not negative transfer related, demonstrating that this approach does not simply associate one error type with one output label.", "Another error category in which the random forest classifier outperformed the baseline model was wrong verb tense errors.", "The error type features do not provide information about the verb tense that was used incorrectly, but the POS tags extracted from the incorrect utterance do.", "This extra information helps the random forest classifier make more accurate predictions about negative language transfer related errors.", "These results suggest that negative language transfer classification can benefit from features other than the error type.", "Furthermore, they show that linguistic features are important in the identification of negative language transfer errors.", "Our dataset is the first we are aware of that annotates a large amount of learner English data with negative language transfer features and error causes.", "It has the potential to improve the performance of computational linguistics tasks, such as native language transfer identification and grammatical error correction.", "More importantly, its content can benefit English teachers and learners by making more personalized error feedback available.", "Another potential application of the dataset is in the automatic detection of negative language transfer.", "This application could help provide real-time L1-informed feedback to English learners.", "Our research group is currently working on annotating errors from other L1 learner groups.", "We are also expanding our annotation process to learner data from sources other than the FCE dataset, such as the Lang-8 English corpus (Mizumoto et al., 2011).", "With that, we hope to broaden the scope of English learners supported by L1-informed error feedback.", "The new datasets presented in this paper are built on top of the FCE dataset described in Yannakoudakis et al. (2011).",
"The FCE dataset contains anonymised essays from First Certificate in English test-takers between the years of 2000 and 2001.", "The essays' meta-data contains information about the age range and native language of the learners.", "Although the original dataset description does not address how the learners' consent was obtained, Cambridge Assessment should be governed by the same consent procedures as other UK researchers.", "The candidate privacy policy from the Cambridge Assessment website (https://www.cambridgeenglish.org/ch/fr/footer/data-protection/candidates/) states that the learners' data could be used in developing and delivering publications and other resources that support learning.", "The annotation procedure described in this paper was performed by undergraduate and graduate students as part of individual project courses and research assistantships, respectively.", "All three authors are fluent in at least one variety of English.", "Two of them also have deep familiarity with other English varieties.", "This knowledge was used to ensure that the negative language transfer annotations do not reinforce existing power structures around language varieties and standard forms of English.", "That said, cases of linguistic imperialism are likely to remain in the dataset.", "Another facet that may limit these datasets' applicability is the fact that the FCE annotated essays were collected 20 years ago.", "As both the Chinese and English languages have evolved and possibly intersected over time, the occurrence of some negative language transfer errors may have decreased.", "For example, the prevalence of determiner omission errors in Chinese L1 English communication has been targeted from an instructional perspective in recent years, which may have lowered the occurrence rate of this negative language transfer error.", "Demmans Epp has specialized in computer-assisted language learning and she has taught English as a second or foreign language in a variety of educational contexts.", "She has training in several areas of linguistics that include language acquisition and sociolinguistics.", "She is a first language speaker of Canadian English and has experience working in American and British English contexts.", "… is proficient in English.", "She has experience working in British and Canadian English contexts.", "Zhao grew up speaking Mandarin Chinese and is familiar with various dialects of Chinese.", "She received her education in American and Canadian English and is familiar with British English to a certain extent.", "On the English language front, the usage of the pronoun they has changed and may have rendered some of the entries in our dataset obsolete.", "The new dataset's main purpose is to aid English language learning by providing personalized error causes according to the learner's L1.", "It aims to help English as a Second Language learners acquire a better understanding of the English language by contrasting it to the learner's L1.", "Although this type of information tends to be helpful to language learners, there might be learners who do not benefit from it.", "The data available in the dataset was reviewed by our research group to ensure clarity and correctness.", "We do not foresee additional risks stemming from the usage of the new dataset.", "We acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada (NSERC) [RGPIN-2018-03834] and the Social Sciences and Humanities Research Council (SSHRC).", "We would also 
like to acknowledge the anonymous reviewers for their insightful and valuable feedback." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "We demonstrate the surprising strength of unimodal baselines in multimodal domains, and make concrete recommendations for best practices in future research.", "Where existing work often compares against random or majority class baselines, we argue that unimodal approaches better capture and reflect dataset biases and therefore provide an important comparison when assessing the performance of multimodal techniques.", "We present unimodal ablations on three recent datasets in visual navigation and QA, seeing an up to 29% absolute gain in performance over published baselines.", "All datasets have biases.", "Baselines should capture these regularities so that outperforming them indicates a model is actually solving a task.", "In multimodal domains, bias can occur in any subset of the modalities.", "To address this, we argue it is not sufficient for researchers to provide random or majority class baselines; instead we recommend presenting results for unimodal models.", "We investigate visual navigation and question answering tasks, where agents move through simulated environments using egocentric (first person) vision.", "We find that unimodal ablations (e.g., language only) in these seemingly multimodal tasks can outperform corresponding full models ( 4.1).", "This work extends observations made in both the Computer Vision (Goyal et al., 2018; Cirik et al., 2018) and Natural Language (Mudrakarta et al., 2018; Glockner et al., 2018; Poliak et al., 2018; Gururangan et al., 2018; Kaushik and Lipton, 2018) communities that complex models often perform well by fitting to simple, unintended correlations in the data, bypassing the complex grounding and reasoning that experimenters hoped was necessary for their tasks.", "We ablate models from three recent papers: (1) navigation (Figure", "1) using images of real homes paired with crowdsourced language descriptions (Anderson et al., 2018); and (2, 3) navigation and egocentric question answering (Gordon et al., 2018; Das et al., 2018a) in simulation with synthetic questions.", "We find that unimodal ablations often outperform the baselines that accompany these tasks.", "Recommendation for Best Practices: Our find-ings show that in the new space of visual navigation and egocentric QA, all modalities, even an agent's action history, are strongly informative.", "Therefore, while many papers ablate either language or vision, new results should ablate both .", "Such baselines expose possible gains from unimodal biases in multimodal datasets irrespective of training and architecture details.", "In the visual navigation and egocentric question answering tasks, at each timestep an agent receives an observation and produces an action.", "Actions can move the agent to a new location or heading forward turn left turn right tilt up tilt down end start f o r w a r d t u r n l e f t t u r n r i g h t t i l t u p t i l t d o w n e n d s t a r t C o n d i t i o n a l M a r g i n a l .36 .44 .43 .54 .54 .393 .255 .257 .22 .22 .16 .071 .012 .012 .001 .00 .00 .00 .00 .00 .00 .00 .02 .02 .01 .01 .01 .01 Figure 2: P ( act = col | prev = row ) and marginal action distributions in Matterport training.", "Peaked distributions enable agents to memorize simple rules like not turning left immediately after turning right, or moving forward an average number of steps.", "(e.g., turn left ), or answer questions (e.g., answer brown' ).", "At timestep t , a multimodal model M takes in a visual input V t and language question or navigation command L to predict the next action a t 
.", "The navigation models we examine also take in their action from the previous timestep, a t 1 , and minimally sensed' world information W specifying which actions are available (e.g., that forward is unavailable if the agent is facing a wall).", "In each benchmark, M corresponds to the au-thor's released code and training paradigm.", "In addition to their full model, we evaluate the role of each input modality by removing those inputs and replacing them with zero vectors.", "Formally, we define the full model and three ablations: Full Model is M ( V t , L , a t 1 ; W ) (2) A is M ( (cid:126) 0 , (cid:126) 0 , a t 1 ; W ) (3) A + V is M ( V t , (cid:126) 0 , a t 1 ; W ) (4) A + L is M ( (cid:126) 0 , L , a t 1 ; W ) (5) corresponding to models with access to A ction inputs, V ision inputs, and L anguage inputs.", "These ablations preserve the architecture and number of parameters of M by changing only its inputs.", "We evaluate on navigation and question answering tasks across three benchmark datasets: Matterport Room-to-Room (no question answering compo-nent), and IQUAD V1 and EQA (question answering that requires navigating to the relevant scene in the environment) (Anderson et al., 2018; Gordon", "et al., 2018; Das et al., 2018a).", "We divide the latter two into separate navigation and question answering components.", "We then train and evaluate models separately per subtask to analyze accuracy.", "An agent is given a route in English and navigates through a discretized map to the specified destination (Anderson et al., 2018).", "This task includes high fidelity visual inputs and crowdsourced natural language routes.", "Published Full Model: At each timestep an LSTM decoder uses a ResNet-encoded image V t and previous action a t 1 to attend over the states of an LSTM language encoder ( L ) to predict navigation action a t (seen in Figure 2).", "Published Baseline: The agent chooses a random direction and takes up to five forward actions, turning right when no forward action is available.", "IQUAD V1 (Gordon et al., 2018) contains three question types: existence (e.g., Is there a ...? ), counting (e.g., How many ...? ) where the answer ranges from 0 to 3, and spatial relation: (e.g., Is there a ... in the ...? ).", "The data was constructed via randomly generated configurations to weaken majority class baselines (Figure 3).", "To evaluate the navigation subtask, we introduce a new THOR-Nav benchmark.", "1 The agent is placed in a random location in the room and must approach one of fridge, garbage can, or microwave in response to a natural language question.", "Although we use the same full model as Gordon et al. (2018), our QA results are not directly comparable.", "In particular, Gordon et al. (2018) do not quantify the effectiveness of the QA component independent of the scene exploration (i.e. navigation and interaction).", "To remove the scene explo-1 Formed from a subset of IQUAD V1 questions.", "ration steps of Gordon et al. 
"We use ground-truth rather than YOLO (Redmon et al., 2016) due to speed constraints.", "Nav Full Model: The image and ground-truth semantic segmentation mask V_t, tiled question L, and previous action a_{t-1} are encoded via a CNN which outputs a distribution over actions.", "Optimal actions are learned via teacher forcing.", "Nav Baseline: The agent executes 100 randomly chosen navigation actions then terminates.", "In AI2THOR (Kolve et al., 2017), none of the kitchens span more than 5 meters.", "With a step-size of 0.25 meters, we observed that 100 actions was significantly longer than the shortest path length.", "Published QA Full Model: The question encoding L is tiled and concatenated with a top-down view V of the ground truth location of all objects in the scene.", "This is fed into several convolutions, a spatial sum, and a final fully connected layer which outputs a likelihood for each answer.", "EQA (Das et al., 2018a) questions are programmatically generated to refer to a single, unambiguous object for a specific environment, and are filtered to avoid easy questions (e.g., What room is the bathtub in?).", "At evaluation, an agent is placed a fixed number of actions away from the object.", "Published Nav Full Model: At each timestep, a planner LSTM takes in a CNN-encoded image V_t, LSTM-encoded question L, and the previous action a_{t-1} and emits an action a_t.", "The action is executed in the environment, and then a lower-level controller LSTM continues to take in new vision observations and a_t, either repeating a_t again or returning control to the planner.", "Published Nav Baseline: This baseline model is trained and evaluated with the same inputs as the full model, but does not pass control to a lower-level controller, instead predicting a new action using the planner LSTM at each timestep (i.e., no hierarchical control).", "Das et al. (2018a) name this baseline LSTM+Question.", "This approximates the agent having visited every possible location, interacted with all possible objects, and looked in all possible directions before answering.", "Published QA Full Model: Given the last five image encodings along the gold standard navigation trajectory, V_{t-4} ... V_t, and the question encoding L, image-question similarities are calculated via a dot product and converted via attention weights to a summary weight V, which is concatenated with L and used to predict the answer.", "Das et al. (2018a) name this oracle-navigation model ShortestPath+VQA.", "QA Baseline: Das et al. (2018a) provide no explicit baseline for the VQA component alone.", "We use a majority class baseline inspired by the data's entropy-based filtering.", "Across all benchmarks, unimodal baselines outperform baseline models used in or derived from the original works.", "Navigating unseen environments, these unimodal ablations outperform their corresponding full models on the Matterport (absolute 2.5% success rate) and EQA (0.06 m distance to target).",
"We evaluate our ablation baselines on Matterport, THOR-Nav, and EQA (Table 1), and discover that some unimodal ablations outperform their corresponding full models.", "For Matterport and THOR-Nav, success rate is defined by proximity to the target.", "For EQA, we measure absolute distance from the target in meters.", "Unimodal Performance: Across Matterport, THOR-Nav, and EQA, either A+V or A+L achieves better performance than existing baselines.", "We report on Matterport-validation since this allows comparing Seen versus Unseen house performance.", "For consistency with THOR-Nav and EQA, we here evaluate Matterport using teacher forcing.", "In Matterport, the A+L ablation performs better than the Full Model in unseen environments.", "The diverse scenes in this simulator may render the vision signal more noisy than helpful in previously unseen environments.", "The A+V model in THOR-Nav and EQA is able to latch onto dataset biases in scene structure to navigate better than chance (for IQA) and better than the nonhierarchical baseline (in EQA).", "In EQA, A+V also outperforms the Full Model; the latent information about navigation from questions may be too distant for the model to infer.", "The agent with access only to its action history (A) outperforms the baseline agent in Matterport and THOR-Nav environments, suggesting it learns navigation correlations that are not captured by simple random actions (THOR-Nav) or programmatic walks away from the starting position (Matterport).", "Minimal sensing (which actions are available, W) coupled with the topological biases in trajectories (Figure 2) helps this nearly zero-input agent outperform existing baselines.", "Matterport Teacher vs. Student Forcing: With teacher forcing, at each timestep the navigation agent takes the gold-standard action regardless of what action it predicted, meaning it only sees steps along gold-standard trajectories.", "This paradigm is used to train the navigation agent in THOR-Nav and EQA.", "Under student forcing, the agent samples the action to take from its predictions.", "EQA full & baseline model performances do not exactly match those in Das et al. (2018a) because we use the expanded data updated by the authors: https://github.com/facebookresearch/EmbodiedQA/.",
"This learned agent begins to relate to work in minimally sensing robotics (O'Kane and LaValle, 2006).", "Loss is computed at each time step against the action that would have put the agent on the shortest path to the goal.", "Thus, the agent sees more of the scene, but can take more training iterations to learn to move to the goal.", "Table 2 gives the highest validation success rates across all epochs achieved in Matterport by models trained using student forcing.", "The unimodal ablations show that the Full Model, possibly because of more exploration and more training episodes, is better able to align the vision and language signals, enabling generalization in unseen environments that fails with teacher forcing.", "EQA Navigation Variants: Table 3 gives the average final distance from the target (d_T, used as the metric in Table 1) and the average minimum distance from target achieved along the path (d_min).", "These are measured during EQA episodes for agents starting 10, 30, and 50 actions away from the target in the EQA navigation task.", "At 10 actions away, the unimodal ablations tend to outperform the full model on both metrics, possibly due to the shorter length of the episodes (less data to train the joint parameters).", "The A+V ablation performs best among the ablations, and ties with or outperforms the Full Model in all but one setting, suggesting that the EQA Full Model is not taking advantage of language information under any variant.", "We evaluate our ablation baselines on IQUAD V1 and EQA, reporting top-1 QA accuracy (Table 4) given gold standard navigation information as V.", "These decoupled QA models do not take in a previous action, so we do not consider A-only ablations for this task.", "For IQUAD V1, ablations perform nearly at chance.", "The V-only model with access to the locations of all scene objects only improves by 2% over random guessing.", "For EQA, single modality models perform significantly better than the majority class baseline.", "The vision-only model is able to identify salient colors and basic room features that allow it to reduce the likely set of answers to an unknown question.", "The language-only models achieve nearly 50%, suggesting that despite the entropy filtering in Das et al. (2018a) each question has one answer that is as likely as all other answers combined (e.g., 50% of the answers for What color is the bathtub? are grey; see other examples in Figure 4).",
"Historically, semantic parsing was used to map natural language instructions to visual navigation in simulation environments (Chen and Mooney, 2011; MacMahon et al., 2006).", "Modern approaches use neural architectures to map natural language to the (simulated) world and execute actions (Paxton et al., 2019; Chen et al., 2018; Nguyen et al., 2018; Blukis et al., 2018; Fried et al., 2018; Mei et al., 2016).", "In visual question answering (VQA) (Antol et al., 2015; Hudson and Manning, 2019) and visual commonsense reasoning (VCR) (Zellers et al., 2019), input images are accompanied by natural language questions.", "Given the question, egocentric QA requires an agent to navigate and interact with the world to gather the relevant information to answer the question.", "In both cases, end-to-end neural architectures make progress on these tasks.", "For language annotations, task design, difficulty, and annotator pay can introduce unintended artifacts which can be exploited by models to cheat on otherwise complex tasks (Glockner et al., 2018; Poliak et al., 2018). [Footnote: Majority class and chance for IQUAD V1 both achieve 50%, 50%, 25% when conditioned on question type; our Baseline model achieves the average of these.] [Figure 4: Qualitative results on the EQA task. The language-only model can pick out the most likely answer for a question. The vision-only model finds salient color and room features, but is unaware of the question.]", "Such issues also occur in multimodal data like VQA (Goyal et al., 2018), where models can answer correctly without looking at the image.", "In image captioning, work has shown competitive models relying only on nearest-neighbor lookups (Devlin et al., 2015) as well as exposed misalignment between caption relevance and text-based metrics (Rohrbach et al., 2018).", "Our unimodal ablations of visual navigation and QA benchmarks uncover similar biases, which deep architectures are quick to exploit.", "In this work, we introduce an evaluation framework and perform the missing analysis on several new datasets.", "While new state-of-the-art models are being introduced for several of these domains (e.g., Matterport: Ma et al., 2019a; Ke et al., 2019; Wang et al., 2019; Ma et al., 2019b; Tan et al., 2019; Fried et al., 2018; and EQA: Das et al., 2018b), they lack informative, individual unimodal ablations (i.e., ablating both language and vision) of the proposed models.", "We find a performance gap between baselines used in or derived from the benchmarks examined in this paper and unimodal models, with unimodal models outperforming those baselines across all benchmarks.", "These unimodal models can even outperform their multimodal counterparts.", "In light of this, we recommend all future work include unimodal ablations of proposed multimodal models to vet and highlight their learned representations.", "This work was supported by NSF IIS-1524371, 1703166, NRI-1637479, IIS-1338054, 1652052, ONR N00014-13-1-0720, and the DARPA CwC program through ARO (W911NF-15-1-0543)." ]
[ "objective", "method", "result", "abstain", "abstain", "abstain", "result", "objective", "result", "other", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "other", "result", "abstain", "objective", "other" ]
[ "A growing number of state-of-the-art transfer learning methods employ language models pretrained on large generic corpora.", "In this paper we present a conceptually simple and effective transfer learning approach that addresses the problem of catastrophic forgetting.", "Specifically, we combine the task-specific optimization function with an auxiliary language model objective, which is adjusted during the training process.", "This preserves language regularities captured by language models, while enabling sufficient adaptation for solving the target task.", "Our method does not require pretraining or finetuning separate components of the network and we train our models end-to-end in a single step.", "We present results on a variety of challenging affective and text classification tasks, surpassing well established transfer learning methods with greater level of complexity.", "Pretrained word representations captured by Language Models (LMs) have recently become popular in Natural Language Processing (NLP).", "Pretrained LMs encode contextual information and high-level features of language, modeling syntax and semantics, producing state-of-the-art results across a wide range of tasks, such as named entity recognition (Peters et al., 2017), machine translation (Ramachandran et al., 2017) and text classification (Howard and Ruder, 2018).", "However, in cases where contextual embeddings from language models are used as additional features (e.g. ELMo (Peters et al., 2018)), results come at a high computational cost and require task-specific architectures.", "At the same time, approaches that rely on fine-tuning a LM to the task at hand (e.g. ULMFiT (Howard and Ruder, 2018)) depend on pretraining the model on an extensive vocabulary and on employing a sophisticated slanted triangular learning rate scheme to adapt the parameters of the LM to the target dataset.", "We propose a simple and effective transfer learning approach, that leverages LM contextual representations and does not require any elaborate scheduling schemes during training.", "We initially train a LM on a Twitter corpus and then transfer its weights.", "We add a task-specific recurrent layer and a classification layer.", "The transferred model is trained end-to-end using an auxiliary LM loss, which allows us to explicitly control the weighting of the pretrained part of the model and ensure that the distilled knowledge it encodes is preserved.", "Our contributions are summarized as follows:", "1) We show that transfer learning from language models can achieve competitive results, while also being intuitively simple and computationally effective.", "2) We address the problem of catastrophic forgetting, by adding an auxiliary LM objective and using an unfreezing method.", "3) Our results show that our approach is competitive with more sophisticated transfer learning methods.", "We make our code widely available.", "1 2 Related Work Unsupervised pretraining has played a key role in deep neural networks, building on the premise that representations learned for one task can be useful for another task.", "In NLP, pretrained word vectors (Mikolov et al., 2013; Pennington et al., 2014) are widely used, improving performance in various downstream tasks, such as part-of-speech tagging (Collobert et al., 2011) and question answering (Xiong et al., 2016).", "These pretrained word vectors serve as initialization of the embedding layer and remain frozen during training, while our pretrained language model also initializes the hidden layers of 
"Aiming to learn from unlabeled data, Dai and Le (2015) use unsupervised objectives such as sequence autoencoding and language modeling as pretraining methods.", "The pretrained model is then fine-tuned to the target task.", "However, the fine-tuning procedure of the language model to the target task does not include an auxiliary objective.", "Ramachandran et al. (2017) also pretrain encoder-decoder pairs using language models and fine-tune them to a specific task, using an auxiliary language modeling objective to prevent catastrophic forgetting.", "This approach, nevertheless, is only evaluated on machine translation tasks; moreover, the seq2seq (Sutskever et al., 2014) and language modeling losses are weighted equally throughout training.", "By contrast, we propose a weighted sum of losses, where the language modeling contribution gradually decreases.", "ELMo embeddings (Peters et al., 2018) are obtained from language models and improve the results in a variety of tasks as additional contextual representations.", "However, ELMo embeddings rely on character-level models, whereas our approach uses a word-level LM.", "They are, furthermore, concatenated to pretrained word vectors and remain fixed during training.", "We instead propose a fine-tuning procedure, aiming to adjust a generic architecture to different end tasks.", "Moreover, BERT (Devlin et al., 2018) pretrains language models and fine-tunes them on the target task.", "An auxiliary task (next sentence prediction) is used to enhance the representations of the LM.", "BERT fine-tunes masked bi-directional LMs.", "Nevertheless, we are limited to a uni-directional model.", "Training BERT requires vast computational resources, while our model only requires 1 GPU.", "We note that our approach is not orthogonal to BERT and could be used to improve it, by adding an auxiliary LM objective and weighing its contribution.", "Towards the same direction, ULMFiT (Howard and Ruder, 2018) shows impressive results on a variety of tasks by employing pretrained LMs.", "The proposed pipeline requires three distinct steps: (1) pretraining the LM, (2) fine-tuning it on a target dataset with an elaborate scheduling procedure, and (3) transferring it to a classification model.", "Our proposed model is closely related to ULMFiT.", "However, ULMFiT trains an LM and fine-tunes it to the target dataset, before transferring it to a classification model.", "While fine-tuning the LM to the target dataset, the metric (e.g., accuracy) that we intend to optimize cannot be observed.",
"We propose adopting a multi-task learning perspective, via the addition of an auxiliary LM loss to the transferred model, to control the loss of the pretrained and the new task simultaneously.", "The intuition is that we should avoid catastrophic forgetting, but at the same time allow the LM to distill the knowledge of the prior data distribution and keep the most useful features.", "Multi-Task Learning (MTL) via hard parameter sharing (Caruana, 1993) in neural networks has proven to be effective in many NLP problems (Collobert and Weston, 2008).", "More recently, alternative approaches have been suggested that only share parameters across lower layers (Søgaard and Goldberg, 2016).", "By introducing part-of-speech tags at the lower levels of the network, the proposed model achieves competitive results on chunking and CCG super tagging.", "Our auxiliary language model objective follows this line of thought and intends to boost the performance of the higher classification layer.", "We introduce SiATL, which stands for Single-step Auxiliary loss Transfer Learning.", "In our proposed approach, we first train an LM.", "We then transfer its weights and add a task-specific recurrent layer to the final classifier.", "We also employ an auxiliary LM loss to avoid catastrophic forgetting.", "LM Pretraining.", "We train a word-level language model, which consists of an embedding LSTM layer (Hochreiter and Schmidhuber, 1997), 2 hidden LSTM layers and a linear layer.", "We want to minimize the negative log-likelihood of the LM: $L(p) = -\frac{1}{N} \sum_{n=1}^{N} \sum_{t=1}^{T_n} \log p(x_t^n \mid x_1^n, \ldots, x_{t-1}^n)$, where $p(x_t^n \mid x_1^n, \ldots, x_{t-1}^n)$ is the distribution of the $t$-th word in the $n$-th sentence given the $t-1$ words preceding it, and $N$ is the total number of sentences.", "Transfer & auxiliary loss.", "We transfer the weights of the pretrained model and add one LSTM with a self-attention mechanism (Lin et al., 2017; Bahdanau et al., 2015).", "In order to adapt the contribution of the pretrained model to the task at hand, we introduce an auxiliary LM loss during training.", "Figure 1: High-level overview of our proposed TL architecture.", "The joint loss is the weighted sum of the task-specific loss $L_{task}$ and the auxiliary LM loss $L_{LM}$, where $\gamma$ is a weighting parameter that enables adaptation to the target task while keeping the useful knowledge from the source task.", "Specifically: $L = L_{task} + \gamma \cdot L_{LM}$. Exponential decay of $\gamma$.", "An advantage of the proposed TL method is that the contribution of the LM can be explicitly controlled in each training epoch.", "In the first few epochs, the LM should contribute more to the joint loss of SiATL so that the task-specific layers adapt to the new data distribution.", "After the knowledge of the pretrained LM is transferred to the new domain, the task-specific component of the loss function becomes more important and $\gamma$ should become smaller.", "This is also crucial due to the fact that the new, task-specific LSTM layer is randomly initialized.", "Therefore, by back-propagating the gradients of this layer to the pretrained LM in the first few epochs, we would add noise to the pretrained representation.", "To avoid this issue, we choose to initially pay attention to the LM objective and gradually focus on the classification task.", "In this paper, we use an exponential decay for $\gamma$ over the training epochs.", "Sequential Unfreezing. Instead of fine-tuning all the layers simultaneously, we propose unfreezing them sequentially, according to Howard and Ruder (2018) and Chronopoulou et al. (2018).",
(2018).", "We first fine-tune only the extra, randomly initialized LSTM and the output layer for n 1 epochs.", "At the n th epoch, we unfreeze the pretrained hidden layers.", "We let the model fine-tune, until epoch k 1 .", "Finally, at epoch k , we also unfreeze the embedding layer and let the network train until convergence.", "The values of n and k are obtained through grid search.", "We find the sequential unfreezing scheme important, as it minimizes the risk of overfitting to small datasets.", "Optimizers.", "While pretraining the LM, we use Stochastic Gradient Descent (SGD).", "When we transfer the LM and fine-tune on each classification task, we use 2 different optimizers: SGD for the pretrained LM (embedding and hidden layer) with a small learning rate, in order to preserve its contextual information.", "As for the new, randomly initialized LSTM and classification layers, we employ Adam (Kingma and Ba, 2015), in order to allow them to train fast and adapt to the target task.", "To pretrain the language model, we collect a dataset of 20 million English Twitter messages, including approximately 2M unique tokens.", "We use the 70K most frequent tokens as vocabulary.", "We evaluate our model on five datasets: Sent17 for sentiment analysis (Rosenthal et al., 2017), PsychExp for emotion recognition (Wall-bott and Scherer, 1986), Irony18 for irony detection (Van Hee et al., 2018), SCv1 and SCv2 for sarcasm detection (Oraby et al., 2016; Lukin and Walker, 2013).", "More details about the datasets can be found in Table 1.", "To preprocess the tweets, we use Ekphra-sis (Baziotis et al., 2017).", "For the generic datasets, we use NLTK (Loper and Bird, 2002).", "For the NBoW baseline, we use word2vec (Mikolov et al., 2013) 300-dimensional embeddings as features.", "For the neural models, we use an LM with an embedding size of 400, 2 hidden layers, 1000 neurons per layer, embedding dropout 0.1, hidden dropout 0.3 and batch size 32.", "We add Gaussian noise of size 0.01 to the embedding layer.", "A clip norm of 5 is applied, as an extra safety measure against exploding gradients.", "For each text classification neural network, we add on top of the transferred LM an LSTM layer of size 100 with self-attention and a softmax classification layer.", "In the pretraining step, SGD with a learning rate of 0.0001 is employed.", "In the transferred model, SGD with the same learning rate is used for the pretrained layers.", "However, we use Adam (Kingma and Ba, 2015) with a learning rate of 0.0005 for the newly added LSTM and classification layers.", "For developing our models, we use PyTorch (Paszke et al., 2017) and Scikit-learn (Pedregosa et al., 2011).", "Baselines and Comparison.", "Table 2 summarizes our results.", "The top two rows detail the baseline performance of the BoW and NBoW models.", "We observe that when enough data is available (e.g. 
"Next, the results for the generic classifier initialized from a pretrained LM (P-LM) are shown with and without sequential unfreezing, followed by the results of the proposed model SiATL.", "SiATL is also directly compared with its close relative ULMFiT (trained on Wiki-103 or Twitter) and the state of the art for each task; ULMFiT also fine-tunes an LM for classification tasks.", "The proposed SiATL method consistently outperforms the baselines, the P-LM method and ULMFiT in all datasets.", "Even though we do not perform any elaborate learning rate scheduling and we limit ourselves to pretraining on Twitter, we obtain higher results in two Twitter datasets and three generic ones.", "Auxiliary LM objective.", "The effect of the auxiliary objective is highlighted in very small datasets, such as SCv1, where it results in an impressive boost in performance (7%).", "We hypothesize that when the classifier is simply initialized with the pretrained LM, it overfits quickly, as the target vocabulary is very limited.", "The auxiliary LM loss, however, permits refined adjustments to the model and fine-grained adaptation to the target task.", "Exponential decay of $\gamma$.", "For the optimal interval, we empirically find that exponentially decaying $\gamma$ from 0.2 to 0.1 over the number of training epochs provides best results for our classification tasks.", "A heatmap of $\gamma$ is depicted in Figure 3.", "We observe that small values of $\gamma$ should be employed, in order to scale the LM loss to the same order of magnitude as the classification loss over the training period.", "Nevertheless, the use of exponential decay instead of linear decay does not provide a significant improvement, as our model is not sensitive to the way of decaying the hyperparameter $\gamma$.", "Sequential Unfreezing.", "Results show that sequential unfreezing is crucial to the proposed method, as it allows the pretrained LM to adapt to the target word distribution.", "The performance improvement is more pronounced when there is a mismatch between the LM and task domains, i.e., the non-Twitter domain tasks.", "Specifically for the PsychExp and SCv2 datasets, sequential unfreezing yields a significant improvement in F1, supporting our intuition.", "Number of training examples.", "Transfer learning is particularly useful when limited training data are available.", "Figure 2: Results of SiATL, our proposed approach (continuous lines) and ULMFiT (dashed lines) for different datasets (indicated by different markers) as a function of the number of training examples.", "We notice that for our largest dataset, Sent17, SiATL outperforms ULMFiT only by a small margin when trained on all the training examples available (see Table 2), while for the small SCv2 dataset, SiATL outperforms ULMFiT by a large margin and ranks very close to the state-of-the-art model (Ilic et al., 2018).", "Moreover, the performance of SiATL vs ULMFiT as a function of the training dataset size is shown in Figure 2.", "Note that the proposed model achieves competitive results on less than 1000 training examples for the Irony18, SCv2, SCv1 and PsychExp datasets, demonstrating the robustness of SiATL even when trained on a handful of training examples.", "Catastrophic forgetting.", "We observe that SiATL indeed provides a way of mitigating catastrophic forgetting.", "Empirical results shown in Table 2 indicate that by only adding the auxiliary language modeling objective, we obtain better results on all downstream tasks.", "Specifically, a comparison of the P-LM + aux model and the P-LM model shows that the performance of SiATL on classification tasks is improved by the auxiliary objective.",
"We hypothesize that the language model objective acts as a regularizer that prevents the loss of the most generalizable features.", "We introduce SiATL, a simple and efficient transfer learning method for text classification tasks.", "Figure 3: Heatmap of the effect of $\gamma$ on F1-score, evaluated on SCv2.", "Our approach is based on pretraining an LM and transferring its weights to a classifier with a task-specific layer.", "The model is trained using a task-specific objective function with an auxiliary LM loss.", "SiATL avoids catastrophic forgetting of the language distribution learned by the pretrained LM.", "Experiments on various text classification tasks yield competitive results, demonstrating the efficacy of our approach.", "Furthermore, our method outperforms more sophisticated transfer learning approaches, such as ULMFiT, in all tasks.", "In future work, we plan to move from Twitter to more generic domains and evaluate our approach on more tasks.", "Additionally, we aim to explore ways of scaling our approach to larger vocabulary sizes (Kumar and Tsvetkov, 2019) and of better handling out-of-vocabulary (OOV) words (Mielke and Eisner, 2018; Sennrich et al., 2015) in order to be applicable to diverse datasets.", "Finally, we want to explore approaches for improving the adaptive layer unfreezing process and the contribution of the language model objective (value of $\gamma$) to the target task.", "We would like to thank Katerina Margatina and Georgios Paraskevopoulos for their helpful suggestions and comments.", "This work has been partially supported by computational time granted from the Greek Research & Technology Network (GR-NET) in the National HPC facility ARIS.", "Also, the authors would like to thank NVIDIA for supporting this work by donating a TitanX GPU." ]
[ "abstain", "method", "objective", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "objective", "result", "objective", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "result", "method", "abstain", "objective", "other", "other", "other" ]
[ "Humans create things for a reason.", "Ancient people created spears for hunting, knives for cutting meat, pots for preparing food, etc.", "The prototypical function of a physical artifact is a kind of commonsense knowledge that we rely on to understand natural language.", "For example, if someone says She borrowed the book then you would assume that she intends to read the book, or if someone asks Can I use your knife? then you would assume that they need to cut something.", "In this paper, we introduce a new NLP task of learning the prototypical uses for human-made physical objects.", "We use frames from FrameNet to represent a set of common functions for objects, and describe a manually annotated data set of physical objects labeled with their prototypical function.", "We also present experimental results for this task, including BERT-based models that use language model predictions from masked patterns as well as artifact sense definitions from WordNet and frame definitions from FrameNet.", "Humans are a creative species.", "New objects are invented by people every day, and most are created for a reason.", "Knives were created for cutting, bicycles were created for transportation, and telephones were created for communication.", "Some objects can perform multiple functions (e.g., smart phones) and humans are also creative at finding secondary uses for objects (e.g., heavy objects are often used as makeshift paperweights).", "But when we mention physical objects in conversation or in writing, people generally infer that the object will be used in the most prototypical way, unless they are told otherwise.", "that often plays a role in natural language understanding.", "Consider the following examples of inferences that arise from physical artifacts.", "Example 1", "a) He killed the mayor with a gun .", "b) He killed the mayor with a knife .", "c) He killed the mayor with a bomb .", "Example 1 describes a killing with three different types of instruments.", "Most readers would assume that", "a) describes a shooting,", "b) describes a stabbing, and", "c) describes an explosion.", "But exactly how each instrument was used is implicit.", "We make different inferences about how they were used based on our knowledge of the objects.", "Example 2", "a) She finished the cigarette .", "b) She finished the puzzle .", "c) She finished the movie .", "Example 2 illustrates how we infer different actions based on the object when the main action is elided (i.e., finished means that some action has ended but the action itself is implicit).", "Most people would assume that the cigarette was smoked, the puzzle was solved, and the movie was watched.", "Example 3", "a) She put the cake in the box .", "b) She put the cake in the oven .", "c) She put the cake in the refrigerator .", "Example 3 illustrates second-order inferences that can follow from a sentence.", "The verb put means that the cake was placed somewhere, but the object of in leads to different inferences about intention.", "Putting a cake in an oven implies that it will be baked, but putting a cake in a refrigerator implies that it will be cooled.", "Example 4", "a) He ordered a taxi .", "b) He ordered a pizza .", "c) He ordered a t-shirt .", "Example 4 reveals inferences about motivations and future plans.", "If someone orders a taxi then we infer that they need transportation, if they order a pizza then we expect they will eat it, and if they order a t-shirt then we assume it will be worn.", "We believe that it is essential for NLP systems to 
"The goal of our research is to explore methods for learning the prototypical functions of human-made physical artifacts so that future NLP systems can benefit from this knowledge.", "First, we define a new NLP task to associate physical objects with frames from FrameNet as a canonical representation for their prototypical function.", "We introduce a gold standard data set of 938 physical artifacts that have each been labeled with a frame that represents its prototypical function based on human judgements.", "Second, we evaluate baseline models to assess how well existing resources and simple methods perform on this task.", "Third, we present transformer-based models for this task that exploit both masked sentence patterns and the definitions of physical artifacts and frames.", "Experiments show that our best model yields substantially better results than the baseline methods.", "Researchers have known for a long time that commonsense knowledge is essential for natural language understanding (Charniak, 1972; Schank and Abelson, 1977).", "Some of this work specifically argued that commonsense knowledge about physical objects, including functional knowledge, plays an important role in narrative text understanding (Burstein, 1979; Lehnert and Burstein, 1979).", "These observations have led to considerable work toward constructing commonsense knowledge repositories.", "The Cyc project (Lenat, 1995) built a large ontology of commonsense concepts and facts over many years.", "More recently, ConceptNet (Speer et al., 2017) captures commonsense knowledge in the form of predefined relations expressed in natural language words and phrases.", "It was built from Open Mind Common Sense, a crowd-sourced knowledge project (Singh, 2002), and later enhanced with other sources such as Wiktionary and WordNet (Miller, 1995).", "Within the NLP community, a variety of recent projects have focused on trying to acquire different types of commonsense knowledge, such as Forbes and Choi (2017); Collell et al. (2018); Rashkin et al. (2018); Yang et al. (2018).", "Sap et al. (2019) presented a crowd-sourced commonsense reasoning data set called ATOMIC that focuses on inferential knowledge related to events, which is organized as if-then relations.", "Bosselut et al. (2019) later proposed COMET, a transformer-based framework for automatic construction of commonsense knowledge bases that was trained from ATOMIC and ConceptNet.",
"Both ConceptNet and COMET include a UsedFor relation that is relevant to our task, and we evaluate their performance on our data set in Section 6.", "Of relevance to our work, Jiang and Riloff (2018) learned the prototypical functions of locations by identifying activities that represent a prototypical reason why people go to a location.", "For example, people go to restaurants to eat, airports to catch a flight, and churches to pray.", "They referred to the associated activity as a prototypical goal activity and presented a semi-supervised method to iteratively learn the goal activities.", "Our work is also related to frame semantics, which studies how we associate words and phrases with conceptual structures called frames (Fillmore, 1976), which characterize an abstract scene or situation.", "The Berkeley FrameNet project (Baker et al., 1998; Ruppenhofer et al., 2016) provides an online lexical database for frame semantics and a corpus of annotated documents.", "There has been substantial work on frame semantic parsing (e.g., Das et al., 2014; Peng et al., 2018), which is the task of automatically extracting frame structures from sentences.", "Several efforts have enhanced FrameNet by mapping it to other lexicons, such as WordNet, PropBank and VerbNet (Shi and Mihalcea, 2005; Palmer, 2009; Ferrandez et al., 2010).", "Pavlick et al. (2015) increased the lexical coverage of FrameNet through automatic paraphrasing and manual verification.", "Yatskar et al. (2016) introduced situation recognition, which is the problem of producing a concise summary of the situation that an image depicts.", "Similar to our work, they selected a subset of frames from FrameNet to represent possible situations depicted in an image.", "Our work uses a subset of frames from FrameNet to represent the prototypical functions for human-made physical artifacts.", "Our work was motivated by observing sentences that mention physical objects and realizing that we often infer a richer meaning for these sentences than what they explicitly state.", "We came to appreciate that the prototypical function of an object was the basis for many of our inferences, but we also recognized that not all objects have a prototypical function.", "In particular, naturally occurring objects rarely have a prototypical function (e.g., rock, snake).", "In contrast, human-made physical objects usually do have a prototypical function because they were created for a purpose.", "Consequently, we limited the scope of our work to human-made artifacts.", "Of course, some objects are commonly used for multiple purposes, but in most cases there seems to be one use that is dominant, so for the sake of tractability we decided to assign a single (most) prototypical function to each artifact for this research.", "We had initially planned to include food items, but many foods are also naturally occurring plants or animals (e.g., watermelon, shrimp), so we omitted them.", "It may be worth re-examining these limitations in future work.", "Another key decision that we had to make was how to represent the prototypical functions.", "Some recent work on commonsense knowledge acquisition has opted to generate words and phrases as expressions of a relation, such as ConceptNet (Speer et al., 2017) and ATOMIC (Sap et al., 2019).", "As an example, ConceptNet includes a relation called UsedFor that lists the following phrases as uses for a knife: stabbing, butter, cutting food, carving wood, slicing, boning.",
the following phrases as uses for a knife: stabbing, butter, cutting food, carving wood, slicing, boning.", "We chose to adopt a different approach.", "First, we wanted a canonical representation for each type of function that represents a general concept, rather than a list of phrases.", "This approach naturally captures clusters of objects (i.e., those assigned to the same frame) and avoids evaluation issues arising from differing phrases that may be learned for similar objects (e.g., cut vs. carve vs. slice).", "Second, we did not want to reinvent the wheel and develop a new taxonomy of action types ourselves.", "For these reasons, we chose to use the semantic frames in FrameNet as a canonical representation for our prototypical functions.", "Although FrameNet is neither perfect nor complete, it contains many of the actions that we needed.", "Overall, it serves as an appropriate platform for our work.", "This approach also opens up new avenues for research down the road.", "Although it is beyond the scope of this paper, we can imagine that sentences could trigger frames based on inferences originating from physical objects during semantic parsing.", "For example, She used a pencil should arguably be represented as a writing (Text Creation) event.", "However, we leave that challenge for future work.", "This paper focuses on the specific task of learning the prototypical functions for human-made physical artifacts using a subset of FrameNet frames as the set of function types.", "As explained in Section 3, our work focuses on artifacts that are 1) physical objects and 2) created by people.", "To acquire a list of objects that meet these criteria, we extracted all terms in synsets that are descendants of the artifact.n.01 synset in WordNet (Miller, 1995), except that we removed synsets for buildings and roads.", "We then removed a term from the list if the artifact sense was not its first sense definition.", "This process produced 8,822 entries, many of which met our criteria, except that the list still contained a lot of abstract terms (e.g., vocabulary, modernism).",
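The WordNet extraction step just described can be sketched in a few lines. The snippet below is a minimal reconstruction assuming NLTK's WordNet interface (the paper does not name its tooling); the building/road exclusion and the concreteness filtering described next are omitted for brevity.

```python
# Sketch of the artifact extraction: collect all terms in synsets below
# artifact.n.01 and keep a term only if that artifact sense is its first
# (most frequent) noun sense. Requires: pip install nltk; nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def artifact_terms():
    root = wn.synset('artifact.n.01')
    # Transitive closure over hyponym links gives all descendant synsets.
    descendants = set(root.closure(lambda s: s.hyponyms()))
    terms = set()
    for syn in descendants:
        for lemma in syn.lemmas():
            word = lemma.name()
            senses = wn.synsets(word, pos=wn.NOUN)
            # Keep the term only if its first sense is this artifact synset.
            if senses and senses[0] == syn:
                terms.add(word.replace('_', ' '))
    return terms

print(len(artifact_terms()))
```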
"To address this issue, we turned to Brysbaert et al. (2014), which presents concreteness ratings based on crowdsourcing for 37,058 English words and 2,896 two-word expressions.", "They used a 5-point rating scale ranging from abstract to concrete, so we extracted words with the part-of-speech noun and a rating ≥ 4.5, which produced a list of 3,462 concrete nouns.", "We then intersected this list with the terms extracted from WordNet, producing a set of 1,017 concrete physical artifacts.", "FrameNet 1.7 contains 1,221 frame definitions.", "However, not all of them are suitable for representing typical uses of physical artifacts, which should be actions that involve a physical object.", "For example, some frames are intended for abstract nominal categories (e.g., Calendric unit for temporal terms), high-level abstractions (e.g., Intentionally act, which sits above more specific frames), and events or states that are not typically associated with physical artifacts (e.g., Judgement).", "To focus on an appropriate subset of frames, we manually selected 42 frames in FrameNet that represent actions that are common functions of human-made physical artifacts.", "We intentionally did not select frames that categorize nouns in a general way.", "For example, FrameNet contains an Artifact frame that includes oven, phone and wheel as its lexical units.", "This frame only serves to identify terms that represent physical objects, and we wanted frames that represent a function.", "The list of frames that we used is shown in Table 1 along with the frequency with which they occur in our gold standard data set, as described in the next section.", "To create a gold standard data set with frame assignments for the physical artifacts, we recruited 3 human annotators.", "We presented the annotators with the WordNet definition for each term and asked them to select one frame that captures the most prototypical use for the artifact.", "In addition to the 42 function frames, we also gave them a None option if none of the frames was a good match, and a Not an artifact option if the term was not in fact a human-made physical artifact (because our list extracted from WordNet and Brysbaert et al. (2014) was not perfect).",
"To prepare the annotators, we asked them to read the definitions of all the frames beforehand and we gave them detailed annotation guidelines to familiarize them with the task.", "We randomly sorted the artifacts before presenting them to the annotators.", "When the annotations were finished, we measured the pair-wise inter-annotator agreement (IAA) using Cohen's kappa.", "The IAA scores were 0.75, 0.72 and 0.69, with an average of κ = 0.72.", "Given the difficulty of this task (44 possible labels), we felt that the human agreement was relatively good.", "Finally, we created the gold standard data set by using the majority label from the three human annotators.", "There were 72 artifacts with no majority label (i.e., the annotators assigned 3 different labels), and 7 terms with the majority label Not an artifact, so we discarded these 79 terms.", "The data set is available at: https://github.com/tyjiangU/physical_artifacts_function", "Consequently, our gold standard data set contains 938 physical artifacts that are each labeled with a frame representing its most prototypical function, or labeled as None when none of our 42 frames was appropriate.", "83 terms were assigned to the None category.", "Table 2 shows the 10 most frequently assigned frames and a few examples of artifacts assigned to each frame.", "Table 2: Examples of artifacts for the top 10 frames. Wearing: hat, shirt; Containing: basket, luggage; Self motion: bicycle, yacht; Protecting: armor, helmet; Supporting: chair, scaffolding; Cause harm: cannon, spear; Perception exp: earphone, eyeglass; Make noise: bell, violin; Cause motion: engine, propeller; Cutting: knife, scissors.",
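As a rough illustration of the agreement computation and majority-vote labeling described above, here is a hedged sketch; scikit-learn's cohen_kappa_score is our implementation choice, and the toy annotations are invented.

```python
# Pair-wise Cohen's kappa over three annotators, plus majority-vote gold
# labels; items with no majority label are marked for discarding.
from collections import Counter
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def agreement_and_gold(annotations):
    """annotations: list of 3 parallel label lists, one per annotator."""
    kappas = [cohen_kappa_score(a, b) for a, b in combinations(annotations, 2)]
    gold = []
    for labels in zip(*annotations):
        top, count = Counter(labels).most_common(1)[0]
        gold.append(top if count >= 2 else None)  # None: no majority -> discard
    return kappas, gold

ann = [['Cutting', 'Wearing', 'None'],
       ['Cutting', 'Wearing', 'Supporting'],
       ['Cutting', 'Protecting', 'Sleep']]
print(agreement_and_gold(ann))
```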
"We explored several approaches for learning the prototypical functions of human-made physical artifacts.", "To assess the difficulty of this task, we first present baseline models that 1) exploit information extracted from existing knowledge bases and 2) use co-occurrence information extracted from a text corpus.", "Next, we explore methods that use large neural language models.", "We describe a method that uses masked pattern predictions, and then present models that also incorporate artifact sense definitions and frame definitions.", "We model our task as a multiclass classification problem.", "The artifacts and frames are denoted as a_i (i = 1..m) and f_j (j = 1..n).", "The task is to select the f_j that represents the most prototypical use for an artifact a_i.", "We will denote the set of lexical units for f_j in FrameNet as LU_j = { l_k | l_k evokes f_j }.", "We merged lexical units from similar frames in FrameNet; see details in Appendix A.", "ConceptNet (Speer et al., 2017) is a well-known commonsense knowledge resource that contains a UsedFor relation, which is potentially relevant to our task (though it should be noted that an object can be used in ways that are not prototypical, so our task of identifying the prototypical use is not exactly the same).", "COMET (Bosselut et al., 2019) is a framework that was trained on ConceptNet with the goal of improving upon its coverage.", "Our first experiments apply these resources to see how effective they can be for this task.", "For each artifact in ConceptNet, we extract the first word from each phrase listed under its UsedFor relation.", "These are typically verbs that describe an action, although sometimes they are nouns.", "For COMET, we use its beam-10 setting to generate 10 phrases of the UsedFor relation for each artifact.", "Next, we want to use the extracted words to rank candidate frames.", "FrameNet defines lexical units that can evoke a specific frame.", "For example, read can trigger the Reading activity frame.", "Suppose our artifact is a book and one of the extracted words is read; then Reading activity is a candidate frame.", "We then score each frame based on the overlap between the words extracted from ConceptNet or COMET and the frame's lexical units.", "Specifically, we define freq(a_i, w) as the count of a word w occurring in the UsedFor relation of artifact a_i, and I(w, f_j) = 1 if w ∈ LU_j, otherwise 0.", "Then our score for f_j is defined as: $S_{cn}(a_i, f_j) = \sum_{w \in W} \mathrm{freq}(a_i, w) \cdot I(w, f_j)$ (1), where W is the set of extracted words.", "Finally, for each a_i, we select f_{j'} such that $j' = \arg\max_j S_{cn}(a_i, f_j)$ as its prototypical function.", "If S_cn(a_i, f_{j'}) equals zero, then we predict None.", "An intuitive idea for potentially learning common functions associated with physical artifacts is to extract verbs that frequently co-occur with the artifact in a large text corpus.", "We assume that if a verb frequently co-occurs with an artifact, then the frames associated with the verb are plausible candidates for the artifact's prototypical function.", "For this approach, we created 3 dependency parse patterns to extract <noun, verb> pairs, as depicted in Figure 1.", "[Figure 1: Dependency patterns used for co-occurrence extraction: use N to V (dobj, xcomp); V N (dobj); V ADP N (prep, pobj).]", "The physical object is the noun, represented by N.", "The activity is a verb (with an appended particle if one exists), represented by V.", "We included the verb-dobj pattern because some artifacts and their functions are expressed in this way, such as read book or wear jacket.", "We used spaCy (https://spacy.io/) to parse the whole English Wikipedia corpus (as of Feb 20, 2020) and extracted over 3.8 million <N, V> pairs (305,055 distinct pairs) for our 938 artifacts.", "We define the function freq(a_i, v) as the co-occurrence count of artifact a_i and verb v in the corpus.", "Then we apply the same method described in Section 5.2 to assign a score to each frame based on the extracted verbs and select the best frame.",
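Both the ConceptNet/COMET baseline and the co-occurrence baseline reduce to the same scoring rule, Eq. (1): sum the frequencies of extracted words that are lexical units of the candidate frame, take the argmax, and back off to None when the best score is zero. A minimal sketch with invented toy data:

```python
# Frame scoring per Eq. (1): for one artifact, score each candidate frame by
# summing the counts of extracted words that appear among its lexical units.
from collections import Counter

def score_frames(word_freq, frame_lus):
    """word_freq: Counter of words extracted for one artifact.
       frame_lus: dict mapping frame name -> set of lexical units."""
    scores = {frame: sum(c for w, c in word_freq.items() if w in lus)
              for frame, lus in frame_lus.items()}
    best = max(scores, key=scores.get)
    # Predict None when no extracted word matches any lexical unit.
    return best if scores[best] > 0 else None

knife_words = Counter({'cut': 5, 'stab': 2, 'carve': 1})
frames = {'Cutting': {'cut', 'slice', 'carve'}, 'Cause_harm': {'stab', 'hit'}}
print(score_frames(knife_words, frames))  # -> Cutting
```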
"Co-occurrence in text is a strong signal of correlation.", "But an activity that is highly correlated with an artifact may not be its prototypical use.", "For example, cut frequently co-occurs with rope, but the purpose of a rope is not to be cut; its prototypical use is for attaching things.", "Recent work has successfully used masked language models to learn commonsense knowledge (Davison et al., 2019), so we explored whether masked language models could be beneficial for our task.", "We use the BERT (Devlin et al., 2019) masked language model to get prediction scores for every (a_i, l_k) pair, where a_i is one of our physical artifacts and l_k is a lexical unit linked to one of our 42 candidate frames.", "We defined 6 sentence templates that represent expressions describing what an object is used for, which are shown below.", "The first blank space is for artifact a_i and the second blank space is for action l_k.", "(1) ___ can be used to ___.", "(2) I used ___ to ___.", "(3) ___ can be used for ___.", "(4) I used ___ for ___.", "(5) The purpose of ___ is to ___.", "(6) If I had ___, I could ___.", "Next, we produced a probability distribution over all of the lexical units based on the second blank position.", "Specifically, for the t-th sentence template s_t, we obtain Pr(l_k | s_t, a_i) by masking only the second blank space (a_i is inserted into the first blank) and we obtain Pr(l_k | s_t) by masking both blank spaces.", "Then we define the score of l_k as the typical use of artifact a_i based on the t-th template as: $U(a_i, l_k, s_t) = \log \Pr(l_k \mid s_t, a_i) - \log \Pr(l_k \mid s_t)$ (2).", "The score U(a_i, l_k) using all templates is computed as: $U(a_i, l_k) = \frac{1}{T} \sum_{t=1}^{T} U(a_i, l_k, s_t)$, where T = 6 is the number of templates.", "Finally, we define the score for f_j being the prototypical function for a_i as: $S_{mlm}(a_i, f_j) = \sum_{l_k \in LU_j} U(a_i, l_k)$ (3).", "Our MLM baseline uses the discrete output of the masked language model (i.e., the prediction tokens from the vocabulary and their scores).", "In order to take advantage of a language model's fine-tuning capability, we use the same architecture as described in Section 5.4, except that instead of using the predicted lexical units and their probability Pr(l_k | s_t, a_i), we retrieve the last hidden state vector for the [MASK] token as output.", "Since there are 6 masked templates, we have 6 output vectors for each artifact a_i.", "We compute the average of these vectors and pass it through a linear layer and a softmax layer to produce a probability distribution over all candidate frames plus None.", "Figure 2 shows the overview of this architecture, which we will call the PF_mask model.", "[Figure 2: The PF_mask architecture; a masked template such as [CLS] A knife can be used to [MASK] . is encoded by BERT, and the [MASK] outputs feed a linear layer.]", "We will refer to the final score for artifact a_i with respect to frame f_j as S_mask(a_i, f_j).", "The loss function is defined as: $L = -\sum_{i=1}^{n} \log S_{mask}(a_i, f_{j^*})$ (4), where $f_{j^*}$ is the gold label for a_i.",
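A hedged sketch of the masked-template scoring of Eq. (2), using the HuggingFace transformers library (our implementation choice; the paper only says it uses BERT). It assumes single-token lexical units such as "cut"; averaging over the six templates and summing over a frame's lexical units (Eq. 3) would be thin wrappers around u_score.

```python
# Eq. (2): contrast the artifact-conditioned probability of a lexical unit
# with its probability when the artifact blank is also masked.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tok = BertTokenizer.from_pretrained('bert-base-uncased')
mlm = BertForMaskedLM.from_pretrained('bert-base-uncased')
mlm.eval()

def mask_log_prob(sentence, word):
    # log Pr(word | sentence); the action slot is the LAST [MASK] token.
    ids = tok(sentence, return_tensors='pt')['input_ids']
    pos = (ids[0] == tok.mask_token_id).nonzero()[-1].item()
    with torch.no_grad():
        logits = mlm(ids).logits[0, pos]
    log_probs = torch.log_softmax(logits, dim=-1)
    return log_probs[tok.convert_tokens_to_ids(word)].item()

def u_score(artifact, lexical_unit, template='{} can be used to {}.'):
    primed = template.format('a ' + artifact, tok.mask_token)
    blank = template.format(tok.mask_token, tok.mask_token)
    return mask_log_prob(primed, lexical_unit) - mask_log_prob(blank, lexical_unit)

print(u_score('knife', 'cut'))  # expected to exceed, e.g., u_score('knife', 'wear')
```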
.", "The definition often provides a short and precise sentence that describes what the artifact is as well as what it is typically used for.", "FrameNet also provides a definition for each frame.", "For example, the definition of the Cutting frame is An Agent cuts a Item into Pieces using an Instrument .", "Jiang and Riloff (2021) exploited both frame and lexical unit definitions for the frame identification task in a model that assesses the semantic coherence between the meaning of a target word in a sentence and a candidate frame.", "Similarly, we hypothesized that a model could potentially learn the semantic relatedness between the definitions of a physical artifact and the frame that describes its typical function.", "To investigate this idea, we used the BERT model (Devlin et al., 2019) as the base of our architecture and fine-tuned BERT for our task using both dictionary definitions of artifacts and frame definitions from FrameNet.", "Figure 3 shows the overview of this architecture, which we call the PF def model.", "Each large green block represents an artifact a i paired with one of the candidate frames.", "We encode WordNet's definition of the artifact as the first input sequence and the frame's definition from FrameNet as the second input sequence to BERT.", "We use the last hidden vector of the [CLS] token as the output.", "For each artifact a i , we have n + 1 such pairs where n is the number of candidate frames and 1 refers to the None option.", "On top of BERT's output, we apply a linear and a softmax layer to produce a probability distribution + Linear Concat ... ...", "over all candidate frames.", "We will refer to the final score for artifact a i with respect to frame f j as S def ( a i , f j ) .", "The loss function is defined as: L = n (cid:88) i =1 log S def ( a i , f j ) , (5) where f j is the gold label for a i .", "Our final model combines the idea of using both definitions and masked sentence patterns.", "Figure 4 depicts the combined P def + mask model.", "The left part is the PF def model which estimates the relatedness between artifact and frame definitions.", "Its output is a matrix of dimension ( # of frames, hidden vector size ).", "The right part is the PF mask model, which predicts the most probable frame for an artifact using our masked patterns.", "It produces a single output vector of dimension ( 1, hidden vector size ).", "We broadcast it across the rows to have the same dimension as ( # of frames, hidden vector size ) and then we concatenate the matrices of both models to pass through a linear layer before computing the loss.", "The model uses fine-tuning to jointly learn all parameters so that information from both models will optimally contribute to the final prediction.", "Our gold standard data set contains 938 artifacts that are each paired with one frame that represents its most prototypical use.", "We set aside 20% (188) of the data as a development set and used 80% (750) as the test set.", "We evaluated all of the learning models by performing 5-fold cross validation on the test set.", "We use the pre-trained uncased BERT-base model with the same settings as Devlin et al. 
"Our gold standard data set contains 938 artifacts that are each paired with one frame that represents its most prototypical use.", "We set aside 20% (188) of the data as a development set and used 80% (750) as the test set.", "We evaluated all of the learning models by performing 5-fold cross validation on the test set.", "We use the pre-trained uncased BERT-base model with the same settings as Devlin et al. (2019) and fine-tune BERT on the training data.", "We set the max sequence length to 200 and the batch size to 1, start the learning rate at 2e-5, and train for 10 epochs.", "All reported results are averaged over 3 runs.", "We report overall accuracy as well as precision, recall and F1 scores macro-averaged over the 43 class labels (42 frames + None).", "The first four rows in Table 3 show the performance of our four baseline methods.", "ConceptNet and the Co-occurrence model produced the lowest F1 scores.", "We see that ConceptNet has better precision but low recall because only about 1/3 of the artifacts in our data set have a UsedFor relation defined in ConceptNet.", "We also tried adding the CapableOf relation, which is defined as what an item can do, but it is even more sparse than UsedFor, and combining both relations only marginally increased recall.", "The performance of COMET shows that COMET does indeed improve upon the coverage of ConceptNet, although it sacrifices some precision.", "We also tried using the beam-5 and greedy settings of COMET, which produced higher precision but lower recall and F1 scores.", "Compared to COMET, the Co-occurrence baseline has higher accuracy but a much lower F1 score.", "The explanation is that the Co-occurrence model performs much better on frames that are associated with artifacts that are frequently mentioned in the corpus than on frames associated with less frequent artifacts.", "This is intuitive because, in general, we expect to extract a more representative sample of activities when we have more data.", "This phenomenon (accuracy much higher than F1) can also be observed in the MLM model, which uses a pre-trained language model that learns from large corpora, so it is not surprising that Co-occurrence and the MLM model behave similarly.", "In contrast, ConceptNet and COMET behave more consistently across the set of frames.", "The bottom section of Table 3 shows the results for our new models, which were trained specifically for this task.", "The PF_mask model achieves 58.5% accuracy and a 35.4% F1 score, which outperforms all of the baselines.", "The PF_def model performs substantially better, achieving 74.7% accuracy and a 59.3% F1 score.", "This result demonstrates that the definitions of the artifacts and the frames provide valuable information that a learner can benefit from.", "The last row shows the performance of the combined model, which performed better than the individual models.", "This model saw additional gains in both precision and recall, increasing the accuracy from 74.7% to 76.8% and the F1 score from 59.3% to 62.4%.", "To understand the degree to which the number of training instances for each frame correlated with performance, we divided the frames into two sets: high frequency frames assigned to ≥ 15 artifacts and low frequency frames assigned to < 15 artifacts.", "The results are shown in Figure 5, with the F1 scores from the PF_def+mask model displayed on the Y-axis.", "[Figure 5: F1 scores for high and low frequency frames.]", "We conclude that frames with more training instances generally showed better performance, so our model would likely further improve given more training data.", "Table 4: Sample output of the PF_def and PF_mask models. 1 scissors: PF_mask Cutting (correct), PF_def Cutting (correct); 2 hydrant: PF_mask Cause fluidic motion (correct), PF_def Cause temperature change (incorrect); 3 bed: PF_mask Supporting (incorrect), PF_def Sleep (correct); 4 helmet: PF_mask Wearing (incorrect), PF_def Protecting (correct); 5 snowplow: PF_mask Hunting (incorrect), PF_def Self Motion (incorrect).",
"Table 4 shows some examples of output from the PF_mask and PF_def models to compare their behavior.", "The correct predictions are marked in the table.", "Both models are correct for example 1.", "For example 2, only the PF_mask model is right, which indicates that the masked pattern can sometimes be more useful than the definition.", "For examples 3 and 4, PF_def was correct and PF_mask was wrong.", "The PF_mask model sometimes generates frames representing functions that are true but tangential.", "For example, beds do support us and helmets are worn, but these functions do not sufficiently characterize the objects (e.g., chairs also support us but are not typically used for sleeping, and jewelry is also worn but not used for protection).", "For example 5, both models are wrong; the correct frame is Removing.", "Though both are wrong, the PF_def model produces a more reasonable answer than the PF_mask model.", "We also observed that the MLM baseline sometimes produces seemingly random answers that are hard to explain.", "Finally, we investigated the 83 instances that were labeled as None to see what kind of artifacts fell into this category.", "The biggest cluster of related artifacts were 17 types of fabric, such as linen, silk and canvas.", "FrameNet does not include a frame for materials of this kind, probably because they are an ingredient for making clothes rather than tools themselves.", "Artifacts like toy were also labeled as None, presumably because toys are used in a general way (for play).", "This category also included some artifacts not tied to a single prototypical function but commonly used for many purposes (e.g., computer, laptop).", "We introduced the new task of learning prototypical functions for human-made physical artifacts, and used a subset of frames from FrameNet to represent the set of common functions.", "We also presented a manually annotated data set of 938 physical artifacts for this task.", "Our experiments showed that a transformer-based model using both artifact and frame definitions as well as masked pattern predictions outperforms several baseline methods.", "In future work, we hope to show the value of functional knowledge about objects for sentence-level understanding tasks as well as narrative document understanding.", "We thank Yuan Zhuang for his helpful comments on our work.", "We also thank the anonymous reviewers for their valuable suggestions and feedback." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "objective", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "method", "method", "objective", "objective", "abstain", "method", "method", "result", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "method", "other", "other", "method", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "other", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "result", "result", "other", "other" ]
[ "Building on Petroni et al. (2019), we propose two new probing tasks analyzing factual knowledge stored in Pretrained Language Models (PLMs).", "(1) Negation.", "We find that PLMs do not distinguish between negated (Birds cannot [MASK]) and non-negated (Birds can [MASK]) cloze questions.", "(2) Mispriming.", "Inspired by priming methods in human psychology, we add misprimes to cloze questions (Talk? Birds can [MASK]).", "We find that PLMs are easily distracted by misprimes.", "These results suggest that PLMs still have a long way to go to adequately learn human-like factual knowledge.", "PLMs like Transformer-XL (Dai et al., 2019), ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) have emerged as universal tools that capture a diverse range of linguistic and factual knowledge.", "Recently, Petroni et al. (2019) introduced LAMA (LAnguage Model Analysis) to investigate whether PLMs can recall factual knowledge that is part of their training corpus.", "Since the PLM training objective is to predict masked tokens, question answering (QA) tasks can be reformulated as cloze questions.", "For example, Who wrote Dubliners'? is reformulated as [MASK] wrote Dubliners'.", "In this setup, Petroni et al. (2019) show that PLMs outperform automatically extracted knowledge bases on QA.", "In this paper, we investigate this capability of PLMs in the context of (1) negation and what we call (2) mispriming .", "(1) Negation.", "To study the effect of negation on PLMs, we introduce the negated LAMA dataset .", "We insert negation elements (e.g., not) in LAMA cloze questions (e.g., The theory of relativity was not developed by [MASK].) this gives us posi-tive/negative pairs of cloze questions.", "Querying PLMs with these pairs and comparing the predictions, we find that the predicted fillers have high overlap.", "Models are equally prone to generate facts (Birds can fly) and their incorrect negation (Birds cannot fly).", "We find that BERT handles negation best among PLMs, but it still fails badly on most negated probes.", "In a second experiment, we show that BERT can in principle memorize both positive and negative facts correctly if they occur in training, but that it poorly generalizes to unseen sentences (positive and negative).", "However, after finetuning, BERT does learn to correctly classify unseen facts as true/false.", "(2) Mispriming.", "We use priming, a standard experimental method in human psychology (Tul-ving and Schacter, 1990) where a first stimulus (e.g., dog) can influence the response to a second stimulus (e.g., wolf in response to name an animal).", "Our novel idea is to use priming for probing PLMs , specifically mispriming : we give automatically generated misprimes to PLMs that would not mislead humans.", "For example, we add Talk? Birds can [MASK] to LAMA where Talk? 
"A human would ignore the misprime, stick to what she knows and produce a filler like fly.", "We show that, in contrast, PLMs are misled and fill in talk for the mask.", "We could have manually generated more natural misprimes.", "For example, the misprime regent of Antioch in Tancred, regent of Antioch, played a role in the conquest of [MASK] tricks BERT into choosing the filler Antioch (instead of Jerusalem).", "Our automatic misprimes are less natural, but automatic generation allows us to create a large misprime dataset for this initial study.", "Contribution.", "We show that PLMs' ability to learn factual knowledge is, in contrast to human capabilities, extremely brittle for negated sentences and for sentences preceded by distracting material (i.e., misprimes).", "Data and code will be published.", "LAMA's cloze questions are generated from subject-relation-object triples from knowledge bases (KBs) and question-answer pairs.", "For KB triples, cloze questions are generated, for each relation, by a templatic statement that contains variables X and Y for subject and object (e.g., X was born in Y).", "We then substitute the subject for X and MASK for Y.", "In a question-answer pair, we MASK the answer.", "LAMA is based on several sources: (i) Google-RE: 3 relations (place of birth, date of birth, place of death); (ii) T-REx (Elsahar et al., 2018): a subset of Wikidata triples covering 41 relations; (iii) ConceptNet (Li et al., 2016): 16 commonsense relations, where the underlying corpus provides matching statements to query PLMs; (iv) SQuAD (Rajpurkar et al., 2016): a subset of 305 context-insensitive questions, reworded as cloze questions.", "We use the source code provided by Petroni et al. (2019) and Wolf et al. (2019) to evaluate Transformer-XL large (Txl), ELMo original (Eb), ELMo 5.5B (E5B), BERT-base (Bb) and BERT-large (Bl).", "Negated LAMA.", "We created negated LAMA by manually inserting a negation element in each template or question.", "For ConceptNet we only consider an easy-to-negate subset (see appendix).", "Misprimed LAMA.", "We misprime LAMA by inserting an incorrect word and a question mark at the beginning of a statement; e.g., Talk? in Talk? Birds can [MASK].", "We only misprime questions that are answered correctly by BERT-large.", "To make sure the misprime is misleading, we manually remove correct primes for SQuAD and ConceptNet, and automatically remove primes that are the correct filler for a different instance of the same relation for T-REx and ConceptNet.", "We create four versions of misprimed LAMA (A, B, C, D) as described in the caption of Table 3; Table 1 gives examples.", "Table 1: Example queries for the four misprime versions. A: Dinosaurs? Munich is located in [MASK]. B: Somalia? Munich is located in [MASK]. C: Prussia? Munich is located in [MASK]. D: Prussia? This is great. ... What a surprise. Good to know. ... Munich is located in [MASK].",
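A minimal sketch of the misprime construction just described; the filler sentences for version D are illustrative, taken from the Table 1 examples.

```python
# Prepend an incorrect answer plus a question mark to a cloze statement;
# version D additionally inserts neutral sentences between misprime and query.
def misprime(query, wrong_answer, fillers=()):
    return ' '.join([wrong_answer + '?', *fillers, query])

def safe_misprime(query, wrong_answer, gold):
    # The paper removes primes that happen to be correct fillers.
    assert wrong_answer != gold, 'misprime must be an incorrect filler'
    return misprime(query, wrong_answer)

print(misprime('Munich is located in [MASK].', 'Dinosaurs'))  # version A style
print(misprime('Munich is located in [MASK].', 'Prussia',
               ['This is great.', 'Good to know.'] * 10))     # 20 distractor sentences
```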
...", "Munich is located in [MASK] .", "should not overlap, so high values indicate lack of understanding of negation.", "The two measures are complementary and yet agree very well.", "The correlation measure is sensitive in distinguishing cases where negation has a small effect from those where it has a larger effect.", "2 % overlap is a measure that is direct and easy to interpret.", "In most cases, > 85% ; overlap in rank 1 predictions is also high.", "ConcepNet results are most strongly correlated but TREx 1-1 results are less overlapping.", "Table 4 gives examples (lines marked N).", "BERT has slightly better results.", "Google-RE date of birth is an outlier because the pattern X (not born in [MASK]) rarely occurs in corpora and predictions are often nonsensical.", "In summary, PLMs poorly distinguish positive and negative sentences.", "We give two examples of the few cases where PLMs make correct predictions, i.e., they solve the cloze task as human subjects would.", "For The capital of X is not Y (TREX, 1-1) top ranked predictions are listed, known, mentioned (vs. cities for The capital of X is Y).", "This is appropriate since the predicted sentences are more common than sentences like The capital of X is not Paris.", "For X was born in Y, cities are predicted, but 2 A reviewer observes that spearman correlation is generally high and wonders whether high spearman correlation is really a reliable indicator of negation not changing the answer of the model.", "As a sanity check, we also randomly sampled, for each query correctly answered by BERT-large (e.g., Einstein born in [MASK]), another query with a different answer, but the same template relation (e.g., Newton born in [MASK]) and computed the spearman correlation between the predictions for the two queries.", "In general, these positive-positive spearman correlations were significantly lower than those between positive (Einstein born in [MASK]) and negative (Einstein not born in [MASK]) queries (t-test, p < 0 . 
"Balanced corpus.", "Investigating this further, we train BERT-base from scratch on a synthetic corpus.", "Hyperparameters are listed in the appendix.", "The corpus contains as many positive sentences of the form x_j is a_n as negative sentences of the form x_j is not a_n, where x_j is drawn from a set of 200 subjects S and a_n from a set of 20 adjectives A.", "The 20 adjectives form 10 pairs of antonyms (e.g., good/bad).", "S is divided into 10 groups g_m of 20.", "Finally, there is an underlying KB that defines valid adjectives for groups.", "For example, assume that g_1 has property a_m = good.", "Then for each x_i ∈ g_1, the sentences x_i is good and x_i is not bad are true.", "The training set is generated to contain all positive and negative sentences for 70% of the subjects.", "It also contains either only the positive sentences for the other 30% of subjects (in that case the negative sentences are added to the test set) or vice versa.", "Cloze questions are generated in the format x_j is [MASK] / x_j is not [MASK].", "We test whether (i) BERT memorizes positive and negative sentences seen during training, and (ii) it generalizes to the test set.", "As an example, a correct generalization would be x_i is not bad if x_i is good was part of the training set.", "The question is: does BERT learn, based on the patterns of positive/negative sentences and within-group regularities, to distinguish facts from non-facts?", "Table 5 (pretrained BERT) shows that BERT memorizes positive and negative sentences, but poorly generalizes to the test set for both positive and negative.", "The learning curves (see appendix) show that this is not due to overfitting the training data.", "While the training loss rises, the test precision fluctuates around a plateau.", "However, if we finetune BERT (finetuned BERT) on the task of classifying sentences as true/false, its test accuracy is 100%.", "(Recall that false sentences simply correspond to true sentences with a not inserted or removed.)", "So BERT easily learns negation if supervision is available, but fails without it.", "This experiment demonstrates the difficulty of learning negation through unsupervised pretraining.", "We suggest that the inability of pretrained BERT to distinguish true from false is a serious impediment to accurately handling factual knowledge.",
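A hedged sketch of the synthetic corpus just described: 200 subjects in 10 groups, 10 antonym pairs, and one licensed adjective per pair and group. Only good/bad is given in the paper; the other antonym pairs below are invented placeholders, and the 70/30 subject split into train and test is omitted.

```python
# Generate the balanced positive/negative corpus: for each subject and each
# antonym pair, emit the true positive sentence and the true negated sentence.
import random

ANTONYMS = [('good', 'bad'), ('big', 'small'), ('hot', 'cold'),
            ('fast', 'slow'), ('new', 'old'), ('hard', 'soft'),
            ('light', 'dark'), ('rich', 'poor'), ('loud', 'quiet'),
            ('clean', 'dirty')]

def build_corpus(seed=0):
    rng = random.Random(seed)
    sentences = []
    for g in range(10):                      # 10 groups g_m
        # The underlying KB: each group licenses one adjective per antonym pair.
        truths = [rng.choice(pair) for pair in ANTONYMS]
        for s in range(20):                  # 20 subjects per group
            subj = f'x{g * 20 + s}'
            for (a, b), true_adj in zip(ANTONYMS, truths):
                false_adj = b if true_adj == a else a
                sentences.append(f'{subj} is {true_adj}')
                sentences.append(f'{subj} is not {false_adj}')
    return sentences

corpus = build_corpus()
print(len(corpus), corpus[:2])  # 4000 sentences
```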
"Misprimed LAMA.", "Table 3 shows the effect of mispriming on BERT-large for questions answered correctly in original LAMA; recall that Table 1 gives examples of sentences constructed in modes A, B, C and D.", "Table 3: Absolute precision drop (from 100%, lower better) when mispriming BERT-large for the LAMA subset that was answered correctly in its original form. Columns are Facts, A, B, C, D. Google-RE birth-place: 386, 11.7, 44.7, 99.5, 98.4; birth-date: 25, 72.0, 91.7, 100.0, 88.0; death-place: 88, 14.8, 47.1, 98.9, 98.9. T-REx 1-1: 661, 12.7, 20.6, 30.1, 28.1; N-1: 7034, 22.1, 48.3, 59.9, 41.2; N-M: 2774, 26.6, 55.3, 58.7, 43.9. ConceptNet: 146, 52.1, 59.6, 82.9, 70.6. SQuAD: 51, 33.3, -, 68.6, 60.8.", "In most cases, mispriming with a highly ranked incorrect object causes a precision drop of over 60% (C).", "Example predictions can be found in Table 4 (lines marked M).", "[Table 4 (excerpt): columns are cloze question, true answer, and top 3 generated words with log probabilities; e.g., Google-RE: Marcel Oopa died in the city of [MASK].]", "This sensitivity to misprimes still exists when the distance between misprime and cloze question is increased: the drop persists when 20 sentences are inserted (D).", "Striking are the results for Google-RE, where the model recalls almost no facts (C).", "Table 4 (lines marked M) shows predicted fillers for these misprimed sentences.", "BERT is less, but still badly, affected by misprimes that match selectional restrictions (B).", "The model is more robust against priming with random words (A): the precision drop is on average more than 35% lower than for (D).", "We included the baseline (A) as a sanity check for the precision drop measure.", "These baseline results show that the presence of a misprime per se does not confuse the model; a less distracting misprime (a different type of entity or a completely implausible answer) often results in a correct answer by BERT.", "Whereas Petroni et al. (2019)'s results suggest that PLMs are able to memorize facts, our results indicate that PLMs largely do not learn the meaning of negation.", "They mostly seem to predict fillers based on co-occurrence of subject (e.g., Quran) and filler (religious) and to ignore negation.", "A key problem is that in the LAMA setup, not answering (i.e., admitting ignorance) is not an option.", "While the prediction probability generally is somewhat lower for the negated than for the positive answer, there is no threshold across cloze questions that could be used to distinguish valid positive from invalid negative answers (cf. Table 4).", "We suspect that a possible explanation for PLMs' poor performance is that negated sentences occur much less frequently in training corpora.", "Our synthetic corpus study (Table 5) shows that BERT is able to memorize negative facts that occur in the corpus.", "However, the PLM objective encourages the model to predict fillers based on similar sentences in the training corpus, and if the most similar statement to a negative sentence is positive, then the filler is generally incorrect.", "However, after finetuning, BERT is able to classify truth/falseness correctly, demonstrating that negation can be learned through supervised training.", "The mispriming experiment shows that BERT often handles random misprimes correctly (Table 3 A).", "There are also cases where BERT does the right thing for difficult misprimes, e.g., it robustly attributes religious to Quran (Table 4).", "In general, however, BERT is highly sensitive to misleading context (Table 3 C) that would not change human behavior in QA.", "It is especially striking that a single word suffices to distract BERT.", "This may suggest that it is not knowledge that is learned by BERT, but that its performance is mainly based on similarity matching between the current context on the one hand and sentences in its training corpus and/or recent context on the other hand.", "Poerner et al. (2019) present a similar analysis.",
"Our work is a new way of analyzing differences between PLMs and human-level natural language understanding.", "We should aspire to develop PLMs that, like humans, can handle negation and are not easily distracted by misprimes.", "PLMs are top performers for many tasks, including QA (Kwiatkowski et al., 2019; Alberti et al., 2019).", "PLMs are usually finetuned (Liu et al., 2019; Devlin et al., 2019), but recent work has applied models without finetuning (Radford et al., 2019; Petroni et al., 2019).", "Bosselut et al. (2019) investigate PLMs' common sense knowledge, but do not consider negation explicitly or priming.", "A wide range of literature analyzes linguistic knowledge stored in pretrained embeddings (Jumelet and Hupkes, 2018; Gulordava et al., 2018; Giulianelli et al., 2018; McCoy et al., 2019; Dasgupta et al., 2018; Marvin and Linzen, 2018; Warstadt and Bowman, 2019; Kann et al., 2019).", "Our work analyzes factual knowledge.", "McCoy et al. (2019) show that BERT finetuned to perform natural language inference heavily relies on syntactic heuristics, also suggesting that it is not able to adequately acquire common sense.", "Warstadt et al. (2019) investigate BERT's understanding of how negative polarity items are licensed.", "Our work, focusing on factual knowledge stored in negated sentences, is complementary, since grammaticality and factuality are mostly orthogonal properties.", "Kim et al. (2019) investigate understanding of negation particles when PLMs are finetuned.", "In contrast, our focus is on the interaction of negation and factual knowledge learned in pretraining.", "Ettinger (2019) defines and applies psycholinguistic diagnostics for PLMs.", "Our use of priming is complementary.", "Their data consists of two sets of 72 and 16 sentences, whereas we create 42,867 negated sentences covering a wide range of topics and relations.", "Related probing work modifies the setup while trying to keep the overall semantics the same.", "In contrast, we investigate large changes of meaning (negation) and context (mispriming).", "In contrast to adversarial work (e.g., Wallace et al., 2019), we do not focus on adversarial examples for a specific task, but on pretrained models' ability to robustly store factual knowledge.", "Our results suggest that pretrained language models address open domain QA in datasets like LAMA by mechanisms that are more akin to relatively shallow pattern matching than the recall of learned factual knowledge and inference.", "Implications for future work on pretrained language models.", "(i) Both factual knowledge and logic are discrete phenomena in the sense that sentences with similar representations in current pretrained language models differ sharply in factuality and truth value (e.g., Newton was born in 1641 vs. Newton was born in 1642).",
"Further architectural innovations in deep learning seem necessary to deal with such discrete phenomena.", "(ii) We found that PLMs have difficulty distinguishing informed best guesses (based on information extracted from training corpora) from random best guesses (made in the absence of any evidence in the training corpora).", "This implies that better confidence assessment of PLM predictions is needed.", "(iii) Our premise was that we should emulate human language processing and that therefore tasks that are easy for humans are good tests for NLP models.", "To the extent this is true, the two phenomena we have investigated in this paper, that PLMs seem to ignore negation in many cases and that they are easily confused by simple distractors, seem to be good vehicles for encouraging the development of PLMs whose performance on NLP tasks is closer to humans.", "Acknowledgements.", "We thank the reviewers for their constructive criticism.", "This work was funded by the German Federal Ministry of Education and Research (BMBF) under Grant No. 01IS18036A and by the European Research Council (Grant No. 740516).", "The authors of this work take full responsibility for its content." ]
[ "abstain", "abstain", "result", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "objective", "result", "abstain", "result", "result", "abstain", "abstain", "objective", "objective", "method", "abstain", "objective", "method", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "method", "other", "other", "method", "other", "objective", "other", "abstain", "method", "other", "objective", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method" ]